DPDK patches and discussions
* RE: [EXT] Re: [PATCH 02/13] security: add MACsec packet number threshold
  2023-05-24  7:12  0%       ` [EXT] " Akhil Goyal
@ 2023-05-24  8:09  3%         ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2023-05-24  8:09 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
	Vamsi Krishna Attunuru, ferruh.yigit, Jerin Jacob Kollanukkaran,
	Ankur Dwivedi

> Subject: RE: [EXT] Re: [PATCH 02/13] security: add MACsec packet number
> threshold
> 
> > On Wed, 24 May 2023 01:19:07 +0530
> > Akhil Goyal <gakhil@marvell.com> wrote:
> >
> > > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > > index c7a523b6d6..30bac4e25a 100644
> > > --- a/lib/security/rte_security.h
> > > +++ b/lib/security/rte_security.h
> > > @@ -399,6 +399,8 @@ struct rte_security_macsec_sa {
> > >  struct rte_security_macsec_sc {
> > >  	/** Direction of SC */
> > >  	enum rte_security_macsec_direction dir;
> > > +	/** Packet number threshold */
> > > +	uint64_t pn_threshold;
> > >  	union {
> > >  		struct {
> > >  			/** SAs for each association number */
> > > @@ -407,8 +409,10 @@ struct rte_security_macsec_sc {
> > >  			uint8_t sa_in_use[RTE_SECURITY_MACSEC_NUM_AN];
> > >  			/** Channel is active */
> > >  			uint8_t active : 1;
> > > +			/** Extended packet number is enabled for SAs */
> > > +			uint8_t is_xpn : 1;
> > >  			/** Reserved bitfields for future */
> > > -			uint8_t reserved : 7;
> > > +			uint8
> >
> > Is this an ABI change? If so needs to wait for 23.11 release
> rte_security_macsec_sc/sa_create are experimental APIs. So, it won't be an
> issue I believe.
Looking at the ABI issues reported for this patchset:
even if these APIs are experimental, we cannot really change them,
as all are part of rte_security_ctx, which is exposed.
But the user is not required to know its contents and it should not be exposed.
In the next release I would make it internal, like rte_security_session.
For now, I would defer this MACsec support to the next release.
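
As background for the ABI concern (a deliberately hypothetical
illustration, not the actual rte_security layout): once a structure is
part of the exposed ABI, adding a member changes its size and the
offsets of the members that follow it, so an application built against
the old definition and a library built against the new one disagree.

#include <stdint.h>

/* Hypothetical structs, only to illustrate the concern. */
struct sc_v1 {                  /* old definition */
	uint32_t dir;
	uint32_t flags;
};

struct sc_v2 {                  /* new definition: bigger, "flags" moves */
	uint32_t dir;
	uint64_t pn_threshold;  /* newly inserted member */
	uint32_t flags;
};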


^ permalink raw reply	[relevance 3%]

* RE: [EXT] Re: [PATCH 02/13] security: add MACsec packet number threshold
  2023-05-23 21:29  3%     ` Stephen Hemminger
@ 2023-05-24  7:12  0%       ` Akhil Goyal
  2023-05-24  8:09  3%         ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2023-05-24  7:12 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: dev, thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
	Vamsi Krishna Attunuru, ferruh.yigit, Jerin Jacob Kollanukkaran,
	Ankur Dwivedi

> On Wed, 24 May 2023 01:19:07 +0530
> Akhil Goyal <gakhil@marvell.com> wrote:
> 
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index c7a523b6d6..30bac4e25a 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -399,6 +399,8 @@ struct rte_security_macsec_sa {
> >  struct rte_security_macsec_sc {
> >  	/** Direction of SC */
> >  	enum rte_security_macsec_direction dir;
> > +	/** Packet number threshold */
> > +	uint64_t pn_threshold;
> >  	union {
> >  		struct {
> >  			/** SAs for each association number */
> > @@ -407,8 +409,10 @@ struct rte_security_macsec_sc {
> >  			uint8_t sa_in_use[RTE_SECURITY_MACSEC_NUM_AN];
> >  			/** Channel is active */
> >  			uint8_t active : 1;
> > +			/** Extended packet number is enabled for SAs */
> > +			uint8_t is_xpn : 1;
> >  			/** Reserved bitfields for future */
> > -			uint8_t reserved : 7;
> > +			uint8
> 
> Is this an ABI change? If so needs to wait for 23.11 release
rte_security_macsec_sc/sa_create are experimental APIs. So, it won't be an issue I believe.


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v6 04/15] graph: add get/set graph worker model APIs
  @ 2023-05-24  6:08  3%     ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-24  6:08 UTC (permalink / raw)
  To: Zhirun Yan
  Cc: dev, jerinj, kirankumark, ndabilpuram, stephen, pbhagavatula,
	cunming.liang, haiyue.wang

On Tue, May 9, 2023 at 11:34 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
>
> Add new get/set APIs to configure graph worker model which is used to
> determine which model will be chosen.
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> ---
> diff --git a/lib/graph/rte_graph_worker.c b/lib/graph/rte_graph_worker.c
> new file mode 100644
> index 0000000000..cabc101262
> --- /dev/null
> +++ b/lib/graph/rte_graph_worker.c
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2023 Intel Corporation
> + */
> +
> +#include "rte_graph_worker_common.h"
> +
> +RTE_DEFINE_PER_LCORE(enum rte_graph_worker_model, worker_model) = RTE_GRAPH_MODEL_DEFAULT;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + * Set the graph worker model

Just declaring this at the top of the header file is enough to avoid
duplicating it in every function, as all functions in the header are
experimental. See lib/graph/rte_graph.h.
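
A sketch of that file-level note (wording modeled loosely on
lib/graph/rte_graph.h; treat it as an assumption, not a quote):

/**
 * @file rte_graph_worker_common.h
 *
 * @warning
 * @b EXPERIMENTAL:
 * All functions in this file may be changed or removed without prior notice.
 */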


> + *
> + * @note This function does not perform any locking, and is only safe to call
> + *    before graph running.
> + *
> + * @param name
> + *   Name of the graph worker model.
> + *
> + * @return
> + *   0 on success, -1 otherwise.
> + */
> +int
> +rte_graph_worker_model_set(enum rte_graph_worker_model model)
> +{
> +       if (model >= RTE_GRAPH_MODEL_LIST_END)
> +               goto fail;
> +
> +       RTE_PER_LCORE(worker_model) = model;

The application needs to set this per core, right?
Are we anticipating a case where one core runs one model and another
core runs with another model?
If not, or if that is not practically possible, then, to make the
application programmer's life easy, we could loop through all lcores
and set it on all of them instead of the application setting it on
each one separately (see the sketch below).
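
A rough sketch of that suggestion (illustrative only, not the actual
patch): keep the model in an array indexed by lcore id instead of a
per-lcore TLS variable, so a single call covers every worker.

#include <rte_lcore.h>
#include "rte_graph_worker_common.h"  /* header introduced by this series */

/* Illustrative alternative storage: one slot per lcore. */
static enum rte_graph_worker_model worker_models[RTE_MAX_LCORE];

int
rte_graph_worker_model_set(enum rte_graph_worker_model model)
{
	unsigned int lcore_id;

	if (model >= RTE_GRAPH_MODEL_LIST_END)
		return -1;

	/* One call sets the same model for every configured lcore. */
	RTE_LCORE_FOREACH(lcore_id)
		worker_models[lcore_id] = model;

	return 0;
}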


> +       return 0;
> +
> +fail:
> +       RTE_PER_LCORE(worker_model) = RTE_GRAPH_MODEL_DEFAULT;
> +       return -1;
> +}
> +

> +/** Graph worker models */
> +enum rte_graph_worker_model {
> +       RTE_GRAPH_MODEL_DEFAULT,

Add Doxygen comment
> +       RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT,


Add Doxygen comment to explain what this mode does.


> +       RTE_GRAPH_MODEL_MCORE_DISPATCH,

Add Doxygen comment to explain what this mode does.

> +       RTE_GRAPH_MODEL_LIST_END

This can break the ABI if we add one in the middle. Please remove this.
See lib/cryptodev for how to handle _END symbols.
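
One common pattern (a sketch of the idea only; the lib/cryptodev code
referenced above is the authoritative example): keep the *_LIST_END
bound out of the public enum, since its numeric value changes whenever
a model is inserted before it, and track the bound internally.

/* Public header: no LIST_END value is exported. */
enum rte_graph_worker_model {
	RTE_GRAPH_MODEL_DEFAULT,
	RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT,
	RTE_GRAPH_MODEL_MCORE_DISPATCH,
};

/* Internal header only: the bound used for parameter validation. */
#define GRAPH_MODEL_MAX (RTE_GRAPH_MODEL_MCORE_DISPATCH + 1)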

^ permalink raw reply	[relevance 3%]

* [PATCH v5 5/5] ethdev: add MPLS header modification support
    2023-05-23 21:31  3%         ` [PATCH v5 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
@ 2023-05-23 21:31  2%         ` Michael Baum
  1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 21:31 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add support for MPLS modify header using "RTE_FLOW_FIELD_MPLS" id.

Since an MPLS header might appear more than once in inner/outer/tunnel,
a new field was added to "rte_flow_action_modify_data" structure in
addition to "level" field.
The "tag_index" field is the index of the header inside encapsulation
level. It is used for modify multiple MPLS headers in same encapsulation
level.

This addition enables modifying multiple VLAN headers too, so the
description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated.

Since the "tag_index" field is added, the "RTE_FLOW_FIELD_TAG" type
moves to use it for tag array instead of using "level" field.
Using "level" is still supported for backwards compatibility when
"tag_index" field is zero.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            |  24 +++++-
 doc/guides/prog_guide/rte_flow.rst     |  18 ++--
 doc/guides/rel_notes/release_23_07.rst |   8 +-
 drivers/net/mlx5/mlx5_flow.c           |  34 ++++++++
 drivers/net/mlx5/mlx5_flow.h           |  23 ++++++
 drivers/net/mlx5/mlx5_flow_dv.c        | 110 +++++++++++--------------
 drivers/net/mlx5/mlx5_flow_hw.c        |  21 +++--
 lib/ethdev/rte_flow.h                  |  51 ++++++++----
 8 files changed, 199 insertions(+), 90 deletions(-)
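
A hypothetical usage sketch of the new field (all values below are
illustrative, not taken from the patch): address the second MPLS header
of the outermost encapsulation level and overwrite its 20-bit label
from an immediate value.

#include <rte_flow.h>

/* Illustrative only: modify the 2nd MPLS header in the outermost level. */
static const struct rte_flow_action_modify_field mpls_set_label = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_MPLS,
		.level = 1,      /* outermost encapsulation level */
		.tag_index = 1,  /* second MPLS header in that level */
		.offset = 0,     /* start of the addressed field */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { 0x01, 0x23, 0x40 },  /* example immediate bytes */
	},
	.width = 20,  /* e.g. the 20-bit MPLS label */
};

static const struct rte_flow_action mpls_action = {
	.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
	.conf = &mpls_set_label,
};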

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8c1dea53c0..a51e37276b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,6 +636,7 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TAG_INDEX,
 	ACTION_MODIFY_FIELD_DST_TYPE_ID,
 	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -643,6 +644,7 @@ enum index {
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
 	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
 	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = {
 	"ipv6_proto",
 	"flex_item",
 	"hash_result",
-	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls",
 	NULL
 };
 
@@ -2301,6 +2303,7 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TAG_INDEX,
 	ACTION_MODIFY_FIELD_DST_TYPE_ID,
 	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -2310,6 +2313,7 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
 	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
 	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -6398,6 +6402,15 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TAG_INDEX] = {
+		.name = "dst_tag_index",
+		.help = "destination field tag array",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.tag_index)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
 		.name = "dst_type_id",
 		.help = "destination field type ID",
@@ -6451,6 +6464,15 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TAG_INDEX] = {
+		.name = "stc_tag_index",
+		.help = "source field tag array",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.tag_index)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
 		.name = "src_type_id",
 		.help = "source field type ID",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec812de335..e4328e7ed6 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2925,8 +2925,7 @@ See ``enum rte_flow_field_id`` for the list of supported fields.
 
 ``width`` defines a number of bits to use from ``src`` field.
 
-``level`` is used to access any packet field on any encapsulation level
-as well as any tag element in the tag array:
+``level`` is used to access any packet field on any encapsulation level:
 
 - ``0`` means the default behaviour. Depending on the packet type,
   it can mean outermost, innermost or anything in between.
@@ -2934,8 +2933,15 @@ as well as any tag element in the tag array:
 - ``2`` and subsequent values requests access to the specified packet
   encapsulation level, from outermost to innermost (lower to higher values).
 
-For the tag array (in case of multiple tags are supported and present)
-``level`` translates directly into the array index.
+``tag_index`` is the index of the header inside encapsulation level.
+It is used for modify either ``VLAN`` or ``MPLS`` or ``TAG`` headers which
+multiple of them might be supported in same encapsulation level.
+
+.. note::
+
+   For ``RTE_FLOW_FIELD_TAG`` type, the tag array was provided in ``level``
+   field and it is still supported for backwards compatibility.
+   When ``tag_index`` is zero, the tag array is taken from ``level`` field.
 
 ``type`` is used to specify (along with ``class_id``) the Geneve option which
 is being modified.
@@ -3011,7 +3017,9 @@ and provide immediate value 0xXXXX85XX.
    +=================+==========================================================+
    | ``field``       | ID: packet field, mark, meta, tag, immediate, pointer    |
    +-----------------+----------------------------------------------------------+
-   | ``level``       | encapsulation level of a packet field or tag array index |
+   | ``level``       | encapsulation level of a packet field                    |
+   +-----------------+----------------------------------------------------------+
+   | ``tag_index``   | tag index inside encapsulation level                     |
    +-----------------+----------------------------------------------------------+
    | ``type``        | geneve option type                                       |
    +-----------------+----------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index ce1755096f..fd3e35eea3 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,8 +84,12 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* The ``level`` field in experimental structure
-  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+* ethdev: in experimental structure ``struct rte_flow_action_modify_data``:
+
+  * ``level`` field was reduced to 8 bits.
+
+  * ``tag_index`` field replaced ``level`` field in representing tag array for
+    ``RTE_FLOW_FIELD_TAG`` type.
 
 
 ABI Changes
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 19f7f92717..867b7b8ea2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2318,6 +2318,40 @@ mlx5_validate_action_ct(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Validate the level value for modify field action.
+ *
+ * @param[in] data
+ *   Pointer to the rte_flow_action_modify_data structure either src or dst.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+flow_validate_modify_field_level(const struct rte_flow_action_modify_data *data,
+				 struct rte_flow_error *error)
+{
+	if (data->level == 0)
+		return 0;
+	if (data->field != RTE_FLOW_FIELD_TAG)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "inner header fields modification is not supported");
+	if (data->tag_index != 0)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "tag array can be provided using 'level' or 'tag_index' fields, not both");
+	/*
+	 * The tag array for RTE_FLOW_FIELD_TAG type is provided using
+	 * 'tag_index' field. In old API, it was provided using 'level' field
+	 * and it is still supported for backwards compatibility.
+	 */
+	DRV_LOG(WARNING, "tag array provided in 'level' field instead of 'tag_index' field.");
+	return 0;
+}
+
 /**
  * Validate ICMP6 item.
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1d116ea0f6..cba04b4f45 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1045,6 +1045,26 @@ flow_items_to_tunnel(const struct rte_flow_item items[])
 	return items[0].spec;
 }
 
+/**
+ * Gets the tag array given for RTE_FLOW_FIELD_TAG type.
+ *
+ * In old API the value was provided in "level" field, but in new API
+ * it is provided in "tag_array" field. Since encapsulation level is not
+ * relevant for metadata, the tag array can be still provided in "level"
+ * for backwards compatibility.
+ *
+ * @param[in] data
+ *   Pointer to tag modify data structure.
+ *
+ * @return
+ *   Tag array index.
+ */
+static inline uint8_t
+flow_tag_index_get(const struct rte_flow_action_modify_data *data)
+{
+	return data->tag_index ? data->tag_index : data->level;
+}
+
 /**
  * Fetch 1, 2, 3 or 4 byte field from the byte array
  * and return as unsigned integer in host-endian format.
@@ -2276,6 +2296,9 @@ int mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
 int mlx5_flow_validate_action_default_miss(uint64_t action_flags,
 				const struct rte_flow_attr *attr,
 				struct rte_flow_error *error);
+int flow_validate_modify_field_level
+			(const struct rte_flow_action_modify_data *data,
+			 struct rte_flow_error *error);
 int mlx5_flow_item_acceptable(const struct rte_flow_item *item,
 			      const uint8_t *mask,
 			      const uint8_t *nic_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f136f43b0a..3070f75ce8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1896,16 +1896,17 @@ mlx5_flow_field_id_to_modify_info
 	case RTE_FLOW_FIELD_TAG:
 		{
 			MLX5_ASSERT(data->offset + width <= 32);
+			uint8_t tag_index = flow_tag_index_get(data);
 			int reg;
 
-			off_be = (data->level == MLX5_LINEAR_HASH_TAG_INDEX) ?
+			off_be = (tag_index == MLX5_LINEAR_HASH_TAG_INDEX) ?
 				 16 - (data->offset + width) + 16 : data->offset;
 			if (priv->sh->config.dv_flow_en == 2)
 				reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG,
-							 data->level);
+							 tag_index);
 			else
 				reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG,
-							   data->level, error);
+							   tag_index, error);
 			if (reg < 0)
 				return;
 			MLX5_ASSERT(reg != REG_NON);
@@ -1985,7 +1986,7 @@ mlx5_flow_field_id_to_modify_info
 		{
 			uint32_t meta_mask = priv->sh->dv_meta_mask;
 			uint32_t meta_count = __builtin_popcount(meta_mask);
-			uint32_t reg = data->level;
+			uint8_t reg = flow_tag_index_get(data);
 
 			RTE_SET_USED(meta_count);
 			MLX5_ASSERT(data->offset + width <= meta_count);
@@ -5245,115 +5246,105 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_sh_config *config = &priv->sh->config;
 	struct mlx5_hca_attr *hca_attr = &priv->sh->cdev->config.hca_attr;
-	const struct rte_flow_action_modify_field *action_modify_field =
-		action->conf;
-	uint32_t dst_width, src_width;
+	const struct rte_flow_action_modify_field *conf = action->conf;
+	const struct rte_flow_action_modify_data *src_data = &conf->src;
+	const struct rte_flow_action_modify_data *dst_data = &conf->dst;
+	uint32_t dst_width, src_width, width = conf->width;
 
 	ret = flow_dv_validate_action_modify_hdr(action_flags, action, error);
 	if (ret)
 		return ret;
-	if (action_modify_field->src.field == RTE_FLOW_FIELD_FLEX_ITEM ||
-	    action_modify_field->dst.field == RTE_FLOW_FIELD_FLEX_ITEM)
+	if (src_data->field == RTE_FLOW_FIELD_FLEX_ITEM ||
+	    dst_data->field == RTE_FLOW_FIELD_FLEX_ITEM)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"flex item fields modification"
 				" is not supported");
-	dst_width = mlx5_flow_item_field_width(dev, action_modify_field->dst.field,
+	dst_width = mlx5_flow_item_field_width(dev, dst_data->field,
 					       -1, attr, error);
-	src_width = mlx5_flow_item_field_width(dev, action_modify_field->src.field,
+	src_width = mlx5_flow_item_field_width(dev, src_data->field,
 					       dst_width, attr, error);
-	if (action_modify_field->width == 0)
+	if (width == 0)
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"no bits are requested to be modified");
-	else if (action_modify_field->width > dst_width ||
-		 action_modify_field->width > src_width)
+	else if (width > dst_width || width > src_width)
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"cannot modify more bits than"
 				" the width of a field");
-	if (action_modify_field->dst.field != RTE_FLOW_FIELD_VALUE &&
-	    action_modify_field->dst.field != RTE_FLOW_FIELD_POINTER) {
-		if (action_modify_field->dst.offset +
-		    action_modify_field->width > dst_width)
+	if (dst_data->field != RTE_FLOW_FIELD_VALUE &&
+	    dst_data->field != RTE_FLOW_FIELD_POINTER) {
+		if (dst_data->offset + width > dst_width)
 			return rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"destination offset is too big");
-		if (action_modify_field->dst.level &&
-		    action_modify_field->dst.field != RTE_FLOW_FIELD_TAG)
-			return rte_flow_error_set(error, ENOTSUP,
-					RTE_FLOW_ERROR_TYPE_ACTION, action,
-					"inner header fields modification"
-					" is not supported");
+		ret = flow_validate_modify_field_level(dst_data, error);
+		if (ret)
+			return ret;
 	}
-	if (action_modify_field->src.field != RTE_FLOW_FIELD_VALUE &&
-	    action_modify_field->src.field != RTE_FLOW_FIELD_POINTER) {
+	if (src_data->field != RTE_FLOW_FIELD_VALUE &&
+	    src_data->field != RTE_FLOW_FIELD_POINTER) {
 		if (root)
 			return rte_flow_error_set(error, ENOTSUP,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"modify field action is not"
 					" supported for group 0");
-		if (action_modify_field->src.offset +
-		    action_modify_field->width > src_width)
+		if (src_data->offset + width > src_width)
 			return rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"source offset is too big");
-		if (action_modify_field->src.level &&
-		    action_modify_field->src.field != RTE_FLOW_FIELD_TAG)
-			return rte_flow_error_set(error, ENOTSUP,
-					RTE_FLOW_ERROR_TYPE_ACTION, action,
-					"inner header fields modification"
-					" is not supported");
+		ret = flow_validate_modify_field_level(src_data, error);
+		if (ret)
+			return ret;
 	}
-	if ((action_modify_field->dst.field ==
-	     action_modify_field->src.field) &&
-	    (action_modify_field->dst.level ==
-	     action_modify_field->src.level))
+	if ((dst_data->field == src_data->field) &&
+	    (dst_data->level == src_data->level))
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"source and destination fields"
 				" cannot be the same");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_VALUE ||
-	    action_modify_field->dst.field == RTE_FLOW_FIELD_POINTER ||
-	    action_modify_field->dst.field == RTE_FLOW_FIELD_MARK)
+	if (dst_data->field == RTE_FLOW_FIELD_VALUE ||
+	    dst_data->field == RTE_FLOW_FIELD_POINTER ||
+	    dst_data->field == RTE_FLOW_FIELD_MARK)
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"mark, immediate value or a pointer to it"
 				" cannot be used as a destination");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_START ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_START)
+	if (dst_data->field == RTE_FLOW_FIELD_START ||
+	    src_data->field == RTE_FLOW_FIELD_START)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"modifications of an arbitrary"
 				" place in a packet is not supported");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_VLAN_TYPE ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_VLAN_TYPE)
+	if (dst_data->field == RTE_FLOW_FIELD_VLAN_TYPE ||
+	    src_data->field == RTE_FLOW_FIELD_VLAN_TYPE)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"modifications of the 802.1Q Tag"
 				" Identifier is not supported");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_VXLAN_VNI ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_VXLAN_VNI)
+	if (dst_data->field == RTE_FLOW_FIELD_VXLAN_VNI ||
+	    src_data->field == RTE_FLOW_FIELD_VXLAN_VNI)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"modifications of the VXLAN Network"
 				" Identifier is not supported");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_GENEVE_VNI ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_GENEVE_VNI)
+	if (dst_data->field == RTE_FLOW_FIELD_GENEVE_VNI ||
+	    src_data->field == RTE_FLOW_FIELD_GENEVE_VNI)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"modifications of the GENEVE Network"
 				" Identifier is not supported");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_MARK ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_MARK)
+	if (dst_data->field == RTE_FLOW_FIELD_MARK ||
+	    src_data->field == RTE_FLOW_FIELD_MARK)
 		if (config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY ||
 		    !mlx5_flow_ext_mreg_supported(dev))
 			return rte_flow_error_set(error, ENOTSUP,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"cannot modify mark in legacy mode"
 					" or without extensive registers");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_META ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_META) {
+	if (dst_data->field == RTE_FLOW_FIELD_META ||
+	    src_data->field == RTE_FLOW_FIELD_META) {
 		if (config->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
 		    !mlx5_flow_ext_mreg_supported(dev))
 			return rte_flow_error_set(error, ENOTSUP,
@@ -5367,20 +5358,19 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
 					"cannot modify meta without"
 					" extensive registers available");
 	}
-	if (action_modify_field->operation == RTE_FLOW_MODIFY_SUB)
+	if (conf->operation == RTE_FLOW_MODIFY_SUB)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"sub operations are not supported");
-	if (action_modify_field->dst.field == RTE_FLOW_FIELD_IPV4_ECN ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_IPV4_ECN ||
-	    action_modify_field->dst.field == RTE_FLOW_FIELD_IPV6_ECN ||
-	    action_modify_field->src.field == RTE_FLOW_FIELD_IPV6_ECN)
+	if (dst_data->field == RTE_FLOW_FIELD_IPV4_ECN ||
+	    src_data->field == RTE_FLOW_FIELD_IPV4_ECN ||
+	    dst_data->field == RTE_FLOW_FIELD_IPV6_ECN ||
+	    src_data->field == RTE_FLOW_FIELD_IPV6_ECN)
 		if (!hca_attr->modify_outer_ip_ecn && root)
 			return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"modifications of the ECN for current firmware is not supported");
-	return (action_modify_field->width / 32) +
-	       !!(action_modify_field->width % 32);
+	return (width / 32) + !!(width % 32);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1b68a19900..39ea76c0c0 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1022,9 +1022,11 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 		    conf->dst.field == RTE_FLOW_FIELD_TAG ||
 		    conf->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
 		    conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+			uint8_t tag_index = flow_tag_index_get(&conf->dst);
+
 			value = *(const unaligned_uint32_t *)item.spec;
 			if (conf->dst.field == RTE_FLOW_FIELD_TAG &&
-			    conf->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+			    tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
 				value = rte_cpu_to_be_32(value << 16);
 			else
 				value = rte_cpu_to_be_32(value);
@@ -2055,9 +2057,11 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 	    mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
 	    mhdr_action->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
 	    mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+		uint8_t tag_index = flow_tag_index_get(&mhdr_action->dst);
+
 		value_p = (unaligned_uint32_t *)values;
 		if (mhdr_action->dst.field == RTE_FLOW_FIELD_TAG &&
-		    mhdr_action->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+		    tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
 			*value_p = rte_cpu_to_be_32(*value_p << 16);
 		else
 			*value_p = rte_cpu_to_be_32(*value_p);
@@ -3546,10 +3550,9 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 				     const struct rte_flow_action *mask,
 				     struct rte_flow_error *error)
 {
-	const struct rte_flow_action_modify_field *action_conf =
-		action->conf;
-	const struct rte_flow_action_modify_field *mask_conf =
-		mask->conf;
+	const struct rte_flow_action_modify_field *action_conf = action->conf;
+	const struct rte_flow_action_modify_field *mask_conf = mask->conf;
+	int ret;
 
 	if (action_conf->operation != mask_conf->operation)
 		return rte_flow_error_set(error, EINVAL,
@@ -3565,6 +3568,9 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"immediate value, pointer and hash result cannot be used as destination");
+	ret = flow_validate_modify_field_level(&action_conf->dst, error);
+	if (ret)
+		return ret;
 	if (mask_conf->dst.level != UINT8_MAX)
 		return rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ACTION, action,
@@ -3587,6 +3593,9 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 			return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"source offset level must be fully masked");
+		ret = flow_validate_modify_field_level(&action_conf->src, error);
+		if (ret)
+			return ret;
 	}
 	if (mask_conf->width != UINT32_MAX)
 		return rte_flow_error_set(error, EINVAL,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f30d4b033f..1df4b49219 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3740,8 +3740,8 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_START = 0,	/**< Start of a packet. */
 	RTE_FLOW_FIELD_MAC_DST,		/**< Destination MAC Address. */
 	RTE_FLOW_FIELD_MAC_SRC,		/**< Source MAC Address. */
-	RTE_FLOW_FIELD_VLAN_TYPE,	/**< 802.1Q Tag Identifier. */
-	RTE_FLOW_FIELD_VLAN_ID,		/**< 802.1Q VLAN Identifier. */
+	RTE_FLOW_FIELD_VLAN_TYPE,	/**< VLAN Tag Identifier. */
+	RTE_FLOW_FIELD_VLAN_ID,		/**< VLAN Identifier. */
 	RTE_FLOW_FIELD_MAC_TYPE,	/**< EtherType. */
 	RTE_FLOW_FIELD_IPV4_DSCP,	/**< IPv4 DSCP. */
 	RTE_FLOW_FIELD_IPV4_TTL,	/**< IPv4 Time To Live. */
@@ -3775,7 +3775,8 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
 	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
 	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
-	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA,	/**< GENEVE option data */
+	RTE_FLOW_FIELD_MPLS		/**< MPLS header. */
 };
 
 /**
@@ -3789,7 +3790,7 @@ struct rte_flow_action_modify_data {
 	RTE_STD_C11
 	union {
 		struct {
-			/** Encapsulation level or tag index or flex item handle. */
+			/** Encapsulation level and tag index or flex item handle. */
 			union {
 				struct {
 					/**
@@ -3820,20 +3821,38 @@ struct rte_flow_action_modify_data {
 					 *
 					 * Values other than @p 0 are not
 					 * necessarily supported.
+					 *
+					 * @note that for MPLS field,
+					 * encapsulation level also include
+					 * tunnel since MPLS may appear in
+					 * outer, inner or tunnel.
 					 */
 					uint8_t level;
-					/**
-					 * Geneve option type. relevant only
-					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
-					 * modification type.
-					 */
-					uint8_t type;
-					/**
-					 * Geneve option class. relevant only
-					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
-					 * modification type.
-					 */
-					rte_be16_t class_id;
+					union {
+						/**
+						 * Tag index array inside
+						 * encapsulation level.
+						 * Used for VLAN, MPLS or TAG
+						 * types.
+						 */
+						uint8_t tag_index;
+						/**
+						 * Geneve option identifier.
+						 * relevant only for
+						 * RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+						 * modification type.
+						 */
+						struct {
+							/**
+							 * Geneve option type.
+							 */
+							uint8_t type;
+							/**
+							 * Geneve option class.
+							 */
+							rte_be16_t class_id;
+						};
+					};
 				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
-- 
2.25.1


^ permalink raw reply	[relevance 2%]

* [PATCH v5 4/5] ethdev: add GENEVE TLV option modification support
  @ 2023-05-23 21:31  3%         ` Michael Baum
  2023-05-23 21:31  2%         ` [PATCH v5 5/5] ethdev: add MPLS header " Michael Baum
  1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 21:31 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add modify field support for GENEVE option fields:
 - "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
 - "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
 - "RTE_FLOW_FIELD_GENEVE_OPT_DATA"

Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to "rte_flow_action_modify_data" structure to
help specify which option to modify.

To get room for those 2 new fields, the "level" field moves to use
"uint8_t", which is more than enough for an encapsulation level.
This patch also reduces all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids compilation warnings caused by this API change.
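
A hypothetical sketch of how the new fields could be used (values are
illustrative): the destination's "class_id"/"type" pair selects which
GENEVE TLV option is modified, and the immediate value supplies the new
8-bit option type.

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Illustrative only: overwrite the option type of the GENEVE TLV option
 * identified by class 0x0102 / type 0x42. */
static const struct rte_flow_action_modify_field geneve_opt_set_type = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_TYPE,
		.class_id = RTE_BE16(0x0102),  /* which option: its class... */
		.type = 0x42,                  /* ...and its type */
		.offset = 0,
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { 0x55 },  /* new option type value */
	},
	.width = 8,
};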

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            | 48 +++++++++++++++++++++++++-
 doc/guides/prog_guide/rte_flow.rst     | 23 ++++++++++++
 doc/guides/rel_notes/release_23_07.rst |  3 ++
 drivers/net/mlx5/mlx5_flow_hw.c        | 22 ++++++------
 lib/ethdev/rte_flow.h                  | 48 +++++++++++++++++++++++++-
 5 files changed, 131 insertions(+), 13 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
 	"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
 	"ipv6_proto",
 	"flex_item",
-	"hash_result", NULL
+	"hash_result",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	NULL
 };
 
 static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+		.name = "dst_type_id",
+		.help = "destination field type ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+		.name = "dst_class",
+		.help = "destination field class ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     dst.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_OFFSET] = {
 		.name = "dst_offset",
 		.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+		.name = "src_type_id",
+		.help = "source field type ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+		.name = "src_class",
+		.help = "source field class ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     src.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
 		.name = "src_offset",
 		.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
 For the tag array (in case of multiple tags are supported and present)
 ``level`` translates directly into the array index.
 
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
 ``flex_handle`` is used to specify the flex item pointer which is being
 modified. ``flex_handle`` and ``level`` are mutually exclusive.
 
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
 specify destination width as 8, destination offset as 16, and provide immediate
 value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
 
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW in
+single action and align to 32 bits. For example, for modifying 16 bits start
+from offset 24, 2 different actions should be prepared. The first one includs
+``offset=24`` and ``width=8``, and the seconde one includs ``offset=32`` and
+``width=8``.
+Application should provide the data in immediate value memory only for the
+single DW even though the offset is related to start of first DW. For example,
+to replace the third byte of second DW in Geneve option data with value 0x85,
+application should specify destination width as 8, destination offset as 48,
+and provide immediate value 0xXXXX85XX.
+
 .. _table_rte_flow_action_modify_field:
 
 .. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
    +-----------------+----------------------------------------------------------+
    | ``level``       | encapsulation level of a packet field or tag array index |
    +-----------------+----------------------------------------------------------+
+   | ``type``        | geneve option type                                       |
+   +-----------------+----------------------------------------------------------+
+   | ``class_id``    | geneve option class ID                                   |
+   +-----------------+----------------------------------------------------------+
    | ``flex_handle`` | flex item handle of a packet field                       |
    +-----------------+----------------------------------------------------------+
    | ``offset``      | number of bits to skip at the beginning                  |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* The ``level`` field in experimental structure
+  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"immediate value, pointer and hash result cannot be used as destination");
-	if (mask_conf->dst.level != UINT32_MAX)
+	if (mask_conf->dst.level != UINT8_MAX)
 		return rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ACTION, action,
 			"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 				"destination field mask and template are not equal");
 	if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
 	    action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
-		if (mask_conf->src.level != UINT32_MAX)
+		if (mask_conf->src.level != UINT8_MAX)
 			return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = RTE_FLOW_FIELD_VLAN_ID,
-			.level = 0xffffffff, .offset = 0xffffffff,
+			.level = 0xff, .offset = 0xffffffff,
 		},
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_IPV6_PROTO,	/**< IPv6 next header. */
 	RTE_FLOW_FIELD_FLEX_ITEM,	/**< Flex item. */
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
+	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
+	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
 };
 
 /**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
 		struct {
 			/** Encapsulation level or tag index or flex item handle. */
 			union {
-				uint32_t level;
+				struct {
+					/**
+					 * Packet encapsulation level containing
+					 * the field modify to.
+					 *
+					 * - @p 0 requests the default behavior.
+					 *   Depending on the packet type, it
+					 *   can mean outermost, innermost or
+					 *   anything in between.
+					 *
+					 *   It basically stands for the
+					 *   innermost encapsulation level
+					 *   modification can be performed on
+					 *   according to PMD and device
+					 *   capabilities.
+					 *
+					 * - @p 1 requests modification to be
+					 *   performed on the outermost packet
+					 *   encapsulation level.
+					 *
+					 * - @p 2 and subsequent values request
+					 *   modification to be performed on
+					 *   the specified inner packet
+					 *   encapsulation level, from
+					 *   outermost to innermost (lower to
+					 *   higher values).
+					 *
+					 * Values other than @p 0 are not
+					 * necessarily supported.
+					 */
+					uint8_t level;
+					/**
+					 * Geneve option type. relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					uint8_t type;
+					/**
+					 * Geneve option class. relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					rte_be16_t class_id;
+				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
 			/** Number of bits to skip from a field. */
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH 02/13] security: add MACsec packet number threshold
  @ 2023-05-23 21:29  3%     ` Stephen Hemminger
  2023-05-24  7:12  0%       ` [EXT] " Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-05-23 21:29 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, thomas, olivier.matz, orika, david.marchand, hemant.agrawal,
	vattunuru, ferruh.yigit, jerinj, adwivedi

On Wed, 24 May 2023 01:19:07 +0530
Akhil Goyal <gakhil@marvell.com> wrote:

> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index c7a523b6d6..30bac4e25a 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -399,6 +399,8 @@ struct rte_security_macsec_sa {
>  struct rte_security_macsec_sc {
>  	/** Direction of SC */
>  	enum rte_security_macsec_direction dir;
> +	/** Packet number threshold */
> +	uint64_t pn_threshold;
>  	union {
>  		struct {
>  			/** SAs for each association number */
> @@ -407,8 +409,10 @@ struct rte_security_macsec_sc {
>  			uint8_t sa_in_use[RTE_SECURITY_MACSEC_NUM_AN];
>  			/** Channel is active */
>  			uint8_t active : 1;
> +			/** Extended packet number is enabled for SAs */
> +			uint8_t is_xpn : 1;
>  			/** Reserved bitfields for future */
> -			uint8_t reserved : 7;
> +			uint8

Is this an ABI change? If so needs to wait for 23.11 release

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] eventdev: fix alignment padding
  2023-05-17 13:35  3%         ` Morten Brørup
@ 2023-05-23 15:15  3%           ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-23 15:15 UTC (permalink / raw)
  To: Morten Brørup; +Cc: Mattias Rönnblom, Sivaprasad Tummala, jerinj, dev

On Wed, May 17, 2023 at 7:05 PM Morten Brørup <mb@smartsharesystems.com> wrote:
>
> > From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> > Sent: Wednesday, 17 May 2023 15.20
> >
> > On Tue, Apr 18, 2023 at 8:46 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> > >
> > > On 2023-04-18 16:07, Morten Brørup wrote:
> > > >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> > > >> Sent: Tuesday, 18 April 2023 14.31
> > > >>
> > > >> On 2023-04-18 12:45, Sivaprasad Tummala wrote:
> > > >>> fixed the padding required to align to cacheline size.
> > > >>>
> > > >>
> > > >> What's the point in having this structure cache-line aligned? False
> > > >> sharing is a non-issue, since this is more or less a read only struct.
> > > >>
> > > >> This is not so much a comment on your patch, but the __rte_cache_aligned
> > > >> attribute.
> > > >
> > > > When the structure is cache aligned, an individual entry in the array does
> > not unnecessarily cross a cache line border. With 16 pointers and aligned, it
> > uses exactly two cache lines. If unaligned, it may span three cache lines.
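
A minimal compile-time check of the sizing argument quoted above (a
sketch, assuming a 64-bit build with 64-byte cache lines; not part of
the patch):

#include <assert.h>
#include <rte_config.h>
#include <rte_eventdev.h>

/* With reserved[5] the fast-path ops table holds 16 pointer-sized slots:
 * 16 * 8 B = 128 B, i.e. exactly two 64-byte cache lines. */
static_assert(sizeof(struct rte_event_fp_ops) == 2 * RTE_CACHE_LINE_SIZE,
	      "rte_event_fp_ops expected to occupy two cache lines");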
> > > >
> > > An *element* in the reserved uint64_t array won't span across two cache
> > > lines, regardless if __rte_cache_aligned is specified or not. You would
> > > need a packed struct for that to occur, plus the reserved array field
> > > being preceded by some appropriately-sized fields.
> > >
> > > The only effect __rte_cache_aligned has on this particular struct is
> > > that if you instantiate the struct on the stack, or as a static
> > > variable, it will be cache-line aligned. That effect you can get by
> > > specifying the attribute when you define the variable, and you will save
> > > some space (by having smaller elements). In this case it doesn't matter
> > > if the array is compact or not, since an application is likely to only
> > > use one of the members in the array.
> > >
> > > It also doesn't matter of the struct is two or three cache lines, as
> > > long as only the first two are used.
> >
> >
> > Discussions stalled at this point.
>
> Not stalled at this point. You seem to have missed my follow-up email clarifying why cache aligning is relevant:
> http://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35D87897@smartserver.smartshare.dk/
>
> But the patch still breaks the ABI, and thus should be postponed to 23.11.

Yes.

>
> >
> > Hi Shiva,
> >
> > Marking this patch as rejected. If you think the other way, Please
> > change patchwork status and let's discuss more here.
>
> I am not taking any action regarding the status of this patch. I will leave that decision to Jerin and Shiva.

It is good to merge.

Shiva,

Please send the ABI change notice for this for 23.11 NOW.
Once it is acked and merged, I will merge the patch for the 23.11 release.

I am marking the patch as DEFERRED in patchwork; in the next release
window it will come back as NEW in patchwork.

>
> >
> >
> >
> > >
> > > >>
> > > >>> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
> > > >>> Cc: mattias.ronnblom@ericsson.com
> > > >>>
> > > >>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > >>> ---
> > > >>>    lib/eventdev/rte_eventdev_core.h | 2 +-
> > > >>>    1 file changed, 1 insertion(+), 1 deletion(-)
> > > >>>
> > > >>> diff --git a/lib/eventdev/rte_eventdev_core.h
> > > >> b/lib/eventdev/rte_eventdev_core.h
> > > >>> index c328bdbc82..c27a52ccc0 100644
> > > >>> --- a/lib/eventdev/rte_eventdev_core.h
> > > >>> +++ b/lib/eventdev/rte_eventdev_core.h
> > > >>> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
> > > >>>     /**< PMD Tx adapter enqueue same destination function. */
> > > >>>     event_crypto_adapter_enqueue_t ca_enqueue;
> > > >>>     /**< PMD Crypto adapter enqueue function. */
> > > >>> -   uintptr_t reserved[6];
> > > >>> +   uintptr_t reserved[5];
> > > >>>    } __rte_cache_aligned;
> > > >>>
> > > >>>    extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> > > >
> > >

^ permalink raw reply	[relevance 3%]

* [PATCH v4 5/5] ethdev: add MPLS header modification support
    2023-05-23 12:48  3%       ` [PATCH v4 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
@ 2023-05-23 12:48  2%       ` Michael Baum
    2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 12:48 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add support for MPLS modify header using "RTE_FLOW_FIELD_MPLS" id.

Since an MPLS header might appear more than once in inner/outer/tunnel,
a new field was added to "rte_flow_action_modify_data" structure in
addition to "level" field.
The "tag_index" field is the index of the header inside encapsulation
level. It is used for modify multiple MPLS headers in same encapsulation
level.

This addition enables modifying multiple VLAN headers too, so the
description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated.

Since the "tag_index" field is added, the "RTE_FLOW_FIELD_TAG" type
moves to use it for tag array instead of using "level" field.
Using "level" is still supported for backwards compatibility when
"tag_index" field is zero.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            | 24 +++++++++++-
 doc/guides/prog_guide/rte_flow.rst     | 18 ++++++---
 doc/guides/rel_notes/release_23_07.rst |  8 +++-
 drivers/net/mlx5/mlx5_flow.c           | 34 +++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h           | 23 ++++++++++++
 drivers/net/mlx5/mlx5_flow_dv.c        | 29 +++++++--------
 drivers/net/mlx5/mlx5_flow_hw.c        | 21 ++++++++---
 lib/ethdev/rte_flow.h                  | 51 ++++++++++++++++++--------
 8 files changed, 162 insertions(+), 46 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8c1dea53c0..a51e37276b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,6 +636,7 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TAG_INDEX,
 	ACTION_MODIFY_FIELD_DST_TYPE_ID,
 	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -643,6 +644,7 @@ enum index {
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
 	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
 	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = {
 	"ipv6_proto",
 	"flex_item",
 	"hash_result",
-	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls",
 	NULL
 };
 
@@ -2301,6 +2303,7 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TAG_INDEX,
 	ACTION_MODIFY_FIELD_DST_TYPE_ID,
 	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -2310,6 +2313,7 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
 	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
 	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -6398,6 +6402,15 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TAG_INDEX] = {
+		.name = "dst_tag_index",
+		.help = "destination field tag array",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.tag_index)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
 		.name = "dst_type_id",
 		.help = "destination field type ID",
@@ -6451,6 +6464,15 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TAG_INDEX] = {
+		.name = "stc_tag_index",
+		.help = "source field tag array",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.tag_index)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
 		.name = "src_type_id",
 		.help = "source field type ID",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec812de335..e4328e7ed6 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2925,8 +2925,7 @@ See ``enum rte_flow_field_id`` for the list of supported fields.
 
 ``width`` defines a number of bits to use from ``src`` field.
 
-``level`` is used to access any packet field on any encapsulation level
-as well as any tag element in the tag array:
+``level`` is used to access any packet field on any encapsulation level:
 
 - ``0`` means the default behaviour. Depending on the packet type,
   it can mean outermost, innermost or anything in between.
@@ -2934,8 +2933,15 @@ as well as any tag element in the tag array:
 - ``2`` and subsequent values requests access to the specified packet
   encapsulation level, from outermost to innermost (lower to higher values).
 
-For the tag array (in case of multiple tags are supported and present)
-``level`` translates directly into the array index.
+``tag_index`` is the index of the header inside the encapsulation level.
+It is used to modify a ``VLAN``, ``MPLS`` or ``TAG`` header when multiple
+headers of that type may be present in the same encapsulation level.
+
+.. note::
+
+   For the ``RTE_FLOW_FIELD_TAG`` type, the tag array index used to be provided
+   in the ``level`` field and this is still supported for backwards compatibility.
+   When ``tag_index`` is zero, the tag array index is taken from the ``level`` field.
 
 ``type`` is used to specify (along with ``class_id``) the Geneve option which
 is being modified.
@@ -3011,7 +3017,9 @@ and provide immediate value 0xXXXX85XX.
    +=================+==========================================================+
    | ``field``       | ID: packet field, mark, meta, tag, immediate, pointer    |
    +-----------------+----------------------------------------------------------+
-   | ``level``       | encapsulation level of a packet field or tag array index |
+   | ``level``       | encapsulation level of a packet field                    |
+   +-----------------+----------------------------------------------------------+
+   | ``tag_index``   | tag index inside encapsulation level                     |
    +-----------------+----------------------------------------------------------+
    | ``type``        | geneve option type                                       |
    +-----------------+----------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index ce1755096f..fd3e35eea3 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,8 +84,12 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* The ``level`` field in experimental structure
-  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+* ethdev: in experimental structure ``struct rte_flow_action_modify_data``:
+
+  * ``level`` field was reduced to 8 bits.
+
+  * ``tag_index`` field replaced ``level`` field in representing tag array for
+    ``RTE_FLOW_FIELD_TAG`` type.
 
 
 ABI Changes
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 19f7f92717..867b7b8ea2 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2318,6 +2318,40 @@ mlx5_validate_action_ct(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Validate the level value for modify field action.
+ *
+ * @param[in] data
+ *   Pointer to the rte_flow_action_modify_data structure either src or dst.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+flow_validate_modify_field_level(const struct rte_flow_action_modify_data *data,
+				 struct rte_flow_error *error)
+{
+	if (data->level == 0)
+		return 0;
+	if (data->field != RTE_FLOW_FIELD_TAG)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "inner header fields modification is not supported");
+	if (data->tag_index != 0)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "tag array can be provided using 'level' or 'tag_index' fields, not both");
+	/*
+	 * The tag array for RTE_FLOW_FIELD_TAG type is provided using
+	 * 'tag_index' field. In old API, it was provided using 'level' field
+	 * and it is still supported for backwards compatibility.
+	 */
+	DRV_LOG(WARNING, "tag array provided in 'level' field instead of 'tag_index' field.");
+	return 0;
+}
+
 /**
  * Validate ICMP6 item.
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1d116ea0f6..cba04b4f45 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1045,6 +1045,26 @@ flow_items_to_tunnel(const struct rte_flow_item items[])
 	return items[0].spec;
 }
 
+/**
+ * Gets the tag array index given for RTE_FLOW_FIELD_TAG type.
+ *
+ * In the old API the value was provided in the "level" field, but in the
+ * new API it is provided in the "tag_index" field. Since the encapsulation
+ * level is not relevant for metadata, the tag array index can still be
+ * provided in "level" for backwards compatibility.
+ *
+ * @param[in] data
+ *   Pointer to tag modify data structure.
+ *
+ * @return
+ *   Tag array index.
+ */
+static inline uint8_t
+flow_tag_index_get(const struct rte_flow_action_modify_data *data)
+{
+	return data->tag_index ? data->tag_index : data->level;
+}
+
 /**
  * Fetch 1, 2, 3 or 4 byte field from the byte array
  * and return as unsigned integer in host-endian format.
@@ -2276,6 +2296,9 @@ int mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
 int mlx5_flow_validate_action_default_miss(uint64_t action_flags,
 				const struct rte_flow_attr *attr,
 				struct rte_flow_error *error);
+int flow_validate_modify_field_level
+			(const struct rte_flow_action_modify_data *data,
+			 struct rte_flow_error *error);
 int mlx5_flow_item_acceptable(const struct rte_flow_item *item,
 			      const uint8_t *mask,
 			      const uint8_t *nic_mask,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f136f43b0a..729962a488 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1896,16 +1896,17 @@ mlx5_flow_field_id_to_modify_info
 	case RTE_FLOW_FIELD_TAG:
 		{
 			MLX5_ASSERT(data->offset + width <= 32);
+			uint8_t tag_index = flow_tag_index_get(data);
 			int reg;
 
-			off_be = (data->level == MLX5_LINEAR_HASH_TAG_INDEX) ?
+			off_be = (tag_index == MLX5_LINEAR_HASH_TAG_INDEX) ?
 				 16 - (data->offset + width) + 16 : data->offset;
 			if (priv->sh->config.dv_flow_en == 2)
 				reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG,
-							 data->level);
+							 tag_index);
 			else
 				reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG,
-							   data->level, error);
+							   tag_index, error);
 			if (reg < 0)
 				return;
 			MLX5_ASSERT(reg != REG_NON);
@@ -1985,7 +1986,7 @@ mlx5_flow_field_id_to_modify_info
 		{
 			uint32_t meta_mask = priv->sh->dv_meta_mask;
 			uint32_t meta_count = __builtin_popcount(meta_mask);
-			uint32_t reg = data->level;
+			uint8_t reg = flow_tag_index_get(data);
 
 			RTE_SET_USED(meta_count);
 			MLX5_ASSERT(data->offset + width <= meta_count);
@@ -5250,6 +5251,14 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
 	uint32_t dst_width, src_width;
 
 	ret = flow_dv_validate_action_modify_hdr(action_flags, action, error);
+	if (ret)
+		return ret;
+	ret = flow_validate_modify_field_level(&action_modify_field->dst,
+					       error);
+	if (ret)
+		return ret;
+	ret = flow_validate_modify_field_level(&action_modify_field->src,
+					       error);
 	if (ret)
 		return ret;
 	if (action_modify_field->src.field == RTE_FLOW_FIELD_FLEX_ITEM ||
@@ -5279,12 +5288,6 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
 			return rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"destination offset is too big");
-		if (action_modify_field->dst.level &&
-		    action_modify_field->dst.field != RTE_FLOW_FIELD_TAG)
-			return rte_flow_error_set(error, ENOTSUP,
-					RTE_FLOW_ERROR_TYPE_ACTION, action,
-					"inner header fields modification"
-					" is not supported");
 	}
 	if (action_modify_field->src.field != RTE_FLOW_FIELD_VALUE &&
 	    action_modify_field->src.field != RTE_FLOW_FIELD_POINTER) {
@@ -5298,12 +5301,6 @@ flow_dv_validate_action_modify_field(struct rte_eth_dev *dev,
 			return rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ACTION, action,
 					"source offset is too big");
-		if (action_modify_field->src.level &&
-		    action_modify_field->src.field != RTE_FLOW_FIELD_TAG)
-			return rte_flow_error_set(error, ENOTSUP,
-					RTE_FLOW_ERROR_TYPE_ACTION, action,
-					"inner header fields modification"
-					" is not supported");
 	}
 	if ((action_modify_field->dst.field ==
 	     action_modify_field->src.field) &&
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1b68a19900..e55e3d6c1a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1022,9 +1022,11 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 		    conf->dst.field == RTE_FLOW_FIELD_TAG ||
 		    conf->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
 		    conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+			uint8_t tag_index = flow_tag_index_get(&conf->dst);
+
 			value = *(const unaligned_uint32_t *)item.spec;
 			if (conf->dst.field == RTE_FLOW_FIELD_TAG &&
-			    conf->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+			    tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
 				value = rte_cpu_to_be_32(value << 16);
 			else
 				value = rte_cpu_to_be_32(value);
@@ -2055,9 +2057,11 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 	    mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
 	    mhdr_action->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
 	    mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
+		uint8_t tag_index = flow_tag_index_get(&mhdr_action->dst);
+
 		value_p = (unaligned_uint32_t *)values;
 		if (mhdr_action->dst.field == RTE_FLOW_FIELD_TAG &&
-		    mhdr_action->dst.level == MLX5_LINEAR_HASH_TAG_INDEX)
+		    tag_index == MLX5_LINEAR_HASH_TAG_INDEX)
 			*value_p = rte_cpu_to_be_32(*value_p << 16);
 		else
 			*value_p = rte_cpu_to_be_32(*value_p);
@@ -3546,11 +3550,16 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 				     const struct rte_flow_action *mask,
 				     struct rte_flow_error *error)
 {
-	const struct rte_flow_action_modify_field *action_conf =
-		action->conf;
-	const struct rte_flow_action_modify_field *mask_conf =
-		mask->conf;
+	const struct rte_flow_action_modify_field *action_conf = action->conf;
+	const struct rte_flow_action_modify_field *mask_conf = mask->conf;
+	int ret;
 
+	ret = flow_validate_modify_field_level(&action_conf->dst, error);
+	if (ret)
+		return ret;
+	ret = flow_validate_modify_field_level(&action_conf->src, error);
+	if (ret)
+		return ret;
 	if (action_conf->operation != mask_conf->operation)
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f30d4b033f..1df4b49219 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3740,8 +3740,8 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_START = 0,	/**< Start of a packet. */
 	RTE_FLOW_FIELD_MAC_DST,		/**< Destination MAC Address. */
 	RTE_FLOW_FIELD_MAC_SRC,		/**< Source MAC Address. */
-	RTE_FLOW_FIELD_VLAN_TYPE,	/**< 802.1Q Tag Identifier. */
-	RTE_FLOW_FIELD_VLAN_ID,		/**< 802.1Q VLAN Identifier. */
+	RTE_FLOW_FIELD_VLAN_TYPE,	/**< VLAN Tag Identifier. */
+	RTE_FLOW_FIELD_VLAN_ID,		/**< VLAN Identifier. */
 	RTE_FLOW_FIELD_MAC_TYPE,	/**< EtherType. */
 	RTE_FLOW_FIELD_IPV4_DSCP,	/**< IPv4 DSCP. */
 	RTE_FLOW_FIELD_IPV4_TTL,	/**< IPv4 Time To Live. */
@@ -3775,7 +3775,8 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
 	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
 	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
-	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA,	/**< GENEVE option data */
+	RTE_FLOW_FIELD_MPLS		/**< MPLS header. */
 };
 
 /**
@@ -3789,7 +3790,7 @@ struct rte_flow_action_modify_data {
 	RTE_STD_C11
 	union {
 		struct {
-			/** Encapsulation level or tag index or flex item handle. */
+			/** Encapsulation level and tag index or flex item handle. */
 			union {
 				struct {
 					/**
@@ -3820,20 +3821,38 @@ struct rte_flow_action_modify_data {
 					 *
 					 * Values other than @p 0 are not
 					 * necessarily supported.
+					 *
+					 * @note For the MPLS field, the
+					 * encapsulation level also includes
+					 * the tunnel, since MPLS may appear
+					 * in outer, inner or tunnel headers.
 					 */
 					uint8_t level;
-					/**
-					 * Geneve option type. relevant only
-					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
-					 * modification type.
-					 */
-					uint8_t type;
-					/**
-					 * Geneve option class. relevant only
-					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
-					 * modification type.
-					 */
-					rte_be16_t class_id;
+					union {
+						/**
+						 * Index of the header inside
+						 * the encapsulation level.
+						 * Used for VLAN, MPLS or TAG
+						 * types.
+						 */
+						uint8_t tag_index;
+						/**
+						 * Geneve option identifier.
+						 * Relevant only for
+						 * RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+						 * modification type.
+						 */
+						struct {
+							/**
+							 * Geneve option type.
+							 */
+							uint8_t type;
+							/**
+							 * Geneve option class.
+							 */
+							rte_be16_t class_id;
+						};
+					};
 				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
-- 
2.25.1


^ permalink raw reply	[relevance 2%]

* [PATCH v4 4/5] ethdev: add GENEVE TLV option modification support
  @ 2023-05-23 12:48  3%       ` Michael Baum
  2023-05-23 12:48  2%       ` [PATCH v4 5/5] ethdev: add MPLS header " Michael Baum
    2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-23 12:48 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add modify field support for GENEVE option fields:
 - "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
 - "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
 - "RTE_FLOW_FIELD_GENEVE_OPT_DATA"

Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to "rte_flow_action_modify_data" structure to
help specify which option to modify.

To make room for those 2 new fields, the "level" field was reduced to
"uint8_t", which is more than enough for the encapsulation level.
This patch also changes all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids the compilation warnings caused by this API change.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            | 48 +++++++++++++++++++++++++-
 doc/guides/prog_guide/rte_flow.rst     | 23 ++++++++++++
 doc/guides/rel_notes/release_23_07.rst |  3 ++
 drivers/net/mlx5/mlx5_flow_hw.c        | 22 ++++++------
 lib/ethdev/rte_flow.h                  | 48 +++++++++++++++++++++++++-
 5 files changed, 131 insertions(+), 13 deletions(-)
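
A minimal sketch of how the new "type"/"class_id" selectors could be used from an application once this patch is applied. It is not part of the patch; the option class, option type and replacement value below are illustrative assumptions.

/*
 * Minimal sketch, not part of the patch: rewriting the "type" byte of a
 * GENEVE TLV option that is selected by its class and type.  The class/type
 * values and the new type value are assumptions for the example.
 */
#include <rte_flow.h>
#include <rte_byteorder.h>

static const struct rte_flow_action_modify_field geneve_opt_type_set = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_TYPE,
		.class_id = RTE_BE16(0x0102),	/* option class to look up */
		.type = 0x42,			/* option type to look up */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { 0x43 },		/* new option type value */
	},
	.width = 8,				/* the type field is one byte */
};

static const struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, .conf = &geneve_opt_type_set },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};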

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
 	"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
 	"ipv6_proto",
 	"flex_item",
-	"hash_result", NULL
+	"hash_result",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	NULL
 };
 
 static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+		.name = "dst_type_id",
+		.help = "destination field type ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+		.name = "dst_class",
+		.help = "destination field class ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     dst.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_OFFSET] = {
 		.name = "dst_offset",
 		.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+		.name = "src_type_id",
+		.help = "source field type ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+		.name = "src_class",
+		.help = "source field class ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     src.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
 		.name = "src_offset",
 		.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
 For the tag array (in case of multiple tags are supported and present)
 ``level`` translates directly into the array index.
 
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
 ``flex_handle`` is used to specify the flex item pointer which is being
 modified. ``flex_handle`` and ``level`` are mutually exclusive.
 
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
 specify destination width as 8, destination offset as 16, and provide immediate
 value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
 
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW in a
+single action, aligned to 32 bits. For example, to modify 16 bits starting
+from offset 24, 2 different actions should be prepared. The first one includes
+``offset=24`` and ``width=8``, and the second one includes ``offset=32`` and
+``width=8``.
+The application should provide the data in the immediate value memory only for
+the single DW, even though the offset is relative to the start of the first DW.
+For example, to replace the third byte of the second DW in the Geneve option
+data with value 0x85, the application should specify destination width as 8,
+destination offset as 48, and provide the immediate value 0xXXXX85XX.
+
 .. _table_rte_flow_action_modify_field:
 
 .. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
    +-----------------+----------------------------------------------------------+
    | ``level``       | encapsulation level of a packet field or tag array index |
    +-----------------+----------------------------------------------------------+
+   | ``type``        | geneve option type                                       |
+   +-----------------+----------------------------------------------------------+
+   | ``class_id``    | geneve option class ID                                   |
+   +-----------------+----------------------------------------------------------+
    | ``flex_handle`` | flex item handle of a packet field                       |
    +-----------------+----------------------------------------------------------+
    | ``offset``      | number of bits to skip at the beginning                  |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* The ``level`` field in experimental structure
+  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"immediate value, pointer and hash result cannot be used as destination");
-	if (mask_conf->dst.level != UINT32_MAX)
+	if (mask_conf->dst.level != UINT8_MAX)
 		return rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ACTION, action,
 			"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 				"destination field mask and template are not equal");
 	if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
 	    action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
-		if (mask_conf->src.level != UINT32_MAX)
+		if (mask_conf->src.level != UINT8_MAX)
 			return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = RTE_FLOW_FIELD_VLAN_ID,
-			.level = 0xffffffff, .offset = 0xffffffff,
+			.level = 0xff, .offset = 0xffffffff,
 		},
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_IPV6_PROTO,	/**< IPv6 next header. */
 	RTE_FLOW_FIELD_FLEX_ITEM,	/**< Flex item. */
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
+	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
+	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
 };
 
 /**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
 		struct {
 			/** Encapsulation level or tag index or flex item handle. */
 			union {
-				uint32_t level;
+				struct {
+					/**
+					 * Packet encapsulation level containing
+					 * the field to modify.
+					 *
+					 * - @p 0 requests the default behavior.
+					 *   Depending on the packet type, it
+					 *   can mean outermost, innermost or
+					 *   anything in between.
+					 *
+					 *   It basically stands for the
+					 *   innermost encapsulation level
+					 *   modification can be performed on
+					 *   according to PMD and device
+					 *   capabilities.
+					 *
+					 * - @p 1 requests modification to be
+					 *   performed on the outermost packet
+					 *   encapsulation level.
+					 *
+					 * - @p 2 and subsequent values request
+					 *   modification to be performed on
+					 *   the specified inner packet
+					 *   encapsulation level, from
+					 *   outermost to innermost (lower to
+					 *   higher values).
+					 *
+					 * Values other than @p 0 are not
+					 * necessarily supported.
+					 */
+					uint8_t level;
+					/**
+					 * Geneve option type. Relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					uint8_t type;
+					/**
+					 * Geneve option class. Relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					rte_be16_t class_id;
+				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
 			/** Number of bits to skip from a field. */
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH V5 0/5] app/testpmd: support multiple process attach and detach port
    2023-05-16 11:27  0%   ` lihuisong (C)
@ 2023-05-23  0:46  0%   ` fengchengwen
  1 sibling, 0 replies; 200+ results
From: fengchengwen @ 2023-05-23  0:46 UTC (permalink / raw)
  To: Huisong Li, dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, liudongdong3, huangdaode

with 2/5 fixed,
Series-acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 2023/1/31 11:33, Huisong Li wrote:
> This patchset fix some bugs and support attaching and detaching port
> in primary and secondary.
> 
> ---
>  -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break.
>  -v4: fix a misspelling. 
>  -v3:
>    #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
>       for other bus type.
>    #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
>       the probelm in patch 2/5. 
>  -v2: resend due to CI unexplained failure.
> 
> Huisong Li (5):
>   drivers/bus: restore driver assignment at front of probing
>   ethdev: fix skip valid port in probing callback
>   app/testpmd: check the validity of the port
>   app/testpmd: add attach and detach port for multiple process
>   app/testpmd: stop forwarding in new or destroy event
> 
>  app/test-pmd/testpmd.c                   | 47 +++++++++++++++---------
>  app/test-pmd/testpmd.h                   |  1 -
>  drivers/bus/auxiliary/auxiliary_common.c |  9 ++++-
>  drivers/bus/dpaa/dpaa_bus.c              |  9 ++++-
>  drivers/bus/fslmc/fslmc_bus.c            |  8 +++-
>  drivers/bus/ifpga/ifpga_bus.c            | 12 ++++--
>  drivers/bus/pci/pci_common.c             |  9 ++++-
>  drivers/bus/vdev/vdev.c                  | 10 ++++-
>  drivers/bus/vmbus/vmbus_common.c         |  9 ++++-
>  drivers/net/bnxt/bnxt_ethdev.c           |  3 +-
>  drivers/net/bonding/bonding_testpmd.c    |  1 -
>  drivers/net/mlx5/mlx5.c                  |  2 +-
>  lib/ethdev/ethdev_driver.c               | 13 +++++--
>  lib/ethdev/ethdev_driver.h               | 12 ++++++
>  lib/ethdev/ethdev_pci.h                  |  2 +-
>  lib/ethdev/rte_class_eth.c               |  2 +-
>  lib/ethdev/rte_ethdev.c                  |  4 +-
>  lib/ethdev/rte_ethdev.h                  |  4 +-
>  lib/ethdev/version.map                   |  1 +
>  19 files changed, 114 insertions(+), 44 deletions(-)
> 

^ permalink raw reply	[relevance 0%]

* [PATCH v3 5/5] ethdev: add MPLS header modification support
    2023-05-22 19:28  3%     ` [PATCH v3 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
@ 2023-05-22 19:28  3%     ` Michael Baum
    2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-22 19:28 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add support for MPLS modify header using "RTE_FLOW_FIELD_MPLS" id.

Since the MPLS header might appear more than one time in inner/outer/tunnel,
a new field was added to the "rte_flow_action_modify_data" structure in
addition to the "level" field.
The "tag_index" field is the index of the header inside the encapsulation
level. It is used to modify multiple MPLS headers in the same encapsulation
level.

This addition enables modifying multiple VLAN headers too, so the
description of "RTE_FLOW_FIELD_VLAN_XXXX" was updated.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            | 24 +++++++++++-
 doc/guides/prog_guide/rte_flow.rst     | 18 ++++++---
 doc/guides/rel_notes/release_23_07.rst |  8 +++-
 lib/ethdev/rte_flow.h                  | 51 ++++++++++++++++++--------
 4 files changed, 77 insertions(+), 24 deletions(-)
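
A minimal sketch contrasting the old and new ways of addressing the tag array for "RTE_FLOW_FIELD_TAG" once this patch is applied. It is not part of the patch; the register indexes are assumptions for illustration, and only one of the two index fields should be set per action, as the validation in this series requires.

/*
 * Minimal sketch, not part of the patch: copying 32 bits between two TAG
 * array elements, written once the old way ("level" carries the array
 * index) and once the new way ("tag_index" carries it).
 */
#include <rte_flow.h>

/* Old form, kept for backwards compatibility. */
static const struct rte_flow_action_modify_field tag_copy_old = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = { .field = RTE_FLOW_FIELD_TAG, .level = 3 },
	.src = { .field = RTE_FLOW_FIELD_TAG, .level = 1 },
	.width = 32,
};

/* New form introduced by this series. */
static const struct rte_flow_action_modify_field tag_copy_new = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = { .field = RTE_FLOW_FIELD_TAG, .tag_index = 3 },
	.src = { .field = RTE_FLOW_FIELD_TAG, .tag_index = 1 },
	.width = 32,
};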

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8c1dea53c0..a51e37276b 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,6 +636,7 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TAG_INDEX,
 	ACTION_MODIFY_FIELD_DST_TYPE_ID,
 	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -643,6 +644,7 @@ enum index {
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
 	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
 	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -859,7 +861,7 @@ static const char *const modify_field_ids[] = {
 	"ipv6_proto",
 	"flex_item",
 	"hash_result",
-	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data", "mpls",
 	NULL
 };
 
@@ -2301,6 +2303,7 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TAG_INDEX,
 	ACTION_MODIFY_FIELD_DST_TYPE_ID,
 	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
@@ -2310,6 +2313,7 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TAG_INDEX,
 	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
 	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
@@ -6398,6 +6402,15 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TAG_INDEX] = {
+		.name = "dst_tag_index",
+		.help = "destination field tag array",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.tag_index)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
 		.name = "dst_type_id",
 		.help = "destination field type ID",
@@ -6451,6 +6464,15 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TAG_INDEX] = {
+		.name = "stc_tag_index",
+		.help = "source field tag array",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.tag_index)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
 		.name = "src_type_id",
 		.help = "source field type ID",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index ec812de335..e4328e7ed6 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2925,8 +2925,7 @@ See ``enum rte_flow_field_id`` for the list of supported fields.
 
 ``width`` defines a number of bits to use from ``src`` field.
 
-``level`` is used to access any packet field on any encapsulation level
-as well as any tag element in the tag array:
+``level`` is used to access any packet field on any encapsulation level:
 
 - ``0`` means the default behaviour. Depending on the packet type,
   it can mean outermost, innermost or anything in between.
@@ -2934,8 +2933,15 @@ as well as any tag element in the tag array:
 - ``2`` and subsequent values requests access to the specified packet
   encapsulation level, from outermost to innermost (lower to higher values).
 
-For the tag array (in case of multiple tags are supported and present)
-``level`` translates directly into the array index.
+``tag_index`` is the index of the header inside the encapsulation level.
+It is used to modify a ``VLAN``, ``MPLS`` or ``TAG`` header when multiple
+headers of that type may be present in the same encapsulation level.
+
+.. note::
+
+   For the ``RTE_FLOW_FIELD_TAG`` type, the tag array index used to be provided
+   in the ``level`` field and this is still supported for backwards compatibility.
+   When ``tag_index`` is zero, the tag array index is taken from the ``level`` field.
 
 ``type`` is used to specify (along with ``class_id``) the Geneve option which
 is being modified.
@@ -3011,7 +3017,9 @@ and provide immediate value 0xXXXX85XX.
    +=================+==========================================================+
    | ``field``       | ID: packet field, mark, meta, tag, immediate, pointer    |
    +-----------------+----------------------------------------------------------+
-   | ``level``       | encapsulation level of a packet field or tag array index |
+   | ``level``       | encapsulation level of a packet field                    |
+   +-----------------+----------------------------------------------------------+
+   | ``tag_index``   | tag index inside encapsulation level                     |
    +-----------------+----------------------------------------------------------+
    | ``type``        | geneve option type                                       |
    +-----------------+----------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index ce1755096f..fd3e35eea3 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,8 +84,12 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* The ``level`` field in experimental structure
-  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+* ethdev: in experimental structure ``struct rte_flow_action_modify_data``:
+
+  * ``level`` field was reduced to 8 bits.
+
+  * ``tag_index`` field replaced ``level`` field in representing tag array for
+    ``RTE_FLOW_FIELD_TAG`` type.
 
 
 ABI Changes
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f30d4b033f..1df4b49219 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3740,8 +3740,8 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_START = 0,	/**< Start of a packet. */
 	RTE_FLOW_FIELD_MAC_DST,		/**< Destination MAC Address. */
 	RTE_FLOW_FIELD_MAC_SRC,		/**< Source MAC Address. */
-	RTE_FLOW_FIELD_VLAN_TYPE,	/**< 802.1Q Tag Identifier. */
-	RTE_FLOW_FIELD_VLAN_ID,		/**< 802.1Q VLAN Identifier. */
+	RTE_FLOW_FIELD_VLAN_TYPE,	/**< VLAN Tag Identifier. */
+	RTE_FLOW_FIELD_VLAN_ID,		/**< VLAN Identifier. */
 	RTE_FLOW_FIELD_MAC_TYPE,	/**< EtherType. */
 	RTE_FLOW_FIELD_IPV4_DSCP,	/**< IPv4 DSCP. */
 	RTE_FLOW_FIELD_IPV4_TTL,	/**< IPv4 Time To Live. */
@@ -3775,7 +3775,8 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
 	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
 	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
-	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA,	/**< GENEVE option data */
+	RTE_FLOW_FIELD_MPLS		/**< MPLS header. */
 };
 
 /**
@@ -3789,7 +3790,7 @@ struct rte_flow_action_modify_data {
 	RTE_STD_C11
 	union {
 		struct {
-			/** Encapsulation level or tag index or flex item handle. */
+			/** Encapsulation level and tag index or flex item handle. */
 			union {
 				struct {
 					/**
@@ -3820,20 +3821,38 @@ struct rte_flow_action_modify_data {
 					 *
 					 * Values other than @p 0 are not
 					 * necessarily supported.
+					 *
+					 * @note For the MPLS field, the
+					 * encapsulation level also includes
+					 * the tunnel, since MPLS may appear
+					 * in outer, inner or tunnel headers.
 					 */
 					uint8_t level;
-					/**
-					 * Geneve option type. relevant only
-					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
-					 * modification type.
-					 */
-					uint8_t type;
-					/**
-					 * Geneve option class. relevant only
-					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
-					 * modification type.
-					 */
-					rte_be16_t class_id;
+					union {
+						/**
+						 * Index of the header inside
+						 * the encapsulation level.
+						 * Used for VLAN, MPLS or TAG
+						 * types.
+						 */
+						uint8_t tag_index;
+						/**
+						 * Geneve option identifier.
+						 * Relevant only for
+						 * RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+						 * modification type.
+						 */
+						struct {
+							/**
+							 * Geneve option type.
+							 */
+							uint8_t type;
+							/**
+							 * Geneve option class.
+							 */
+							rte_be16_t class_id;
+						};
+					};
 				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [PATCH v3 4/5] ethdev: add GENEVE TLV option modification support
  @ 2023-05-22 19:28  3%     ` Michael Baum
  2023-05-22 19:28  3%     ` [PATCH v3 5/5] ethdev: add MPLS header " Michael Baum
    2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2023-05-22 19:28 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add modify field support for GENEVE option fields:
 - "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
 - "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
 - "RTE_FLOW_FIELD_GENEVE_OPT_DATA"

Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to "rte_flow_action_modify_data" structure to
help specify which option to modify.

To make room for those 2 new fields, the "level" field was reduced to
"uint8_t", which is more than enough for the encapsulation level.
This patch also changes all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids the compilation warnings caused by this API change.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            | 48 +++++++++++++++++++++++++-
 doc/guides/prog_guide/rte_flow.rst     | 23 ++++++++++++
 doc/guides/rel_notes/release_23_07.rst |  3 ++
 drivers/net/mlx5/mlx5_flow_hw.c        | 22 ++++++------
 lib/ethdev/rte_flow.h                  | 48 +++++++++++++++++++++++++-
 5 files changed, 131 insertions(+), 13 deletions(-)
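
A minimal sketch of the DW-alignment rule documented in the rte_flow.rst hunk below: modifying 16 bits of GENEVE option data starting at bit offset 24 crosses a DW boundary and therefore needs two actions, one per DW. It is not part of the patch; the option class/type and data bytes are assumptions, and the immediate-byte placement simply follows the documentation example.

/*
 * Minimal sketch, not part of the patch: two actions covering the 16-bit
 * span at offsets 24..39 of the selected GENEVE option data.
 */
#include <rte_flow.h>
#include <rte_byteorder.h>

static const struct rte_flow_action_modify_field opt_data_first_dw = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_DATA,
		.class_id = RTE_BE16(0x0102),
		.type = 0x42,
		.offset = 24,			/* last byte of the first DW */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { [3] = 0xaa },	/* data for the first DW only */
	},
	.width = 8,
};

static const struct rte_flow_action_modify_field opt_data_second_dw = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_DATA,
		.class_id = RTE_BE16(0x0102),
		.type = 0x42,
		.offset = 32,			/* first byte of the second DW */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { [0] = 0xbb },	/* data for the second DW only */
	},
	.width = 8,
};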

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
 	"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
 	"ipv6_proto",
 	"flex_item",
-	"hash_result", NULL
+	"hash_result",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	NULL
 };
 
 static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+		.name = "dst_type_id",
+		.help = "destination field type ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+		.name = "dst_class",
+		.help = "destination field class ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     dst.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_OFFSET] = {
 		.name = "dst_offset",
 		.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+		.name = "src_type_id",
+		.help = "source field type ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+		.name = "src_class",
+		.help = "source field class ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     src.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
 		.name = "src_offset",
 		.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
 For the tag array (in case of multiple tags are supported and present)
 ``level`` translates directly into the array index.
 
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
 ``flex_handle`` is used to specify the flex item pointer which is being
 modified. ``flex_handle`` and ``level`` are mutually exclusive.
 
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
 specify destination width as 8, destination offset as 16, and provide immediate
 value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
 
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW in a
+single action, aligned to 32 bits. For example, to modify 16 bits starting
+from offset 24, 2 different actions should be prepared. The first one includes
+``offset=24`` and ``width=8``, and the second one includes ``offset=32`` and
+``width=8``.
+The application should provide the data in the immediate value memory only for
+the single DW, even though the offset is relative to the start of the first DW.
+For example, to replace the third byte of the second DW in the Geneve option
+data with value 0x85, the application should specify destination width as 8,
+destination offset as 48, and provide the immediate value 0xXXXX85XX.
+
 .. _table_rte_flow_action_modify_field:
 
 .. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
    +-----------------+----------------------------------------------------------+
    | ``level``       | encapsulation level of a packet field or tag array index |
    +-----------------+----------------------------------------------------------+
+   | ``type``        | geneve option type                                       |
+   +-----------------+----------------------------------------------------------+
+   | ``class_id``    | geneve option class ID                                   |
+   +-----------------+----------------------------------------------------------+
    | ``flex_handle`` | flex item handle of a packet field                       |
    +-----------------+----------------------------------------------------------+
    | ``offset``      | number of bits to skip at the beginning                  |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* The ``level`` field in experimental structure
+  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"immediate value, pointer and hash result cannot be used as destination");
-	if (mask_conf->dst.level != UINT32_MAX)
+	if (mask_conf->dst.level != UINT8_MAX)
 		return rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ACTION, action,
 			"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 				"destination field mask and template are not equal");
 	if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
 	    action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
-		if (mask_conf->src.level != UINT32_MAX)
+		if (mask_conf->src.level != UINT8_MAX)
 			return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = RTE_FLOW_FIELD_VLAN_ID,
-			.level = 0xffffffff, .offset = 0xffffffff,
+			.level = 0xff, .offset = 0xffffffff,
 		},
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_IPV6_PROTO,	/**< IPv6 next header. */
 	RTE_FLOW_FIELD_FLEX_ITEM,	/**< Flex item. */
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
+	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
+	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
 };
 
 /**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
 		struct {
 			/** Encapsulation level or tag index or flex item handle. */
 			union {
-				uint32_t level;
+				struct {
+					/**
+					 * Packet encapsulation level containing
+					 * the field modify to.
+					 *
+					 * - @p 0 requests the default behavior.
+					 *   Depending on the packet type, it
+					 *   can mean outermost, innermost or
+					 *   anything in between.
+					 *
+					 *   It basically stands for the
+					 *   innermost encapsulation level
+					 *   modification can be performed on
+					 *   according to PMD and device
+					 *   capabilities.
+					 *
+					 * - @p 1 requests modification to be
+					 *   performed on the outermost packet
+					 *   encapsulation level.
+					 *
+					 * - @p 2 and subsequent values request
+					 *   modification to be performed on
+					 *   the specified inner packet
+					 *   encapsulation level, from
+					 *   outermost to innermost (lower to
+					 *   higher values).
+					 *
+					 * Values other than @p 0 are not
+					 * necessarily supported.
+					 */
+					uint8_t level;
+					/**
+					 * Geneve option type. relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					uint8_t type;
+					/**
+					 * Geneve option class. relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					rte_be16_t class_id;
+				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
 			/** Number of bits to skip from a field. */
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [PATCH 1/2] net/nfp: align reading of version info with kernel driver
  @ 2023-05-22 11:40  6% ` Chaoyong He
  0 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-05-22 11:40 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He

Align the method of reading the version information with the Linux
driver. This is done to make it easier to share code between the
DPDK PMD and the kernel driver.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c |  4 ++--
 drivers/net/nfp/nfp_common.c        | 30 +++++++++++++++++++----------
 drivers/net/nfp/nfp_common.h        | 21 ++------------------
 drivers/net/nfp/nfp_ctrl.h          | 22 +++++++++++++--------
 drivers/net/nfp/nfp_ethdev.c        | 10 +++++-----
 drivers/net/nfp/nfp_ethdev_vf.c     | 10 +++++-----
 drivers/net/nfp/nfp_rxtx.c          |  6 +++---
 7 files changed, 51 insertions(+), 52 deletions(-)
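
A minimal sketch of the layout assumed by the new version union, mirroring nfp_net_cfg_read() of the version word in the patch below. It is not part of the patch; the struct is reproduced here only for illustration and the register value is made up.

/*
 * Minimal sketch, not part of the patch: how a 32-bit NFP_NET_CFG_VERSION
 * word decomposes into the nfp_net_fw_ver layout on a little-endian host.
 */
#include <stdint.h>
#include <stdio.h>

struct nfp_net_fw_ver {
	uint8_t minor;
	uint8_t major;
	uint8_t class;
	uint8_t extend;		/* BIT0 selects the NFD datapath type */
};

int main(void)
{
	union {
		uint32_t whole;
		struct nfp_net_fw_ver split;
	} version;

	/* e.g. a register reading 0x00000502: class 0, major 5, minor 2 */
	version.whole = 0x00000502;

	printf("VER: %u.%u, extend 0x%x\n",
	       version.split.major, version.split.minor, version.split.extend);
	return 0;
}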

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 72933e55d0..778ea777dd 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -650,7 +650,7 @@ nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
 	hw->rx_bar = pf_dev->hw_queues + rx_bar_off;
 
 	/* Get some of the read-only fields from the config BAR */
-	hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+	nfp_net_cfg_read_version(hw);
 	hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP);
 	hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU);
 	/* Set the current MTU to the maximum supported */
@@ -661,7 +661,7 @@ nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
 		return -ENODEV;
 
 	/* read the Rx offset configured from firmware */
-	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
+	if (hw->ver.major < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
 		hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR);
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index c9fea765a4..a9af215626 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -356,8 +356,7 @@ void
 nfp_net_log_device_information(const struct nfp_net_hw *hw)
 {
 	PMD_INIT_LOG(INFO, "VER: %u.%u, Maximum supported MTU: %d",
-			NFD_CFG_MAJOR_VERSION_of(hw->ver),
-			NFD_CFG_MINOR_VERSION_of(hw->ver), hw->max_mtu);
+			hw->ver.major, hw->ver.minor, hw->max_mtu);
 
 	PMD_INIT_LOG(INFO, "CAP: %#x, %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s", hw->cap,
 			hw->cap & NFP_NET_CFG_CTRL_PROMISC   ? "PROMISC "   : "",
@@ -1114,14 +1113,14 @@ nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
 {
 	uint16_t tx_dpp;
 
-	switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+	switch (hw->ver.extend) {
 	case NFP_NET_CFG_VERSION_DP_NFD3:
 		tx_dpp = NFD3_TX_DESC_PER_PKT;
 		break;
 	case NFP_NET_CFG_VERSION_DP_NFDK:
-		if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+		if (hw->ver.major < 5) {
 			PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
-				NFD_CFG_MAJOR_VERSION_of(hw->ver));
+					hw->ver.major);
 			return -EINVAL;
 		}
 		tx_dpp = NFDK_TX_DESC_PER_SIMPLE_PKT;
@@ -1911,11 +1910,10 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 int
 nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
 {
-	if (NFD_CFG_CLASS_VER_of(hw->ver) == NFP_NET_CFG_VERSION_DP_NFD3 &&
+	if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 &&
 			rte_mem_check_dma_mask(40) != 0) {
-		PMD_DRV_LOG(ERR,
-			"The device %s can't be used: restricted dma mask to 40 bits!",
-			name);
+		PMD_DRV_LOG(ERR, "Device %s can't be used: restricted dma mask to 40 bits!",
+				name);
 		return -ENODEV;
 	}
 
@@ -1930,7 +1928,7 @@ nfp_net_init_metadata_format(struct nfp_net_hw *hw)
 	 * single metadata if only RSS(v1) is supported by hw capability, and RSS(v2)
 	 * also indicate that we are using chained metadata.
 	 */
-	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) == 4) {
+	if (hw->ver.major == 4) {
 		hw->meta_format = NFP_NET_METAFORMAT_CHAINED;
 	} else if ((hw->cap & NFP_NET_CFG_CTRL_CHAIN_META) != 0) {
 		hw->meta_format = NFP_NET_METAFORMAT_CHAINED;
@@ -1944,3 +1942,15 @@ nfp_net_init_metadata_format(struct nfp_net_hw *hw)
 		hw->meta_format = NFP_NET_METAFORMAT_SINGLE;
 	}
 }
+
+void
+nfp_net_cfg_read_version(struct nfp_net_hw *hw)
+{
+	union {
+		uint32_t whole;
+		struct nfp_net_fw_ver split;
+	} version;
+
+	version.whole = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+	hw->ver = version.split;
+}
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 47df0510c5..424b18b0ad 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -80,24 +80,6 @@ struct nfp_net_adapter;
 #define NFP_NET_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
 #define NFP_NET_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 
-/* Version number helper defines */
-#define NFD_CFG_CLASS_VER_msk       0xff
-#define NFD_CFG_CLASS_VER_shf       24
-#define NFD_CFG_CLASS_VER(x)        (((x) & 0xff) << 24)
-#define NFD_CFG_CLASS_VER_of(x)     (((x) >> 24) & 0xff)
-#define NFD_CFG_CLASS_TYPE_msk      0xff
-#define NFD_CFG_CLASS_TYPE_shf      16
-#define NFD_CFG_CLASS_TYPE(x)       (((x) & 0xff) << 16)
-#define NFD_CFG_CLASS_TYPE_of(x)    (((x) >> 16) & 0xff)
-#define NFD_CFG_MAJOR_VERSION_msk   0xff
-#define NFD_CFG_MAJOR_VERSION_shf   8
-#define NFD_CFG_MAJOR_VERSION(x)    (((x) & 0xff) << 8)
-#define NFD_CFG_MAJOR_VERSION_of(x) (((x) >> 8) & 0xff)
-#define NFD_CFG_MINOR_VERSION_msk   0xff
-#define NFD_CFG_MINOR_VERSION_shf   0
-#define NFD_CFG_MINOR_VERSION(x)    (((x) & 0xff) << 0)
-#define NFD_CFG_MINOR_VERSION_of(x) (((x) >> 0) & 0xff)
-
 /* Number of supported physical ports */
 #define NFP_MAX_PHYPORTS	12
 
@@ -196,7 +178,7 @@ struct nfp_net_hw {
 	struct rte_eth_dev *eth_dev;
 
 	/* Info from the firmware */
-	uint32_t ver;
+	struct nfp_net_fw_ver ver;
 	uint32_t cap;
 	uint32_t max_mtu;
 	uint32_t mtu;
@@ -490,6 +472,7 @@ int nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
 		uint16_t *max_tx_desc);
 int nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name);
 void nfp_net_init_metadata_format(struct nfp_net_hw *hw);
+void nfp_net_cfg_read_version(struct nfp_net_hw *hw);
 
 #define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\
 	(&((struct nfp_net_adapter *)adapter)->hw)
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index bca31ac311..ff2245dfff 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -130,6 +130,20 @@
 
 #define NFP_NET_CFG_CTRL_CHAIN_META (NFP_NET_CFG_CTRL_RSS2 | \
 					NFP_NET_CFG_CTRL_CSUM_COMPLETE)
+
+/* Version number helper struct */
+struct nfp_net_fw_ver {
+	uint8_t minor;
+	uint8_t major;
+	uint8_t class;
+	/**
+	 * This byte can be extended for more use.
+	 * BIT0: NFD dp type, refer to NFP_NET_CFG_VERSION_DP_NFDx
+	 * BIT[7:1]: reserved
+	 */
+	uint8_t extend;
+};
+
 /*
  * Read-only words (0x0030 - 0x0050):
  * @NFP_NET_CFG_VERSION:     Firmware version number
@@ -147,14 +161,6 @@
 #define NFP_NET_CFG_VERSION             0x0030
 #define   NFP_NET_CFG_VERSION_DP_NFD3   0
 #define   NFP_NET_CFG_VERSION_DP_NFDK   1
-#define   NFP_NET_CFG_VERSION_RESERVED_MASK	(0xff << 24)
-#define   NFP_NET_CFG_VERSION_CLASS_MASK  (0xff << 16)
-#define   NFP_NET_CFG_VERSION_CLASS(x)    (((x) & 0xff) << 16)
-#define   NFP_NET_CFG_VERSION_CLASS_GENERIC	0
-#define   NFP_NET_CFG_VERSION_MAJOR_MASK  (0xff <<  8)
-#define   NFP_NET_CFG_VERSION_MAJOR(x)    (((x) & 0xff) <<  8)
-#define   NFP_NET_CFG_VERSION_MINOR_MASK  (0xff <<  0)
-#define   NFP_NET_CFG_VERSION_MINOR(x)    (((x) & 0xff) <<  0)
 #define NFP_NET_CFG_STS                 0x0034
 #define   NFP_NET_CFG_STS_LINK            (0x1 << 0) /* Link up or down */
 /* Link rate */
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 722ec17dce..0b2dd7801b 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -466,14 +466,14 @@ static const struct eth_dev_ops nfp_net_eth_dev_ops = {
 static inline int
 nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw, struct rte_eth_dev *eth_dev)
 {
-	switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+	switch (hw->ver.extend) {
 	case NFP_NET_CFG_VERSION_DP_NFD3:
 		eth_dev->tx_pkt_burst = &nfp_net_nfd3_xmit_pkts;
 		break;
 	case NFP_NET_CFG_VERSION_DP_NFDK:
-		if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+		if (hw->ver.major < 5) {
 			PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
-				NFD_CFG_MAJOR_VERSION_of(hw->ver));
+					hw->ver.major);
 			return -EINVAL;
 		}
 		eth_dev->tx_pkt_burst = &nfp_net_nfdk_xmit_pkts;
@@ -571,7 +571,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar);
 	PMD_INIT_LOG(DEBUG, "MAC stats: %p", hw->mac_stats);
 
-	hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+	nfp_net_cfg_read_version(hw);
 
 	if (nfp_net_check_dma_mask(hw, pci_dev->name) != 0)
 		return -ENODEV;
@@ -629,7 +629,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	nfp_net_init_metadata_format(hw);
 
-	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
+	if (hw->ver.major < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
 		hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index ce55e3b728..cf3548e63a 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -246,14 +246,14 @@ static const struct eth_dev_ops nfp_netvf_eth_dev_ops = {
 static inline int
 nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw, struct rte_eth_dev *eth_dev)
 {
-	switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+	switch (hw->ver.extend) {
 	case NFP_NET_CFG_VERSION_DP_NFD3:
 		eth_dev->tx_pkt_burst = &nfp_net_nfd3_xmit_pkts;
 		break;
 	case NFP_NET_CFG_VERSION_DP_NFDK:
-		if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+		if (hw->ver.major < 5) {
 			PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
-				NFD_CFG_MAJOR_VERSION_of(hw->ver));
+					hw->ver.major);
 			return -EINVAL;
 		}
 		eth_dev->tx_pkt_burst = &nfp_net_nfdk_xmit_pkts;
@@ -298,7 +298,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_LOG(DEBUG, "ctrl bar: %p", hw->ctrl_bar);
 
-	hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+	nfp_net_cfg_read_version(hw);
 
 	if (nfp_net_check_dma_mask(hw, pci_dev->name) != 0)
 		return -ENODEV;
@@ -380,7 +380,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	nfp_net_init_metadata_format(hw);
 
-	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
+	if (hw->ver.major < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
 		hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 3c78557221..478752fa14 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -764,14 +764,14 @@ nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	switch (NFD_CFG_CLASS_VER_of(hw->ver)) {
+	switch (hw->ver.extend) {
 	case NFP_NET_CFG_VERSION_DP_NFD3:
 		return nfp_net_nfd3_tx_queue_setup(dev, queue_idx,
 				nb_desc, socket_id, tx_conf);
 	case NFP_NET_CFG_VERSION_DP_NFDK:
-		if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 5) {
+		if (hw->ver.major < 5) {
 			PMD_DRV_LOG(ERR, "NFDK must use ABI 5 or newer, found: %d",
-				NFD_CFG_MAJOR_VERSION_of(hw->ver));
+					hw->ver.major);
 			return -EINVAL;
 		}
 		return nfp_net_nfdk_tx_queue_setup(dev, queue_idx,
-- 
2.39.1


^ permalink raw reply	[relevance 6%]

* [PATCH v3 01/19] mbuf: replace term sanity check
  @ 2023-05-19 17:45  2%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-19 17:45 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Olivier Matz, Steven Webster, Matt Peters,
	Andrew Rybchenko

Replace rte_mbuf_sanity_check() with rte_mbuf_verify()
to match the similar macro RTE_VERIFY() in rte_debug.h

The term sanity check is on the Tier 2 list of words
that should be replaced.
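
For illustration only (not part of the patch), a caller simply switches
to the new name; the old symbol stays as a deprecated wrapper for one
release:

	/* hypothetical usage sketch, assuming a pktmbuf pool "mp" exists */
	struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

	if (m != NULL) {
		rte_mbuf_verify(m, 1);	/* new name; panics if the mbuf is corrupted */
		/* rte_mbuf_sanity_check(m, 1); old name, now __rte_deprecated */
		rte_pktmbuf_free(m);
	}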

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 app/test/test_mbuf.c                 | 30 ++++++------
 doc/guides/prog_guide/mbuf_lib.rst   |  4 +-
 doc/guides/rel_notes/deprecation.rst |  3 ++
 drivers/net/avp/avp_ethdev.c         | 18 +++----
 drivers/net/sfc/sfc_ef100_rx.c       |  6 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c   |  4 +-
 drivers/net/sfc/sfc_ef10_rx.c        |  4 +-
 drivers/net/sfc/sfc_rx.c             |  2 +-
 examples/ipv4_multicast/main.c       |  2 +-
 lib/mbuf/rte_mbuf.c                  | 23 +++++----
 lib/mbuf/rte_mbuf.h                  | 71 +++++++++++++++-------------
 lib/mbuf/version.map                 |  1 +
 12 files changed, 91 insertions(+), 77 deletions(-)

diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 8d8d3b9386ce..c2716dc4e5fe 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -261,8 +261,8 @@ test_one_pktmbuf(struct rte_mempool *pktmbuf_pool)
 		GOTO_FAIL("Buffer should be continuous");
 	memset(hdr, 0x55, MBUF_TEST_HDR2_LEN);
 
-	rte_mbuf_sanity_check(m, 1);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 1);
+	rte_mbuf_verify(m, 0);
 	rte_pktmbuf_dump(stdout, m, 0);
 
 	/* this prepend should fail */
@@ -1161,7 +1161,7 @@ test_refcnt_mbuf(void)
 
 #ifdef RTE_EXEC_ENV_WINDOWS
 static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
 {
 	RTE_SET_USED(pktmbuf_pool);
 	return TEST_SKIPPED;
@@ -1188,7 +1188,7 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
 		/* No need to generate a coredump when panicking. */
 		rl.rlim_cur = rl.rlim_max = 0;
 		setrlimit(RLIMIT_CORE, &rl);
-		rte_mbuf_sanity_check(buf, 1); /* should panic */
+		rte_mbuf_verify(buf, 1); /* should panic */
 		exit(0);  /* return normally if it doesn't panic */
 	} else if (pid < 0) {
 		printf("Fork Failed\n");
@@ -1202,12 +1202,12 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
 }
 
 static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
 {
 	struct rte_mbuf *buf;
 	struct rte_mbuf badbuf;
 
-	printf("Checking rte_mbuf_sanity_check for failure conditions\n");
+	printf("Checking rte_mbuf_verify for failure conditions\n");
 
 	/* get a good mbuf to use to make copies */
 	buf = rte_pktmbuf_alloc(pktmbuf_pool);
@@ -1729,7 +1729,7 @@ test_mbuf_validate_tx_offload(const char *test_name,
 		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 	m->ol_flags = ol_flags;
 	m->tso_segsz = segsize;
 	ret = rte_validate_tx_offload(m);
@@ -1936,7 +1936,7 @@ test_pktmbuf_read(struct rte_mempool *pktmbuf_pool)
 		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 
 	data = rte_pktmbuf_append(m, MBUF_TEST_DATA_LEN2);
 	if (data == NULL)
@@ -1985,7 +1985,7 @@ test_pktmbuf_read_from_offset(struct rte_mempool *pktmbuf_pool)
 
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 
 	/* prepend an ethernet header */
 	hdr = (struct ether_hdr *)rte_pktmbuf_prepend(m, hdr_len);
@@ -2130,7 +2130,7 @@ create_packet(struct rte_mempool *pktmbuf_pool,
 			GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 		if (rte_pktmbuf_pkt_len(pkt_seg) != 0)
 			GOTO_FAIL("%s: Bad packet length\n", __func__);
-		rte_mbuf_sanity_check(pkt_seg, 0);
+		rte_mbuf_verify(pkt_seg, 0);
 		/* Add header only for the first segment */
 		if (test_data->flags == MBUF_HEADER && seg == 0) {
 			hdr_len = sizeof(struct rte_ether_hdr);
@@ -2342,7 +2342,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 
 	ext_buf_addr = rte_malloc("External buffer", buf_len,
 			RTE_CACHE_LINE_SIZE);
@@ -2506,8 +2506,8 @@ test_pktmbuf_ext_pinned_buffer(struct rte_mempool *std_pool)
 		GOTO_FAIL("%s: test_pktmbuf_copy(pinned) failed\n",
 			  __func__);
 
-	if (test_failing_mbuf_sanity_check(pinned_pool) < 0)
-		GOTO_FAIL("%s: test_failing_mbuf_sanity_check(pinned)"
+	if (test_failing_mbuf_verify(pinned_pool) < 0)
+		GOTO_FAIL("%s: test_failing_mbuf_verify(pinned)"
 			  " failed\n", __func__);
 
 	if (test_mbuf_linearize_check(pinned_pool) < 0)
@@ -2881,8 +2881,8 @@ test_mbuf(void)
 		goto err;
 	}
 
-	if (test_failing_mbuf_sanity_check(pktmbuf_pool) < 0) {
-		printf("test_failing_mbuf_sanity_check() failed\n");
+	if (test_failing_mbuf_verify(pktmbuf_pool) < 0) {
+		printf("test_failing_mbuf_verify() failed\n");
 		goto err;
 	}
 
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 049357c75563..0accb51a98c7 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -266,8 +266,8 @@ can be found in several of the sample applications, for example, the IPv4 Multic
 Debug
 -----
 
-In debug mode, the functions of the mbuf library perform sanity checks before any operation (such as, buffer corruption,
-bad type, and so on).
+In debug mode, the functions of the mbuf library perform consistency checks
+before any operation (such as, buffer corruption, bad type, and so on).
 
 Use Cases
 ---------
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca169668..186cc13eea60 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
   The new port library API (functions rte_swx_port_*)
   will gradually transition from experimental to stable status
   starting with DPDK 23.07 release.
+
+* mbuf: The function ``rte_mbuf_sanity_check`` will be deprecated in DPDK 23.07
+  and removed in DPDK 23.11. The new function will be ``rte_mbuf_verify``.
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index b2a08f563542..b402c7a2ad16 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1231,7 +1231,7 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
 
 #ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
 static inline void
-__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+__avp_dev_buffer_check(struct avp_dev *avp, struct rte_avp_desc *buf)
 {
 	struct rte_avp_desc *first_buf;
 	struct rte_avp_desc *pkt_buf;
@@ -1272,12 +1272,12 @@ __avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
 			  first_buf->pkt_len, pkt_len);
 }
 
-#define avp_dev_buffer_sanity_check(a, b) \
-	__avp_dev_buffer_sanity_check((a), (b))
+#define avp_dev_buffer_check(a, b) \
+	__avp_dev_buffer_check((a), (b))
 
 #else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
 
-#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+#define avp_dev_buffer_check(a, b) do {} while (0)
 
 #endif
 
@@ -1302,7 +1302,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	void *pkt_data;
 	unsigned int i;
 
-	avp_dev_buffer_sanity_check(avp, buf);
+	avp_dev_buffer_check(avp, buf);
 
 	/* setup the first source buffer */
 	pkt_buf = avp_dev_translate_buffer(avp, buf);
@@ -1370,7 +1370,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	rte_pktmbuf_pkt_len(m) = total_length;
 	m->vlan_tci = vlan_tci;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	return m;
 }
@@ -1614,7 +1614,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	char *pkt_data;
 	unsigned int i;
 
-	__rte_mbuf_sanity_check(mbuf, 1);
+	__rte_mbuf_verify(mbuf, 1);
 
 	m = mbuf;
 	src_offset = 0;
@@ -1680,7 +1680,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
 
-	avp_dev_buffer_sanity_check(avp, buffers[0]);
+	avp_dev_buffer_check(avp, buffers[0]);
 
 	return total_length;
 }
@@ -1798,7 +1798,7 @@ avp_xmit_scattered_pkts(void *tx_queue,
 
 #ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
 	for (i = 0; i < nb_pkts; i++)
-		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+		avp_dev_buffer_check(avp, tx_bufs[i]);
 #endif
 
 	/* send the packets */
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 16cd8524d32f..dcd3b3316752 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -178,7 +178,7 @@ sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
 			struct sfc_ef100_rx_sw_desc *rxd;
 			rte_iova_t dma_addr;
 
-			__rte_mbuf_raw_sanity_check(m);
+			__rte_mbuf_raw_verify(m);
 
 			dma_addr = rte_mbuf_data_iova_default(m);
 			if (rxq->flags & SFC_EF100_RXQ_NIC_DMA_MAP) {
@@ -541,7 +541,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
 		rxq->ready_pkts--;
 
 		pkt = sfc_ef100_rx_next_mbuf(rxq);
-		__rte_mbuf_raw_sanity_check(pkt);
+		__rte_mbuf_raw_verify(pkt);
 
 		RTE_BUILD_BUG_ON(sizeof(pkt->rearm_data[0]) !=
 				 sizeof(rxq->rearm_data));
@@ -565,7 +565,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
 			struct rte_mbuf *seg;
 
 			seg = sfc_ef100_rx_next_mbuf(rxq);
-			__rte_mbuf_raw_sanity_check(seg);
+			__rte_mbuf_raw_verify(seg);
 
 			seg->data_off = RTE_PKTMBUF_HEADROOM;
 
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 78bd430363b1..74647e2792b1 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -125,7 +125,7 @@ sfc_ef10_essb_next_mbuf(const struct sfc_ef10_essb_rxq *rxq,
 	struct rte_mbuf *m;
 
 	m = (struct rte_mbuf *)((uintptr_t)mbuf + rxq->buf_stride);
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_verify(m);
 	return m;
 }
 
@@ -136,7 +136,7 @@ sfc_ef10_essb_mbuf_by_index(const struct sfc_ef10_essb_rxq *rxq,
 	struct rte_mbuf *m;
 
 	m = (struct rte_mbuf *)((uintptr_t)mbuf + idx * rxq->buf_stride);
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_verify(m);
 	return m;
 }
 
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 7be224c9c412..0fdd0d84c17c 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -148,7 +148,7 @@ sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
 			struct sfc_ef10_rx_sw_desc *rxd;
 			rte_iova_t phys_addr;
 
-			__rte_mbuf_raw_sanity_check(m);
+			__rte_mbuf_raw_verify(m);
 
 			SFC_ASSERT((id & ~ptr_mask) == 0);
 			rxd = &rxq->sw_ring[id];
@@ -297,7 +297,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 		rxd = &rxq->sw_ring[pending++ & ptr_mask];
 		m = rxd->mbuf;
 
-		__rte_mbuf_raw_sanity_check(m);
+		__rte_mbuf_raw_verify(m);
 
 		m->data_off = RTE_PKTMBUF_HEADROOM;
 		rte_pktmbuf_data_len(m) = seg_len;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 5ea98187c3b4..5d5df52b269a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -120,7 +120,7 @@ sfc_efx_rx_qrefill(struct sfc_efx_rxq *rxq)
 		     ++i, id = (id + 1) & rxq->ptr_mask) {
 			m = objs[i];
 
-			__rte_mbuf_raw_sanity_check(m);
+			__rte_mbuf_raw_verify(m);
 
 			rxd = &rxq->sw_desc[id];
 			rxd->mbuf = m;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 6d0a8501eff5..f39658f4e249 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -258,7 +258,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
 	hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
 	hdr->nb_segs = pkt->nb_segs + 1;
 
-	__rte_mbuf_sanity_check(hdr, 1);
+	__rte_mbuf_verify(hdr, 1);
 	return hdr;
 }
 /* >8 End of mcast_out_kt. */
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 686e797c80c4..56fb6c846df6 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -363,9 +363,9 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
 	return mp;
 }
 
-/* do some sanity checks on a mbuf: panic if it fails */
+/* do some checks on a mbuf: panic if it fails */
 void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header)
 {
 	const char *reason;
 
@@ -373,6 +373,13 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
 		rte_panic("%s\n", reason);
 }
 
+/* For ABI compatibility, to be removed in next release */
+void
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+{
+	rte_mbuf_verify(m, is_header);
+}
+
 int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
 		   const char **reason)
 {
@@ -492,7 +499,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
 		if (unlikely(m == NULL))
 			continue;
 
-		__rte_mbuf_sanity_check(m, 1);
+		__rte_mbuf_verify(m, 1);
 
 		do {
 			m_next = m->next;
@@ -542,7 +549,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
 		return NULL;
 	}
 
-	__rte_mbuf_sanity_check(mc, 1);
+	__rte_mbuf_verify(mc, 1);
 	return mc;
 }
 
@@ -592,7 +599,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	struct rte_mbuf *mc, *m_last, **prev;
 
 	/* garbage in check */
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	/* check for request to copy at offset past end of mbuf */
 	if (unlikely(off >= m->pkt_len))
@@ -656,7 +663,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	}
 
 	/* garbage out check */
-	__rte_mbuf_sanity_check(mc, 1);
+	__rte_mbuf_verify(mc, 1);
 	return mc;
 }
 
@@ -667,7 +674,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	unsigned int len;
 	unsigned int nb_segs;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	fprintf(f, "dump mbuf at %p, iova=%#" PRIx64 ", buf_len=%u\n", m, rte_mbuf_iova_get(m),
 		m->buf_len);
@@ -685,7 +692,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	nb_segs = m->nb_segs;
 
 	while (m && nb_segs != 0) {
-		__rte_mbuf_sanity_check(m, 0);
+		__rte_mbuf_verify(m, 0);
 
 		fprintf(f, "  segment at %p, data=%p, len=%u, off=%u, refcnt=%u\n",
 			m, rte_pktmbuf_mtod(m, void *),
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459b1cc6..3bd50d7307b3 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -339,13 +339,13 @@ rte_pktmbuf_priv_flags(struct rte_mempool *mp)
 
 #ifdef RTE_LIBRTE_MBUF_DEBUG
 
-/**  check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
+/**  check mbuf type in debug mode */
+#define __rte_mbuf_verify(m, is_h) rte_mbuf_verify(m, is_h)
 
 #else /*  RTE_LIBRTE_MBUF_DEBUG */
 
-/**  check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
+/**  ignore mbuf checks if not in debug mode */
+#define __rte_mbuf_verify(m, is_h) do { } while (0)
 
 #endif /*  RTE_LIBRTE_MBUF_DEBUG */
 
@@ -514,10 +514,9 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
 
 
 /**
- * Sanity checks on an mbuf.
+ * Check that the mbuf is valid and panic if corrupted.
  *
- * Check the consistency of the given mbuf. The function will cause a
- * panic if corruption is detected.
+ * Acts as an assertion that the mbuf is consistent. If not, it calls rte_panic().
  *
  * @param m
  *   The mbuf to be checked.
@@ -526,13 +525,17 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
  *   of a packet (in this case, some fields like nb_segs are not checked)
  */
 void
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header);
+
+/* Older deprecated name for rte_mbuf_verify() */
+void __rte_deprecated
 rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
 
 /**
- * Sanity checks on a mbuf.
+ * Do consistency checks on a mbuf.
  *
- * Almost like rte_mbuf_sanity_check(), but this function gives the reason
- * if corruption is detected rather than panic.
+ * Check the consistency of the given mbuf and if not valid
+ * return the reason.
  *
  * @param m
  *   The mbuf to be checked.
@@ -551,7 +554,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
 		   const char **reason);
 
 /**
- * Sanity checks on a reinitialized mbuf in debug mode.
+ * Do checks on a reinitialized mbuf in debug mode.
  *
  * Check the consistency of the given reinitialized mbuf.
  * The function will cause a panic if corruption is detected.
@@ -563,16 +566,16 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
  *   The mbuf to be checked.
  */
 static __rte_always_inline void
-__rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
+__rte_mbuf_raw_verify(__rte_unused const struct rte_mbuf *m)
 {
 	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
 	RTE_ASSERT(m->next == NULL);
 	RTE_ASSERT(m->nb_segs == 1);
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 }
 
 /** For backwards compatibility. */
-#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
+#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_verify(m)
 
 /**
  * Allocate an uninitialized mbuf from mempool *mp*.
@@ -599,7 +602,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 
 	if (rte_mempool_get(mp, (void **)&m) < 0)
 		return NULL;
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_verify(m);
 	return m;
 }
 
@@ -622,7 +625,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
 {
 	RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
 		  (!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_verify(m);
 	rte_mempool_put(m->pool, m);
 }
 
@@ -886,7 +889,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	rte_pktmbuf_reset_headroom(m);
 
 	m->data_len = 0;
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 }
 
 /**
@@ -942,22 +945,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
 	switch (count % 4) {
 	case 0:
 		while (idx != count) {
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_verify(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 3:
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_verify(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 2:
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_verify(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 1:
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_verify(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
@@ -1185,8 +1188,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
 	mi->pkt_len = mi->data_len;
 	mi->nb_segs = 1;
 
-	__rte_mbuf_sanity_check(mi, 1);
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(mi, 1);
+	__rte_mbuf_verify(m, 0);
 }
 
 /**
@@ -1341,7 +1344,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 static __rte_always_inline struct rte_mbuf *
 rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 
 	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
 
@@ -1412,7 +1415,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
 	struct rte_mbuf *m_next;
 
 	if (m != NULL)
-		__rte_mbuf_sanity_check(m, 1);
+		__rte_mbuf_verify(m, 1);
 
 	while (m != NULL) {
 		m_next = m->next;
@@ -1493,7 +1496,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
  */
 static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	do {
 		rte_mbuf_refcnt_update(m, v);
@@ -1510,7 +1513,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
  */
 static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 	return m->data_off;
 }
 
@@ -1524,7 +1527,7 @@ static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
  */
 static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 	return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
 			  m->data_len);
 }
@@ -1539,7 +1542,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
  */
 static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 	while (m->next != NULL)
 		m = m->next;
 	return m;
@@ -1583,7 +1586,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
 					uint16_t len)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	if (unlikely(len > rte_pktmbuf_headroom(m)))
 		return NULL;
@@ -1618,7 +1621,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
 	void *tail;
 	struct rte_mbuf *m_last;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	m_last = rte_pktmbuf_lastseg(m);
 	if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
@@ -1646,7 +1649,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
  */
 static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	if (unlikely(len > m->data_len))
 		return NULL;
@@ -1678,7 +1681,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
 {
 	struct rte_mbuf *m_last;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	m_last = rte_pktmbuf_lastseg(m);
 	if (unlikely(len > m_last->data_len))
@@ -1700,7 +1703,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
  */
 static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 	return m->nb_segs == 1;
 }
 
diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
index ed486ed14ec7..f134946f3d8d 100644
--- a/lib/mbuf/version.map
+++ b/lib/mbuf/version.map
@@ -31,6 +31,7 @@ DPDK_23 {
 	rte_mbuf_set_platform_mempool_ops;
 	rte_mbuf_set_user_mempool_ops;
 	rte_mbuf_user_mempool_ops;
+	rte_mbuf_verify;
 	rte_pktmbuf_clone;
 	rte_pktmbuf_copy;
 	rte_pktmbuf_dump;
-- 
2.39.2


^ permalink raw reply	[relevance 2%]

* [PATCH V10] ethdev: fix one address occupies two entries in MAC addrs
      2023-05-19  3:00  4% ` [PATCH V9] " Huisong Li
@ 2023-05-19  9:31  4% ` Huisong Li
  2 siblings, 0 replies; 200+ results
From: Huisong Li @ 2023-05-19  9:31 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, bruce.richardson, andrew.rybchenko,
	liudongdong3, liuyonglong, fengchengwen, lihuisong

The dev->data->mac_addrs[0] will be changed to a new MAC address when
applications modify the default MAC address by .mac_addr_set(). However,
if the new default one has been added as a non-default MAC address by
.mac_addr_add(), the .mac_addr_set() does not check this address.
As a result, this MAC address occupies two entries in the list. Like:
add(MAC1)
add(MAC2)
add(MAC3)
add(MAC4)
set_default(MAC3)
default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
Note: MAC3 occupies two entries.

But .mac_addr_set() cannot remove it implicitly, as that would shrink
the MAC address list.
So this patch adds a check on whether the new default address was already
in the list and if so requires the user to remove it first.

In addition, this patch documents the position of the default MAC address
and the uniqueness of addresses in the list.
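
A minimal usage sketch (not part of the patch): with this check in
place, an application that wants an already-added address to become the
default has to remove it first, otherwise -EEXIST is returned:

	/* hypothetical flow, assuming "mac3" was added earlier with
	 * rte_eth_dev_mac_addr_add(port_id, &mac3, 0)
	 */
	ret = rte_eth_dev_default_mac_addr_set(port_id, &mac3);
	if (ret == -EEXIST) {
		/* remove the non-default entry, then retry */
		rte_eth_dev_mac_addr_remove(port_id, &mac3);
		ret = rte_eth_dev_default_mac_addr_set(port_id, &mac3);
	}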

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
v10: add '-EEXIST' error type case under @return.
v9: request user to remove the address instead of doing it implicitly in
    .mac_addr_set() API.
v8: fix some comments.
v7: add announcement in the release notes and document this behavior.
v6: fix commit log and some code comments.
v5:
 - merge the second patch into the first patch.
 - add error log when rollback failed.
v4:
  - fix broken in the patchwork
v3:
  - first explicitly remove the non-default MAC, then set default one.
  - document default and non-default MAC address
v2:
  - fixed commit log.
---
 doc/guides/rel_notes/release_23_07.rst |  5 +++++
 lib/ethdev/ethdev_driver.h             |  6 +++++-
 lib/ethdev/rte_ethdev.c                | 10 ++++++++++
 lib/ethdev/rte_ethdev.h                |  4 ++++
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index 4ffef85d74..7c624d8315 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -96,6 +96,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+   * ethdev: ensured all entries in the MAC address list are unique.
+     When setting a default MAC address with the function
+     ``rte_eth_dev_default_mac_addr_set``,
+     the default one needs to be removed by the user if it was already
+     in the list.
 
 ABI Changes
 -----------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 2c9d615fb5..367c0c4878 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -117,7 +117,11 @@ struct rte_eth_dev_data {
 
 	uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation failures */
 
-	/** Device Ethernet link address. @see rte_eth_dev_release_port() */
+	/**
+	 * Device Ethernet link addresses.
+	 * All entries are unique.
+	 * The first entry (index zero) is the default address.
+	 */
 	struct rte_ether_addr *mac_addrs;
 	/** Bitmap associating MAC addresses to pools */
 	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..d46e74504e 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4898,6 +4898,7 @@ int
 rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
 {
 	struct rte_eth_dev *dev;
+	int index;
 	int ret;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
@@ -4916,6 +4917,15 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
 	if (*dev->dev_ops->mac_addr_set == NULL)
 		return -ENOTSUP;
 
+	/* Keep address unique in dev->data->mac_addrs[]. */
+	index = eth_dev_get_mac_addr_index(port_id, addr);
+	if (index > 0) {
+		RTE_ETHDEV_LOG(ERR,
+			"New default address for port %u was already in the address list. Please remove it first.\n",
+			port_id);
+		return -EEXIST;
+	}
+
 	ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
 	if (ret < 0)
 		return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..fe8f7466c8 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4381,6 +4381,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
 
 /**
  * Set the default MAC address.
+ * It replaces the address at index 0 of the MAC address list.
+ * If the address was already in the MAC address list,
+ * please remove it first.
  *
  * @param port_id
  *   The port identifier of the Ethernet device.
@@ -4391,6 +4394,7 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
  *   - (-ENOTSUP) if hardware doesn't support.
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if MAC address is invalid.
+ *   - (-EEXIST) if MAC address was already in the address list.
  */
 int rte_eth_dev_default_mac_addr_set(uint16_t port_id,
 		struct rte_ether_addr *mac_addr);
-- 
2.33.0


^ permalink raw reply	[relevance 4%]

* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
  2023-04-24 22:41  3%       ` Thomas Monjalon
@ 2023-05-19  8:07  4%         ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-19  8:07 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Stephen Hemminger, Nithin Dabilpuram, Akhil Goyal, jerinj, dev,
	Morten Brørup, techboard

On Tue, Apr 25, 2023 at 4:11 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 18/04/2023 10:33, Jerin Jacob:
> > On Tue, Apr 11, 2023 at 11:36 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> > >
> > > On Tue, 11 Apr 2023 15:34:07 +0530
> > > Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:
> > >
> > > > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > > > index 4bacf9fcd9..866cd4e8ee 100644
> > > > --- a/lib/security/rte_security.h
> > > > +++ b/lib/security/rte_security.h
> > > > @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
> > > >        */
> > > >       uint32_t ip_reassembly_en : 1;
> > > >
> > > > +     /** Enable out of place processing on inline inbound packets.
> > > > +      *
> > > > +      * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> > > > +      *      inbound SA if supported by driver. PMD need to register mbuf
> > > > +      *      dynamic field using rte_security_oop_dynfield_register()
> > > > +      *      and security session creation would fail if dynfield is not
> > > > +      *      registered successfully.
> > > > +      * * 0: Disable OOP processing for this session (default).
> > > > +      */
> > > > +     uint32_t ingress_oop : 1;
> > > > +
> > > >       /** Reserved bit fields for future extension
> > > >        *
> > > >        * User should ensure reserved_opts is cleared as it may change in
> > > > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > > >        *
> > > >        * Note: Reduce number of bits in reserved_opts for every new option.
> > > >        */
> > > > -     uint32_t reserved_opts : 17;
> > > > +     uint32_t reserved_opts : 16;
> > > >  };
> > >
> > > NAK
> > > Let me repeat the reserved bit rant. YAGNI
> > >
> > > Reserved space is not usable without ABI breakage unless the existing
> > > code enforces that reserved space has to be zero.
> > >
> > > Just saying "User should ensure reserved_opts is cleared" is not enough.
> >
> > Yes. I think, we need to enforce to have _init functions for the
> > structures which is using reserved filed.
> >
> > On the same note on YAGNI, I am wondering why NOT introduce
> > RTE_NEXT_ABI marco kind of scheme to compile out ABI breaking changes.
> > By keeping RTE_NEXT_ABI disable by default, enable explicitly if user
> > wants it to avoid waiting for one year any ABI breaking changes.
> > There are a lot of "fixed appliance" customers (not OS distribution
> > driven customer) they are willing to recompile DPDK for new feature.
> > What we are loosing with this scheme?
>
> RTE_NEXT_ABI is described in the ABI policy.
> We are not doing it currently, but I think we could
> when it is not too much complicate in the code.
>
> The only problems I see are:
> - more #ifdef clutter
> - 2 binary versions to test
> - CI and checks must handle RTE_NEXT_ABI version

I think, we have two buckets of ABI breakages via RTE_NEXT_ABI

1) Changes that introduce compilation failures, like adding a new
argument to an API or changing an API name, etc.
2) Structure size changes which won't affect the compilation but break
the ABI for shared library usage.

I think, (1) is very disruptive, and I have not seen such changes
recently. I think, we should avoid (1) for non-XX.11 releases (or two-
or three-year cycles if we decide on that path).

The (2) cases are very common due to the fact that HW features are
evolving. I think, to address (2), we have two options
a) Have reserved fields and an _init() function to initialize the structures
b) Follow YAGNI style and introduce RTE_NEXT_ABI for structure size change.

The above concerns[1] can be greatly reduced with option (b) or option (a).

[1]
 1) more #ifdef clutter
For option (a) this is not needed; for option (b) the clutter will be
limited, it will be around the structure which adds the new field and
around the FULL block where new functions are added (not inside the
functions).

2) 2 binary versions to test
For option (a) this is not needed; for option (b) it is limited, as
only the new feature needs to be tested with the second binary (rather
than not adding the new feature at all).

 3) CI and checks must handle RTE_NEXT_ABI version

I think, it is cheap to add this, at least for compilation test.

IMO, we need to change the API break release to a 3-year kind of time
frame to have a very good end user experience,
and allow ABI related changes to get in every release and force a
_rebuild_ of shared objects in the major LTS release.

I think, if we can decide (a) vs (b) in this major LTS version (23.11),
then we can align the code accordingly, especially since for (a) we need
to add _init() functions.

Thoughts?
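
To make option (a) concrete, here is a rough sketch of what such an
_init() helper could look like (all names below are made up for
illustration, this is not an existing DPDK API):

	#include <string.h>

	/* hypothetical config structure with reserved space */
	struct rte_xyz_conf {
		uint32_t flag_a : 1;
		uint32_t reserved_opts : 31;	/* must stay zero today */
	};

	/* applications would call this before setting any field */
	static inline void
	rte_xyz_conf_init(struct rte_xyz_conf *conf)
	{
		memset(conf, 0, sizeof(*conf));	/* reserved space provably zero */
	}

If callers are required to go through the _init() before touching
individual fields, a later release can assign meaning to the reserved
bits without silently breaking old users.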

^ permalink raw reply	[relevance 4%]

* [PATCH V9] ethdev: fix one address occupies two entries in MAC addrs
    @ 2023-05-19  3:00  4% ` Huisong Li
  2023-05-19  9:31  4% ` [PATCH V10] " Huisong Li
  2 siblings, 0 replies; 200+ results
From: Huisong Li @ 2023-05-19  3:00 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, bruce.richardson, andrew.rybchenko,
	liudongdong3, huangdaode, fengchengwen, lihuisong

The dev->data->mac_addrs[0] will be changed to a new MAC address when
applications modify the default MAC address by .mac_addr_set(). However,
if the new default one has been added as a non-default MAC address by
.mac_addr_add(), the .mac_addr_set() does not check this address.
As a result, this MAC address occupies two entries in the list. Like:
add(MAC1)
add(MAC2)
add(MAC3)
add(MAC4)
set_default(MAC3)
default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
Note: MAC3 occupies two entries.

But .mac_addr_set() cannot remove it implicitly, as that would shrink
the MAC address list.
So this patch adds a check on whether the new default address was already
in the list and if so requires the user to remove it first.

In addition, this patch documents the position of the default MAC address
and the uniqueness of addresses in the list.

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
v9: request user to remove the address instead of doing it implicitly in
    .mac_addr_set() API.
v8: fix some comments.
v7: add announcement in the release notes and document this behavior.
v6: fix commit log and some code comments.
v5:
 - merge the second patch into the first patch.
 - add error log when rollback failed.
v4:
  - fix broken in the patchwork
v3:
  - first explicitly remove the non-default MAC, then set default one.
  - document default and non-default MAC address
v2:
  - fixed commit log.

---
 doc/guides/rel_notes/release_23_07.rst |  5 +++++
 lib/ethdev/ethdev_driver.h             |  6 +++++-
 lib/ethdev/rte_ethdev.c                | 10 ++++++++++
 lib/ethdev/rte_ethdev.h                |  3 +++
 4 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index 4ffef85d74..7c624d8315 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -96,6 +96,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+   * ethdev: ensured all entries in the MAC address list are unique.
+     When setting a default MAC address with the function
+     ``rte_eth_dev_default_mac_addr_set``,
+     the default one needs to be removed by the user if it was already
+     in the list.
 
 ABI Changes
 -----------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 2c9d615fb5..367c0c4878 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -117,7 +117,11 @@ struct rte_eth_dev_data {
 
 	uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation failures */
 
-	/** Device Ethernet link address. @see rte_eth_dev_release_port() */
+	/**
+	 * Device Ethernet link addresses.
+	 * All entries are unique.
+	 * The first entry (index zero) is the default address.
+	 */
 	struct rte_ether_addr *mac_addrs;
 	/** Bitmap associating MAC addresses to pools */
 	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..d46e74504e 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4898,6 +4898,7 @@ int
 rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
 {
 	struct rte_eth_dev *dev;
+	int index;
 	int ret;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
@@ -4916,6 +4917,15 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
 	if (*dev->dev_ops->mac_addr_set == NULL)
 		return -ENOTSUP;
 
+	/* Keep address unique in dev->data->mac_addrs[]. */
+	index = eth_dev_get_mac_addr_index(port_id, addr);
+	if (index > 0) {
+		RTE_ETHDEV_LOG(ERR,
+			"New default address for port %u was already in the address list. Please remove it first.\n",
+			port_id);
+		return -EEXIST;
+	}
+
 	ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
 	if (ret < 0)
 		return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..09b2ff9e5e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4381,6 +4381,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
 
 /**
  * Set the default MAC address.
+ * It replaces the address at index 0 of the MAC address list.
+ * If the address was already in the MAC address list,
+ * please remove it first.
  *
  * @param port_id
  *   The port identifier of the Ethernet device.
-- 
2.33.0


^ permalink raw reply	[relevance 4%]

* [PATCH v2 4/5] ethdev: add GENEVE TLV option modification support
  @ 2023-05-18 17:40  3%   ` Michael Baum
    1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-18 17:40 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add modify field support for GENEVE option fields:
 - "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
 - "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
 - "RTE_FLOW_FIELD_GENEVE_OPT_DATA"

Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to "rte_flow_action_modify_data" structure to
help specify which option to modify.

To get room for those 2 new fields, the "level" field moves to use
"uint8_t", which is more than enough for the encapsulation level.
This patch also reduces all modify field encapsulation level "fully
masked" initializations to use UINT8_MAX instead of UINT32_MAX.
This change avoids compilation warning caused by this API changing.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            | 48 +++++++++++++++++++++++++-
 doc/guides/prog_guide/rte_flow.rst     | 23 ++++++++++++
 doc/guides/rel_notes/release_23_07.rst |  3 ++
 drivers/net/mlx5/mlx5_flow_hw.c        | 22 ++++++------
 lib/ethdev/rte_flow.h                  | 48 +++++++++++++++++++++++++-
 5 files changed, 131 insertions(+), 13 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
 	"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
 	"ipv6_proto",
 	"flex_item",
-	"hash_result", NULL
+	"hash_result",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	NULL
 };
 
 static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+		.name = "dst_type_id",
+		.help = "destination field type ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+		.name = "dst_class",
+		.help = "destination field class ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     dst.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_OFFSET] = {
 		.name = "dst_offset",
 		.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+		.name = "src_type_id",
+		.help = "source field type ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+		.name = "src_class",
+		.help = "source field class ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     src.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
 		.name = "src_offset",
 		.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..ec812de335 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
 For the tag array (in case of multiple tags are supported and present)
 ``level`` translates directly into the array index.
 
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
 ``flex_handle`` is used to specify the flex item pointer which is being
 modified. ``flex_handle`` and ``level`` are mutually exclusive.
 
@@ -2967,6 +2975,17 @@ to replace the third byte of MAC address with value 0x85, application should
 specify destination width as 8, destination offset as 16, and provide immediate
 value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
 
+The ``RTE_FLOW_FIELD_GENEVE_OPT_DATA`` type supports modifying only one DW
+(32-bit word) in a single action, aligned to 32 bits. For example, to modify
+16 bits starting from offset 24, 2 different actions should be prepared. The
+first one includes ``offset=24`` and ``width=8``, and the second one includes
+``offset=32`` and ``width=8``.
+Application should provide the data in immediate value memory only for the
+single DW, even though the offset is relative to the start of the first DW.
+For example, to replace the third byte of the second DW in Geneve option data
+with value 0x85, application should specify destination width as 8,
+destination offset as 48, and provide immediate value 0xXXXX85XX.
+
 .. _table_rte_flow_action_modify_field:
 
 .. table:: MODIFY_FIELD
@@ -2994,6 +3013,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
    +-----------------+----------------------------------------------------------+
    | ``level``       | encapsulation level of a packet field or tag array index |
    +-----------------+----------------------------------------------------------+
+   | ``type``        | geneve option type                                       |
+   +-----------------+----------------------------------------------------------+
+   | ``class_id``    | geneve option class ID                                   |
+   +-----------------+----------------------------------------------------------+
    | ``flex_handle`` | flex item handle of a packet field                       |
    +-----------------+----------------------------------------------------------+
    | ``offset``      | number of bits to skip at the beginning                  |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* The ``level`` field in experimental structure
+  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7e0ee8d883..1b68a19900 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3565,7 +3565,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"immediate value, pointer and hash result cannot be used as destination");
-	if (mask_conf->dst.level != UINT32_MAX)
+	if (mask_conf->dst.level != UINT8_MAX)
 		return rte_flow_error_set(error, EINVAL,
 			RTE_FLOW_ERROR_TYPE_ACTION, action,
 			"destination encapsulation level must be fully masked");
@@ -3579,7 +3579,7 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
 				"destination field mask and template are not equal");
 	if (action_conf->src.field != RTE_FLOW_FIELD_POINTER &&
 	    action_conf->src.field != RTE_FLOW_FIELD_VALUE) {
-		if (mask_conf->src.level != UINT32_MAX)
+		if (mask_conf->src.level != UINT8_MAX)
 			return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION, action,
 				"source encapsulation level must be fully masked");
@@ -4450,7 +4450,7 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = RTE_FLOW_FIELD_VLAN_ID,
-			.level = 0xffffffff, .offset = 0xffffffff,
+			.level = 0xff, .offset = 0xffffffff,
 		},
 		.src = {
 			.field = RTE_FLOW_FIELD_VALUE,
@@ -4583,12 +4583,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -5653,7 +5653,7 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -5677,12 +5677,12 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
@@ -6009,7 +6009,7 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
@@ -6182,12 +6182,12 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
 		.operation = RTE_FLOW_MODIFY_SET,
 		.dst = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.src = {
 			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
+			.level = UINT8_MAX,
 			.offset = UINT32_MAX,
 		},
 		.width = UINT32_MAX,
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..f30d4b033f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_IPV6_PROTO,	/**< IPv6 next header. */
 	RTE_FLOW_FIELD_FLEX_ITEM,	/**< Flex item. */
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
+	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
+	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
 };
 
 /**
@@ -3788,7 +3791,50 @@ struct rte_flow_action_modify_data {
 		struct {
 			/** Encapsulation level or tag index or flex item handle. */
 			union {
-				uint32_t level;
+				struct {
+					/**
+					 * Packet encapsulation level containing
+					 * the field to modify.
+					 *
+					 * - @p 0 requests the default behavior.
+					 *   Depending on the packet type, it
+					 *   can mean outermost, innermost or
+					 *   anything in between.
+					 *
+					 *   It basically stands for the
+					 *   innermost encapsulation level
+					 *   modification can be performed on
+					 *   according to PMD and device
+					 *   capabilities.
+					 *
+					 * - @p 1 requests modification to be
+					 *   performed on the outermost packet
+					 *   encapsulation level.
+					 *
+					 * - @p 2 and subsequent values request
+					 *   modification to be performed on
+					 *   the specified inner packet
+					 *   encapsulation level, from
+					 *   outermost to innermost (lower to
+					 *   higher values).
+					 *
+					 * Values other than @p 0 are not
+					 * necessarily supported.
+					 */
+					uint8_t level;
+					/**
+					 * GENEVE option type. Relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					uint8_t type;
+					/**
+					 * GENEVE option class. Relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					rte_be16_t class_id;
+				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
 			/** Number of bits to skip from a field. */
-- 
2.25.1
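
A minimal usage sketch of the new fields (illustrative only, not part of the
patch): the level value, the GENEVE class/type values, and the assumption that
dst.type/dst.class_id select which option header is modified are all
assumptions, not confirmed driver behaviour.

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Set the type byte of a GENEVE option in the outermost encapsulation. */
static const struct rte_flow_action_modify_field set_geneve_opt_type = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_TYPE,
		.level = 1,                   /* outermost encapsulation level */
		.class_id = RTE_BE16(0x0103), /* hypothetical option class */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { 0x42 },            /* new option type value */
	},
	.width = 8,                           /* the type field is 8 bits wide */
};

static const struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
	  .conf = &set_geneve_opt_type },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};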


^ permalink raw reply	[relevance 3%]

* [PATCH v2 01/19] mbuf: replace term sanity check
  @ 2023-05-18 16:45  2%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-18 16:45 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Olivier Matz, Steven Webster, Matt Peters,
	Andrew Rybchenko

Replace rte_mbuf_sanity_check() with rte_mbuf_verify()
to match the similar macro RTE_VERIFY() in rte_debug.h

The term sanity check is on the Tier 2 list of words
that should be replaced.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 app/test/test_mbuf.c               | 30 ++++++-------
 doc/guides/prog_guide/mbuf_lib.rst |  4 +-
 drivers/net/avp/avp_ethdev.c       | 18 ++++----
 drivers/net/sfc/sfc_ef100_rx.c     |  6 +--
 drivers/net/sfc/sfc_ef10_essb_rx.c |  4 +-
 drivers/net/sfc/sfc_ef10_rx.c      |  4 +-
 drivers/net/sfc/sfc_rx.c           |  2 +-
 examples/ipv4_multicast/main.c     |  2 +-
 lib/mbuf/rte_mbuf.c                | 23 ++++++----
 lib/mbuf/rte_mbuf.h                | 71 ++++++++++++++++--------------
 lib/mbuf/version.map               |  1 +
 11 files changed, 88 insertions(+), 77 deletions(-)
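
As a quick illustration of the rename (not part of the patch), code that used
to call rte_mbuf_sanity_check() would now read as below; 'pool' is an assumed,
already created pktmbuf mempool and the error handling is a sketch only.

#include <stdio.h>
#include <rte_mbuf.h>

static int
check_one_mbuf(struct rte_mempool *pool)
{
	struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
	const char *reason;

	if (m == NULL)
		return -1;

	/* panics if the mbuf is corrupted (was rte_mbuf_sanity_check()) */
	rte_mbuf_verify(m, 1);

	/* non-panicking variant, reports the reason instead */
	if (rte_mbuf_check(m, 1, &reason) != 0)
		printf("bad mbuf: %s\n", reason);

	rte_pktmbuf_free(m);
	return 0;
}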

diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 8d8d3b9386ce..c2716dc4e5fe 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -261,8 +261,8 @@ test_one_pktmbuf(struct rte_mempool *pktmbuf_pool)
 		GOTO_FAIL("Buffer should be continuous");
 	memset(hdr, 0x55, MBUF_TEST_HDR2_LEN);
 
-	rte_mbuf_sanity_check(m, 1);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 1);
+	rte_mbuf_verify(m, 0);
 	rte_pktmbuf_dump(stdout, m, 0);
 
 	/* this prepend should fail */
@@ -1161,7 +1161,7 @@ test_refcnt_mbuf(void)
 
 #ifdef RTE_EXEC_ENV_WINDOWS
 static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
 {
 	RTE_SET_USED(pktmbuf_pool);
 	return TEST_SKIPPED;
@@ -1188,7 +1188,7 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
 		/* No need to generate a coredump when panicking. */
 		rl.rlim_cur = rl.rlim_max = 0;
 		setrlimit(RLIMIT_CORE, &rl);
-		rte_mbuf_sanity_check(buf, 1); /* should panic */
+		rte_mbuf_verify(buf, 1); /* should panic */
 		exit(0);  /* return normally if it doesn't panic */
 	} else if (pid < 0) {
 		printf("Fork Failed\n");
@@ -1202,12 +1202,12 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
 }
 
 static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
 {
 	struct rte_mbuf *buf;
 	struct rte_mbuf badbuf;
 
-	printf("Checking rte_mbuf_sanity_check for failure conditions\n");
+	printf("Checking rte_mbuf_verify for failure conditions\n");
 
 	/* get a good mbuf to use to make copies */
 	buf = rte_pktmbuf_alloc(pktmbuf_pool);
@@ -1729,7 +1729,7 @@ test_mbuf_validate_tx_offload(const char *test_name,
 		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 	m->ol_flags = ol_flags;
 	m->tso_segsz = segsize;
 	ret = rte_validate_tx_offload(m);
@@ -1936,7 +1936,7 @@ test_pktmbuf_read(struct rte_mempool *pktmbuf_pool)
 		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 
 	data = rte_pktmbuf_append(m, MBUF_TEST_DATA_LEN2);
 	if (data == NULL)
@@ -1985,7 +1985,7 @@ test_pktmbuf_read_from_offset(struct rte_mempool *pktmbuf_pool)
 
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 
 	/* prepend an ethernet header */
 	hdr = (struct ether_hdr *)rte_pktmbuf_prepend(m, hdr_len);
@@ -2130,7 +2130,7 @@ create_packet(struct rte_mempool *pktmbuf_pool,
 			GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 		if (rte_pktmbuf_pkt_len(pkt_seg) != 0)
 			GOTO_FAIL("%s: Bad packet length\n", __func__);
-		rte_mbuf_sanity_check(pkt_seg, 0);
+		rte_mbuf_verify(pkt_seg, 0);
 		/* Add header only for the first segment */
 		if (test_data->flags == MBUF_HEADER && seg == 0) {
 			hdr_len = sizeof(struct rte_ether_hdr);
@@ -2342,7 +2342,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
 		GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
 	if (rte_pktmbuf_pkt_len(m) != 0)
 		GOTO_FAIL("%s: Bad packet length\n", __func__);
-	rte_mbuf_sanity_check(m, 0);
+	rte_mbuf_verify(m, 0);
 
 	ext_buf_addr = rte_malloc("External buffer", buf_len,
 			RTE_CACHE_LINE_SIZE);
@@ -2506,8 +2506,8 @@ test_pktmbuf_ext_pinned_buffer(struct rte_mempool *std_pool)
 		GOTO_FAIL("%s: test_pktmbuf_copy(pinned) failed\n",
 			  __func__);
 
-	if (test_failing_mbuf_sanity_check(pinned_pool) < 0)
-		GOTO_FAIL("%s: test_failing_mbuf_sanity_check(pinned)"
+	if (test_failing_mbuf_verify(pinned_pool) < 0)
+		GOTO_FAIL("%s: test_failing_mbuf_verify(pinned)"
 			  " failed\n", __func__);
 
 	if (test_mbuf_linearize_check(pinned_pool) < 0)
@@ -2881,8 +2881,8 @@ test_mbuf(void)
 		goto err;
 	}
 
-	if (test_failing_mbuf_sanity_check(pktmbuf_pool) < 0) {
-		printf("test_failing_mbuf_sanity_check() failed\n");
+	if (test_failing_mbuf_verify(pktmbuf_pool) < 0) {
+		printf("test_failing_mbuf_verify() failed\n");
 		goto err;
 	}
 
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 049357c75563..0accb51a98c7 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -266,8 +266,8 @@ can be found in several of the sample applications, for example, the IPv4 Multic
 Debug
 -----
 
-In debug mode, the functions of the mbuf library perform sanity checks before any operation (such as, buffer corruption,
-bad type, and so on).
+In debug mode, the functions of the mbuf library perform consistency checks
+before any operation (checking for buffer corruption, bad type, and so on).
 
 Use Cases
 ---------
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index b2a08f563542..b402c7a2ad16 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1231,7 +1231,7 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
 
 #ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
 static inline void
-__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+__avp_dev_buffer_check(struct avp_dev *avp, struct rte_avp_desc *buf)
 {
 	struct rte_avp_desc *first_buf;
 	struct rte_avp_desc *pkt_buf;
@@ -1272,12 +1272,12 @@ __avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
 			  first_buf->pkt_len, pkt_len);
 }
 
-#define avp_dev_buffer_sanity_check(a, b) \
-	__avp_dev_buffer_sanity_check((a), (b))
+#define avp_dev_buffer_check(a, b) \
+	__avp_dev_buffer_check((a), (b))
 
 #else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
 
-#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+#define avp_dev_buffer_check(a, b) do {} while (0)
 
 #endif
 
@@ -1302,7 +1302,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	void *pkt_data;
 	unsigned int i;
 
-	avp_dev_buffer_sanity_check(avp, buf);
+	avp_dev_buffer_check(avp, buf);
 
 	/* setup the first source buffer */
 	pkt_buf = avp_dev_translate_buffer(avp, buf);
@@ -1370,7 +1370,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
 	rte_pktmbuf_pkt_len(m) = total_length;
 	m->vlan_tci = vlan_tci;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	return m;
 }
@@ -1614,7 +1614,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 	char *pkt_data;
 	unsigned int i;
 
-	__rte_mbuf_sanity_check(mbuf, 1);
+	__rte_mbuf_verify(mbuf, 1);
 
 	m = mbuf;
 	src_offset = 0;
@@ -1680,7 +1680,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
 
-	avp_dev_buffer_sanity_check(avp, buffers[0]);
+	avp_dev_buffer_check(avp, buffers[0]);
 
 	return total_length;
 }
@@ -1798,7 +1798,7 @@ avp_xmit_scattered_pkts(void *tx_queue,
 
 #ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
 	for (i = 0; i < nb_pkts; i++)
-		avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+		avp_dev_buffer_check(avp, tx_bufs[i]);
 #endif
 
 	/* send the packets */
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 16cd8524d32f..fe8920b12590 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -178,7 +178,7 @@ sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
 			struct sfc_ef100_rx_sw_desc *rxd;
 			rte_iova_t dma_addr;
 
-			__rte_mbuf_raw_sanity_check(m);
+			__rte_mbuf_raw_validate(m);
 
 			dma_addr = rte_mbuf_data_iova_default(m);
 			if (rxq->flags & SFC_EF100_RXQ_NIC_DMA_MAP) {
@@ -541,7 +541,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
 		rxq->ready_pkts--;
 
 		pkt = sfc_ef100_rx_next_mbuf(rxq);
-		__rte_mbuf_raw_sanity_check(pkt);
+		__rte_mbuf_raw_validate(pkt);
 
 		RTE_BUILD_BUG_ON(sizeof(pkt->rearm_data[0]) !=
 				 sizeof(rxq->rearm_data));
@@ -565,7 +565,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
 			struct rte_mbuf *seg;
 
 			seg = sfc_ef100_rx_next_mbuf(rxq);
-			__rte_mbuf_raw_sanity_check(seg);
+			__rte_mbuf_raw_validate(seg);
 
 			seg->data_off = RTE_PKTMBUF_HEADROOM;
 
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 78bd430363b1..de80be462a0f 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -125,7 +125,7 @@ sfc_ef10_essb_next_mbuf(const struct sfc_ef10_essb_rxq *rxq,
 	struct rte_mbuf *m;
 
 	m = (struct rte_mbuf *)((uintptr_t)mbuf + rxq->buf_stride);
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_validate(m);
 	return m;
 }
 
@@ -136,7 +136,7 @@ sfc_ef10_essb_mbuf_by_index(const struct sfc_ef10_essb_rxq *rxq,
 	struct rte_mbuf *m;
 
 	m = (struct rte_mbuf *)((uintptr_t)mbuf + idx * rxq->buf_stride);
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_validate(m);
 	return m;
 }
 
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 7be224c9c412..f6c2345d2b74 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -148,7 +148,7 @@ sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
 			struct sfc_ef10_rx_sw_desc *rxd;
 			rte_iova_t phys_addr;
 
-			__rte_mbuf_raw_sanity_check(m);
+			__rte_mbuf_raw_validate(m);
 
 			SFC_ASSERT((id & ~ptr_mask) == 0);
 			rxd = &rxq->sw_ring[id];
@@ -297,7 +297,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
 		rxd = &rxq->sw_ring[pending++ & ptr_mask];
 		m = rxd->mbuf;
 
-		__rte_mbuf_raw_sanity_check(m);
+		__rte_mbuf_raw_validate(m);
 
 		m->data_off = RTE_PKTMBUF_HEADROOM;
 		rte_pktmbuf_data_len(m) = seg_len;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 5ea98187c3b4..d9f99a9d583d 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -120,7 +120,7 @@ sfc_efx_rx_qrefill(struct sfc_efx_rxq *rxq)
 		     ++i, id = (id + 1) & rxq->ptr_mask) {
 			m = objs[i];
 
-			__rte_mbuf_raw_sanity_check(m);
+			__rte_mbuf_raw_validate(m);
 
 			rxd = &rxq->sw_desc[id];
 			rxd->mbuf = m;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 6d0a8501eff5..f39658f4e249 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -258,7 +258,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
 	hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
 	hdr->nb_segs = pkt->nb_segs + 1;
 
-	__rte_mbuf_sanity_check(hdr, 1);
+	__rte_mbuf_verify(hdr, 1);
 	return hdr;
 }
 /* >8 End of mcast_out_kt. */
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 686e797c80c4..56fb6c846df6 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -363,9 +363,9 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
 	return mp;
 }
 
-/* do some sanity checks on a mbuf: panic if it fails */
+/* do some consistency checks on an mbuf: panic if any check fails */
 void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header)
 {
 	const char *reason;
 
@@ -373,6 +373,13 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
 		rte_panic("%s\n", reason);
 }
 
+/* For ABI compatibility, to be removed in the next release */
+void
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+{
+	rte_mbuf_verify(m, is_header);
+}
+
 int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
 		   const char **reason)
 {
@@ -492,7 +499,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
 		if (unlikely(m == NULL))
 			continue;
 
-		__rte_mbuf_sanity_check(m, 1);
+		__rte_mbuf_verify(m, 1);
 
 		do {
 			m_next = m->next;
@@ -542,7 +549,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
 		return NULL;
 	}
 
-	__rte_mbuf_sanity_check(mc, 1);
+	__rte_mbuf_verify(mc, 1);
 	return mc;
 }
 
@@ -592,7 +599,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	struct rte_mbuf *mc, *m_last, **prev;
 
 	/* garbage in check */
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	/* check for request to copy at offset past end of mbuf */
 	if (unlikely(off >= m->pkt_len))
@@ -656,7 +663,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
 	}
 
 	/* garbage out check */
-	__rte_mbuf_sanity_check(mc, 1);
+	__rte_mbuf_verify(mc, 1);
 	return mc;
 }
 
@@ -667,7 +674,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	unsigned int len;
 	unsigned int nb_segs;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	fprintf(f, "dump mbuf at %p, iova=%#" PRIx64 ", buf_len=%u\n", m, rte_mbuf_iova_get(m),
 		m->buf_len);
@@ -685,7 +692,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
 	nb_segs = m->nb_segs;
 
 	while (m && nb_segs != 0) {
-		__rte_mbuf_sanity_check(m, 0);
+		__rte_mbuf_verify(m, 0);
 
 		fprintf(f, "  segment at %p, data=%p, len=%u, off=%u, refcnt=%u\n",
 			m, rte_pktmbuf_mtod(m, void *),
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459b1cc6..f3b62009accf 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -339,13 +339,13 @@ rte_pktmbuf_priv_flags(struct rte_mempool *mp)
 
 #ifdef RTE_LIBRTE_MBUF_DEBUG
 
-/**  check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
+/**  check mbuf type in debug mode */
+#define __rte_mbuf_verify(m, is_h) rte_mbuf_verify(m, is_h)
 
 #else /*  RTE_LIBRTE_MBUF_DEBUG */
 
-/**  check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
+/**  ignore mbuf checks if not in debug mode */
+#define __rte_mbuf_verify(m, is_h) do { } while (0)
 
 #endif /*  RTE_LIBRTE_MBUF_DEBUG */
 
@@ -514,10 +514,9 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
 
 
 /**
- * Sanity checks on an mbuf.
+ * Check that the mbuf is valid and panic if corrupted.
  *
- * Check the consistency of the given mbuf. The function will cause a
- * panic if corruption is detected.
+ * Acts as an assertion that the mbuf is consistent;
+ * if not, it calls rte_panic().
  *
  * @param m
  *   The mbuf to be checked.
@@ -526,13 +525,17 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
  *   of a packet (in this case, some fields like nb_segs are not checked)
  */
 void
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header);
+
+/* Older deprecated name for rte_mbuf_verify() */
+void __rte_deprecated
 rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
 
 /**
- * Sanity checks on a mbuf.
+ * Do consistency checks on an mbuf.
  *
- * Almost like rte_mbuf_sanity_check(), but this function gives the reason
- * if corruption is detected rather than panic.
+ * Check the consistency of the given mbuf and, if it is not valid,
+ * return the reason.
  *
  * @param m
  *   The mbuf to be checked.
@@ -551,7 +554,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
 		   const char **reason);
 
 /**
- * Sanity checks on a reinitialized mbuf in debug mode.
+ * Do checks on a reinitialized mbuf in debug mode.
  *
  * Check the consistency of the given reinitialized mbuf.
  * The function will cause a panic if corruption is detected.
@@ -563,16 +566,16 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
  *   The mbuf to be checked.
  */
 static __rte_always_inline void
-__rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
+__rte_mbuf_raw_validate(__rte_unused const struct rte_mbuf *m)
 {
 	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
 	RTE_ASSERT(m->next == NULL);
 	RTE_ASSERT(m->nb_segs == 1);
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 }
 
 /** For backwards compatibility. */
-#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
+#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_validate(m)
 
 /**
  * Allocate an uninitialized mbuf from mempool *mp*.
@@ -599,7 +602,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 
 	if (rte_mempool_get(mp, (void **)&m) < 0)
 		return NULL;
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_validate(m);
 	return m;
 }
 
@@ -622,7 +625,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
 {
 	RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
 		  (!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
-	__rte_mbuf_raw_sanity_check(m);
+	__rte_mbuf_raw_validate(m);
 	rte_mempool_put(m->pool, m);
 }
 
@@ -886,7 +889,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	rte_pktmbuf_reset_headroom(m);
 
 	m->data_len = 0;
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 }
 
 /**
@@ -942,22 +945,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
 	switch (count % 4) {
 	case 0:
 		while (idx != count) {
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_validate(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 3:
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_validate(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 2:
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_validate(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
 	case 1:
-			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+			__rte_mbuf_raw_validate(mbufs[idx]);
 			rte_pktmbuf_reset(mbufs[idx]);
 			idx++;
 			/* fall-through */
@@ -1185,8 +1188,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
 	mi->pkt_len = mi->data_len;
 	mi->nb_segs = 1;
 
-	__rte_mbuf_sanity_check(mi, 1);
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(mi, 1);
+	__rte_mbuf_verify(m, 0);
 }
 
 /**
@@ -1341,7 +1344,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 static __rte_always_inline struct rte_mbuf *
 rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 
 	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
 
@@ -1412,7 +1415,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
 	struct rte_mbuf *m_next;
 
 	if (m != NULL)
-		__rte_mbuf_sanity_check(m, 1);
+		__rte_mbuf_verify(m, 1);
 
 	while (m != NULL) {
 		m_next = m->next;
@@ -1493,7 +1496,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
  */
 static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	do {
 		rte_mbuf_refcnt_update(m, v);
@@ -1510,7 +1513,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
  */
 static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 	return m->data_off;
 }
 
@@ -1524,7 +1527,7 @@ static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
  */
 static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 0);
+	__rte_mbuf_verify(m, 0);
 	return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
 			  m->data_len);
 }
@@ -1539,7 +1542,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
  */
 static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 	while (m->next != NULL)
 		m = m->next;
 	return m;
@@ -1583,7 +1586,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
 static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
 					uint16_t len)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	if (unlikely(len > rte_pktmbuf_headroom(m)))
 		return NULL;
@@ -1618,7 +1621,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
 	void *tail;
 	struct rte_mbuf *m_last;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	m_last = rte_pktmbuf_lastseg(m);
 	if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
@@ -1646,7 +1649,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
  */
 static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	if (unlikely(len > m->data_len))
 		return NULL;
@@ -1678,7 +1681,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
 {
 	struct rte_mbuf *m_last;
 
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 
 	m_last = rte_pktmbuf_lastseg(m);
 	if (unlikely(len > m_last->data_len))
@@ -1700,7 +1703,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
  */
 static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
 {
-	__rte_mbuf_sanity_check(m, 1);
+	__rte_mbuf_verify(m, 1);
 	return m->nb_segs == 1;
 }
 
diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
index ed486ed14ec7..f134946f3d8d 100644
--- a/lib/mbuf/version.map
+++ b/lib/mbuf/version.map
@@ -31,6 +31,7 @@ DPDK_23 {
 	rte_mbuf_set_platform_mempool_ops;
 	rte_mbuf_set_user_mempool_ops;
 	rte_mbuf_user_mempool_ops;
+	rte_mbuf_verify;
 	rte_pktmbuf_clone;
 	rte_pktmbuf_copy;
 	rte_pktmbuf_dump;
-- 
2.39.2


^ permalink raw reply	[relevance 2%]

* Re: [PATCH v4] net/bonding: replace master/slave to main/member
  2023-05-18  8:44  1%     ` [PATCH v4] " Chaoyong He
@ 2023-05-18 15:39  3%       ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-18 15:39 UTC (permalink / raw)
  To: Chaoyong He; +Cc: dev, oss-drivers, niklas.soderlund, Long Wu, James Hershaw

On Thu, 18 May 2023 16:44:58 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:

> From: Long Wu <long.wu@corigine.com>
> 
> This patch replaces the usage of the words 'master/slave' with the more
> appropriate words 'main/member' in the bonding PMD, as well as in its
> docs and examples. The test app and testpmd were also modified to use
> the new wording.
> 
> The bonding PMD's public API was modified according to the changes
> in word:
> rte_eth_bond_8023ad_slave_info is now called
> rte_eth_bond_8023ad_member_info,
> rte_eth_bond_active_slaves_get is now called
> rte_eth_bond_active_members_get,
> rte_eth_bond_slave_add is now called
> rte_eth_bond_member_add,
> rte_eth_bond_slave_remove is now called
> rte_eth_bond_member_remove,
> rte_eth_bond_slaves_get is now called
> rte_eth_bond_members_get.
> 
> Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
> RTE_ETH_DEV_BONDED_MEMBER.
> 
> Mark the old visible APIs as deprecated and remove
> them from the ABI.
> 
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> Reviewed-by: James Hershaw <james.hershaw@corigine.com>

Since this will be an ABI change, it will have to wait for the 23.11 release.
Could you make a deprecation notice now, to foreshadow that change?

Acked-by: Stephen Hemminger <stephen@networkplumber.org>

^ permalink raw reply	[relevance 3%]

* [PATCH v4] net/bonding: replace master/slave to main/member
  2023-05-18  7:01  1%   ` [PATCH v3] " Chaoyong He
@ 2023-05-18  8:44  1%     ` Chaoyong He
  2023-05-18 15:39  3%       ` Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Chaoyong He @ 2023-05-18  8:44 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw

From: Long Wu <long.wu@corigine.com>

This patch replaces the usage of the words 'master/slave' with the more
appropriate words 'main/member' in the bonding PMD, as well as in its
docs and examples. The test app and testpmd were also modified to use
the new wording.

The bonding PMD's public API was modified according to the changes
in word:
rte_eth_bond_8023ad_slave_info is now called
rte_eth_bond_8023ad_member_info,
rte_eth_bond_active_slaves_get is now called
rte_eth_bond_active_members_get,
rte_eth_bond_slave_add is now called
rte_eth_bond_member_add,
rte_eth_bond_slave_remove is now called
rte_eth_bond_member_remove,
rte_eth_bond_slaves_get is now called
rte_eth_bond_members_get.

Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
RTE_ETH_DEV_BONDED_MEMBER.

Mark the old visible APIs as deprecated and remove
them from the ABI.

Signed-off-by: Long Wu <long.wu@corigine.com>
Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: James Hershaw <james.hershaw@corigine.com>
---
v2:
* Modify related doc.
* Add 'RTE_DEPRECATED' to related APIs.
v3:
* Fix the check warning about 'CamelCase'.
v4:
* Fix the doc compile problem.
---
 app/test-pmd/testpmd.c                        |  112 +-
 app/test-pmd/testpmd.h                        |    8 +-
 app/test/test_link_bonding.c                  | 2792 +++++++++--------
 app/test/test_link_bonding_mode4.c            |  588 ++--
 app/test/test_link_bonding_rssconf.c          |  166 +-
 doc/guides/howto/lm_bond_virtio_sriov.rst     |   24 +-
 doc/guides/nics/bnxt.rst                      |    4 +-
 doc/guides/prog_guide/img/bond-mode-1.svg     |    2 +-
 .../link_bonding_poll_mode_drv_lib.rst        |  230 +-
 drivers/net/bonding/bonding_testpmd.c         |  178 +-
 drivers/net/bonding/eth_bond_8023ad_private.h |   40 +-
 drivers/net/bonding/eth_bond_private.h        |  108 +-
 drivers/net/bonding/rte_eth_bond.h            |  126 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  372 +--
 drivers/net/bonding/rte_eth_bond_8023ad.h     |   75 +-
 drivers/net/bonding/rte_eth_bond_alb.c        |   44 +-
 drivers/net/bonding/rte_eth_bond_alb.h        |   20 +-
 drivers/net/bonding/rte_eth_bond_api.c        |  474 +--
 drivers/net/bonding/rte_eth_bond_args.c       |   32 +-
 drivers/net/bonding/rte_eth_bond_flow.c       |   54 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        | 1384 ++++----
 drivers/net/bonding/version.map               |   15 +-
 examples/bond/main.c                          |   40 +-
 lib/ethdev/rte_ethdev.h                       |    9 +-
 24 files changed, 3509 insertions(+), 3388 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f92523..d8fd87105a 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 }
 
 static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
 {
 #ifdef RTE_NET_BOND
 
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
+	portid_t member_pids[RTE_MAX_ETHPORTS];
 	struct rte_port *port;
-	int num_slaves;
-	portid_t slave_pid;
+	int num_members;
+	portid_t member_pid;
 	int i;
 
-	num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+	num_members = rte_eth_bond_members_get(bond_pid, member_pids,
 						RTE_MAX_ETHPORTS);
-	if (num_slaves < 0) {
-		fprintf(stderr, "Failed to get slave list for port = %u\n",
+	if (num_members < 0) {
+		fprintf(stderr, "Failed to get member list for port = %u\n",
 			bond_pid);
-		return num_slaves;
+		return num_members;
 	}
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		port = &ports[slave_pid];
+	for (i = 0; i < num_members; i++) {
+		member_pid = member_pids[i];
+		port = &ports[member_pid];
 		port->port_status =
 			is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
 	}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Starting a bonded port also starts all slaves under the bonded
+		 * Starting a bonded port also starts all members under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these members.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, false);
+			return change_bonding_member_port_status(port_id, false);
 	}
 
 	return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Stopping a bonded port also stops all slaves under the bonded
+		 * Stopping a bonded port also stops all members under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these members.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, true);
+			return change_bonding_member_port_status(port_id, true);
 	}
 
 	return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
 		port = &ports[pi];
 		/* Check if there is a port which is not started */
 		if ((port->port_status != RTE_PORT_STARTED) &&
-			(port->slave_flag == 0))
+			(port->member_flag == 0))
 			return 0;
 	}
 
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
 	struct rte_port *port = &ports[port_id];
 
 	if ((port->port_status != RTE_PORT_STOPPED) &&
-	    (port->slave_flag == 0))
+	    (port->member_flag == 0))
 		return 0;
 	return 1;
 }
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
 
 /*
  * Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no member is added. And its capability
+ * will be updated when add a new member device. So adding a member device need
  * to update the port configurations of bonding device.
  */
 static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
 		if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
 			continue;
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
 }
 
 static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
 {
 	struct rte_port *port;
-	portid_t slave_pid;
+	portid_t member_pid;
 	uint16_t i;
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		if (port_is_started(slave_pid) == 1) {
-			if (rte_eth_dev_stop(slave_pid) != 0)
+	for (i = 0; i < num_members; i++) {
+		member_pid = member_pids[i];
+		if (port_is_started(member_pid) == 1) {
+			if (rte_eth_dev_stop(member_pid) != 0)
 				fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
-					slave_pid);
+					member_pid);
 
-			port = &ports[slave_pid];
+			port = &ports[member_pid];
 			port->port_status = RTE_PORT_STOPPED;
 		}
 
-		clear_port_slave_flag(slave_pid);
+		clear_port_member_flag(member_pid);
 
-		/* Close slave device when testpmd quit or is killed. */
+		/* Close member device when testpmd quit or is killed. */
 		if (cl_quit == 1 || f_quit == 1)
-			rte_eth_dev_close(slave_pid);
+			rte_eth_dev_close(member_pid);
 	}
 }
 
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
 {
 	portid_t pi;
 	struct rte_port *port;
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
-	int num_slaves = 0;
+	portid_t member_pids[RTE_MAX_ETHPORTS];
+	int num_members = 0;
 
 	if (port_id_is_invalid(pid, ENABLED_WARN))
 		return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
 			flush_port_owned_resources(pi);
 #ifdef RTE_NET_BOND
 			if (port->bond_flag == 1)
-				num_slaves = rte_eth_bond_slaves_get(pi,
-						slave_pids, RTE_MAX_ETHPORTS);
+				num_members = rte_eth_bond_members_get(pi,
+						member_pids, RTE_MAX_ETHPORTS);
 #endif
 			rte_eth_dev_close(pi);
 			/*
-			 * If this port is bonded device, all slaves under the
+			 * If this port is bonded device, all members under the
 			 * device need to be removed or closed.
 			 */
-			if (port->bond_flag == 1 && num_slaves > 0)
-				clear_bonding_slave_device(slave_pids,
-							num_slaves);
+			if (port->bond_flag == 1 && num_members > 0)
+				clear_bonding_member_device(member_pids,
+							num_members);
 		}
 
 		free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
 	}
 }
 
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_member_flag(portid_t member_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 1;
+	port = &ports[member_pid];
+	port->member_flag = 1;
 }
 
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_member_flag(portid_t member_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 0;
+	port = &ports[member_pid];
+	port->member_flag = 0;
 }
 
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_member(portid_t member_pid)
 {
 	struct rte_port *port;
 	struct rte_eth_dev_info dev_info;
 	int ret;
 
-	port = &ports[slave_pid];
-	ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+	port = &ports[member_pid];
+	ret = eth_dev_info_get_print_err(member_pid, &dev_info);
 	if (ret != 0) {
 		TESTPMD_LOG(ERR,
 			"Failed to get device info for port id %d,"
-			"cannot determine if the port is a bonded slave",
-			slave_pid);
+			"cannot determine if the port is a bonded member",
+			member_pid);
 		return 0;
 	}
-	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) || (port->member_flag == 1))
 		return 1;
 	return 0;
 }
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3..7bc2f70323 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
-	uint8_t                 slave_flag : 1, /**< bonding slave port */
+	uint8_t                 member_flag : 1, /**< bonding member port */
 				bond_flag : 1, /**< port is bond device */
 				fwd_mac_swap : 1, /**< swap packet MAC before forward */
 				update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
 void dev_set_link_up(portid_t pid);
 void dev_set_link_down(portid_t pid);
 void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_member_flag(portid_t member_pid);
+void clear_port_member_flag(portid_t member_pid);
+uint8_t port_is_bonding_member(portid_t member_pid);
 
 int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
 		     enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2..82daf037f1 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
 #define INVALID_BONDING_MODE	(-1)
 
 
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
 uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
 
 struct link_bonding_unittest_params {
 	int16_t bonded_port_id;
-	int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
-	uint16_t bonded_slave_count;
+	int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+	uint16_t bonded_member_count;
 	uint8_t bonding_mode;
 
 	uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
 
 	struct rte_mempool *mbuf_pool;
 
-	struct rte_ether_addr *default_slave_mac;
+	struct rte_ether_addr *default_member_mac;
 	struct rte_ether_addr *default_bonded_mac;
 
 	/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
 
 static struct link_bonding_unittest_params default_params  = {
 	.bonded_port_id = -1,
-	.slave_port_ids = { -1 },
-	.bonded_slave_count = 0,
+	.member_port_ids = { -1 },
+	.bonded_member_count = 0,
 	.bonding_mode = BONDING_MODE_ROUND_ROBIN,
 
 	.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params  = {
 
 	.mbuf_pool = NULL,
 
-	.default_slave_mac = (struct rte_ether_addr *)slave_mac,
+	.default_member_mac = (struct rte_ether_addr *)member_mac,
 	.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
 
 	.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
 	return 0;
 }
 
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int members_initialized;
+static int mac_members_initialized;
 
 static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
 test_setup(void)
 {
 	int i, nb_mbuf_per_pool;
-	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
 
 	/* Allocate ethernet packet header with space for VLAN header */
 	if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
 	}
 
 	/* Create / Initialize virtual eth devs */
-	if (!slaves_initialized) {
+	if (!members_initialized) {
 		for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
@@ -243,16 +243,16 @@ test_setup(void)
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
 
-			test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+			test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
 					mac_addr, rte_socket_id(), 1);
-			TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+			TEST_ASSERT(test_params->member_port_ids[i] >= 0,
 					"Failed to create virtual virtual ethdev %s", pmd_name);
 
 			TEST_ASSERT_SUCCESS(configure_ethdev(
-					test_params->slave_port_ids[i], 1, 0),
+					test_params->member_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s", pmd_name);
 		}
-		slaves_initialized = 1;
+		members_initialized = 1;
 	}
 
 	return 0;
@@ -261,9 +261,9 @@ test_setup(void)
 static int
 test_create_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	/* Don't try to recreate bonded device if re-running test suite*/
 	if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
 			test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
 			test_params->bonded_port_id, test_params->bonding_mode);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of members %d is great than expected %d.",
+			current_member_count, 0);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of active members %d is great than expected %d.",
+			current_member_count, 0);
 
 	return 0;
 }
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
 }
 
 static int
-test_add_slave_to_bonded_device(void)
+test_add_member_to_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave (%d) to bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count]),
+			"Failed to add member (%d) to bonded port (%d).",
+			test_params->member_port_ids[test_params->bonded_member_count],
 			test_params->bonded_port_id);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
-			"Number of slaves (%d) is greater than expected (%d).",
-			current_slave_count, test_params->bonded_slave_count + 1);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
+			"Number of members (%d) is greater than expected (%d).",
+			current_member_count, test_params->bonded_member_count + 1);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-					"Number of active slaves (%d) is not as expected (%d).\n",
-					current_slave_count, 0);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+					"Number of active members (%d) is not as expected (%d).\n",
+					current_member_count, 0);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_member_count++;
 
 	return 0;
 }
 
 static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_member_to_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
+			test_params->member_port_ids[test_params->bonded_member_count]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
+			test_params->member_port_ids[test_params->bonded_member_count]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
 
 
 static int
-test_remove_slave_from_bonded_device(void)
+test_remove_member_from_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 	struct rte_ether_addr read_mac_addr, *mac_addr;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]),
-			"Failed to remove slave %d from bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count-1]),
+			"Failed to remove member %d from bonded port (%d).",
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			test_params->bonded_port_id);
 
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
-			"Number of slaves (%d) is great than expected (%d).\n",
-			current_slave_count, test_params->bonded_slave_count - 1);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
+			"Number of members (%d) is great than expected (%d).\n",
+			current_member_count, test_params->bonded_member_count - 1);
 
 
-	mac_addr = (struct rte_ether_addr *)slave_mac;
+	mac_addr = (struct rte_ether_addr *)member_mac;
 	mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
-			test_params->bonded_slave_count-1;
+			test_params->bonded_member_count-1;
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			&read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->member_port_ids[test_params->bonded_member_count-1]);
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->member_port_ids[test_params->bonded_member_count-1]);
 
 	virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
 			0);
 
-	test_params->bonded_slave_count--;
+	test_params->bonded_member_count--;
 
 	return 0;
 }
 
 static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_member_from_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+	TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
 			test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+			test_params->member_port_ids[test_params->bonded_member_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
-			test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
+			test_params->member_port_ids[0],
+			test_params->member_port_ids[test_params->bonded_member_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
 static int bonded_id = 2;
 
 static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_member_to_bonded_device(void)
 {
-	int port_id, current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int port_id, current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 	char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-	test_add_slave_to_bonded_device();
+	test_add_member_to_bonded_device();
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 1,
-			"Number of slaves (%d) is not that expected (%d).",
-			current_slave_count, 1);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 1,
+			"Number of members (%d) is not that expected (%d).",
+			current_member_count, 1);
 
 	snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
 
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
 			rte_socket_id());
 	TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
 
-	TEST_ASSERT(rte_eth_bond_slave_add(port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+	TEST_ASSERT(rte_eth_bond_member_add(port_id,
+			test_params->member_port_ids[test_params->bonded_member_count - 1])
 			< 0,
-			"Added slave (%d) to bonded port (%d) unexpectedly.",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			"Added member (%d) to bonded port (%d) unexpectedly.",
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			port_id);
 
-	return test_remove_slave_from_bonded_device();
+	return test_remove_member_from_bonded_device();
 }
 
 
 static int
-test_get_slaves_from_bonded_device(void)
+test_get_members_from_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+			"Failed to add member to bonded device");
 
 	/* Invalid port id */
-	current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+	current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	/* Invalid slaves pointer */
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+	/* Invalid members pointer */
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
 			NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_member_count < 0,
+			"Invalid member array unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
+	current_member_count = rte_eth_bond_active_members_get(
 			test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_member_count < 0,
+			"Invalid member array unexpectedly succeeded");
 
 	/* non bonded device*/
-	current_slave_count = rte_eth_bond_slaves_get(
-			test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_members_get(
+			test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->slave_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->member_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-			"Failed to remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+			"Failed to remove members from bonded device");
 
 	return 0;
 }
 
 
 static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_members_to_from_bonded_device(void)
 {
 	int i;
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device");
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device");
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"Failed to remove slaves from bonded device");
+		TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+				"Failed to remove members from bonded device");
 
 	return 0;
 }
 
 static void
-enable_bonded_slaves(void)
+enable_bonded_members(void)
 {
 	int i;
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
 				1);
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->member_port_ids[i], 1);
 	}
 }
 
@@ -556,34 +556,36 @@ test_start_bonded_device(void)
 {
 	struct rte_eth_link link_status;
 
-	int current_slave_count, current_bonding_mode, primary_port;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count, current_bonding_mode, primary_port;
+	uint16_t members[RTE_MAX_ETHPORTS];
 	int retval;
 
-	/* Add slave to bonded device*/
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	/* Add member to bonded device*/
+	TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+			"Failed to add member to bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
-	/* Change link status of virtual pmd so it will be added to the active
-	 * slave list of the bonded device*/
+	/*
+	 * Change link status of virtual pmd so it will be added to the active
+	 * member list of the bonded device.
+	 */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+			test_params->member_port_ids[test_params->bonded_member_count-1], 1);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of active members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
 	current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
 	TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +593,9 @@ test_start_bonded_device(void)
 			current_bonding_mode, test_params->bonding_mode);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port (%d) is not expected value (%d).",
-			primary_port, test_params->slave_port_ids[0]);
+			primary_port, test_params->member_port_ids[0]);
 
 	retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
 	TEST_ASSERT(retval >= 0,
@@ -609,8 +611,8 @@ test_start_bonded_device(void)
 static int
 test_stop_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	struct rte_eth_link link_status;
 	int retval;
@@ -627,29 +629,29 @@ test_stop_bonded_device(void)
 			"Bonded port (%d) status (%d) is not expected value (%d).",
 			test_params->bonded_port_id, link_status.link_status, 0);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, 0);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of active members (%d) is not expected value (%d).",
+			current_member_count, 0);
 
 	return 0;
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	/* Clean up and remove slaves from bonded device */
+	/* Clean up and remove members from bonded device */
 	free_virtualpmd_tx_queue();
-	while (test_params->bonded_slave_count > 0)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"test_remove_slave_from_bonded_device failed");
+	while (test_params->bonded_member_count > 0)
+		TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+				"test_remove_member_from_bonded_device failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -681,10 +683,10 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
 				bonding_modes[i]),
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->member_port_ids[0]);
 
 		TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 				bonding_modes[i]),
@@ -704,26 +706,26 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+		bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
 		TEST_ASSERT(bonding_mode < 0,
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->member_port_ids[0]);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_set_primary_slave(void)
+test_set_primary_member(void)
 {
 	int i, j, retval;
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr *expected_mac_addr;
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.");
+	/* Add 4 members to bonded device */
+	for (i = test_params->bonded_member_count; i < 4; i++)
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 			BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +734,34 @@ test_set_primary_slave(void)
 
 	/* Invalid port ID */
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
-			test_params->slave_port_ids[i]),
+			test_params->member_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
-			test_params->slave_port_ids[i]),
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
+			test_params->member_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
-	/* Set slave as primary
-	 * Verify slave it is now primary slave
-	 * Verify that MAC address of bonded device is that of primary slave
-	 * Verify that MAC address of all bonded slaves are that of primary slave
+	/* Set member as primary
+	 * Verify that the member is now the primary member
+	 * Verify that MAC address of bonded device is that of primary member
+	 * Verify that MAC address of all bonded members are that of primary member
 	 */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-				test_params->slave_port_ids[i]),
+				test_params->member_port_ids[i]),
 				"Failed to set bonded port (%d) primary port to (%d)",
-				test_params->bonded_port_id, test_params->slave_port_ids[i]);
+				test_params->bonded_port_id, test_params->member_port_ids[i]);
 
 		retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
 		TEST_ASSERT(retval >= 0,
 				"Failed to read primary port from bonded port (%d)\n",
 					test_params->bonded_port_id);
 
-		TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+		TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
 				"Bonded port (%d) primary port (%d) not expected value (%d)\n",
 				test_params->bonded_port_id, retval,
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 
 		/* stop/start bonded eth dev to apply new MAC */
 		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +772,14 @@ test_set_primary_slave(void)
 				"Failed to start bonded port %d",
 				test_params->bonded_port_id);
 
-		expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+		expected_mac_addr = (struct rte_ether_addr *)&member_mac;
 		expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Check primary slave MAC */
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		/* Check primary member MAC */
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
@@ -789,16 +792,17 @@ test_set_primary_slave(void)
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
 
-		/* Check other slaves MACs */
+		/* Check other members MACs */
 		for (j = 0; j < 4; j++) {
 			if (j != i) {
-				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
+						test_params->member_port_ids[j],
 						&read_mac_addr),
 						"Failed to get mac address (port %d)",
-						test_params->slave_port_ids[j]);
+						test_params->member_port_ids[j]);
 				TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 						sizeof(read_mac_addr)),
-						"slave port mac address not set to that of primary "
+						"member port mac address not set to that of primary "
 						"port");
 			}
 		}
@@ -809,14 +813,14 @@ test_set_primary_slave(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
 			"read primary port from expectedly");
 
-	/* Test with slave port */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+	/* Test with member port */
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
 			"read primary port from expectedly\n");
 
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to stop and remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+			"Failed to stop and remove members from bonded device");
 
-	/* No slaves  */
+	/* No members  */
 	TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id)  < 0,
 			"read primary port from expectedly\n");
 
@@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
 
 	/* Non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
-			test_params->slave_port_ids[0],	mac_addr),
+			test_params->member_port_ids[0], mac_addr),
 			"Expected call to failed as invalid port specified.");
 
 	/* NULL MAC address */
@@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
 			"Failed to set MAC address on bonded port (%d)",
 			test_params->bonded_port_id);
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.\n");
+	/* Add 4 members to bonded device */
+	for (i = test_params->bonded_member_count; i < 4; i++) {
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device.\n");
 	}
 
 	/* Check bonded MAC */
@@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port");
 
-	/* Check other slaves MACs */
+	/* Check other members MACs */
 	for (i = 0; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port mac address not set to that of primary port");
+				"member port mac address not set to that of primary port");
 	}
 
 	/* test resetting mac address on bonded device */
@@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
 			test_params->bonded_port_id);
 
 	TEST_ASSERT_FAIL(
-			rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+			rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
 			"Reset MAC address on bonded port (%d) unexpectedly",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* test resetting mac address on bonded device with no slaves */
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to remove slaves and stop bonded device");
+	/* test resetting mac address on bonded device with no members */
+	TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+			"Failed to remove members and stop bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
 			"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
 	return 0;
 }
 
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
 
 static int
 test_set_bonded_port_initialization_mac_assignment(void)
 {
-	int i, slave_count;
+	int i, member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	static int bonded_port_id = -1;
-	static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+	static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
 
-	struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+	struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
 
 	/* Initialize default values for MAC addresses */
-	memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
-	memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+	memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
+	memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
 
 	/*
-	 * 1. a - Create / configure  bonded / slave ethdevs
+	 * 1. a - Create / configure  bonded / member ethdevs
 	 */
 	if (bonded_port_id == -1) {
 		bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
 					"Failed to configure bonded ethdev");
 	}
 
-	if (!mac_slaves_initialized) {
-		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	if (!mac_members_initialized) {
+		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-			slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+			member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 				i + 100;
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
-				"eth_slave_%d", i);
+				"eth_member_%d", i);
 
-			slave_port_ids[i] = virtual_ethdev_create(pmd_name,
-					&slave_mac_addr, rte_socket_id(), 1);
+			member_port_ids[i] = virtual_ethdev_create(pmd_name,
+					&member_mac_addr, rte_socket_id(), 1);
 
-			TEST_ASSERT(slave_port_ids[i] >= 0,
-					"Failed to create slave ethdev %s",
+			TEST_ASSERT(member_port_ids[i] >= 0,
+					"Failed to create member ethdev %s",
 					pmd_name);
 
-			TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+			TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s",
 					pmd_name);
 		}
-		mac_slaves_initialized = 1;
+		mac_members_initialized = 1;
 	}
 
 
 	/*
-	 * 2. Add slave ethdevs to bonded device
+	 * 2. Add member ethdevs to bonded device
 	 */
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to add slave (%d) to bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
+				member_port_ids[i]),
+				"Failed to add member (%d) to bonded port (%d).",
+				member_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	member_count = rte_eth_bond_members_get(bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
-			"Number of slaves (%d) is not as expected (%d)",
-			slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
+			"Number of members (%d) is not as expected (%d)",
+			member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
 
 
 	/*
@@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
 
 
 	/* 4. a - Start bonded ethdev
-	 *    b - Enable slave devices
-	 *    c - Verify bonded/slaves ethdev MAC addresses
+	 *    b - Enable member devices
+	 *    c - Verify bonded/members ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
 			"Failed to start bonded pmd eth device %d.",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				slave_port_ids[i], 1);
+				member_port_ids[i], 1);
 	}
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
+			member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 
 	/* 7. a - Change primary port
 	 *    b - Stop / Start bonded port
-	 *    d - Verify slave ethdev MAC addresses
+	 *    d - Verify member ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
-			slave_port_ids[2]),
+			member_port_ids[2]),
 			"failed to set primary port on bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
+			member_port_ids[2]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 	/* 6. a - Stop bonded ethdev
-	 *    b - remove slave ethdevs
-	 *    c - Verify slave ethdevs MACs are restored
+	 *    b - remove member ethdevs
+	 *    c - Verify member ethdevs MACs are restored
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
 			"Failed to stop bonded port %u",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to remove slave %d from bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
+				member_port_ids[i]),
+				"Failed to remove member %d from bonded port (%d).",
+				member_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	member_count = rte_eth_bond_members_get(bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of slaves (%d) is great than expected (%d).",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(member_count, 0,
+			"Number of members (%d) is great than expected (%d).",
+			member_count, 0);
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 	return 0;
 }
 
 
 static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
-		uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
+		uint16_t number_of_members, uint8_t enable_member)
 {
 	/* Configure bonded device */
 	TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
 			bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
-			"with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
-			number_of_slaves);
-
-	/* Add slaves to bonded device */
-	while (number_of_slaves > test_params->bonded_slave_count)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave (%d to  bonding port (%d).",
-				test_params->bonded_slave_count - 1,
+			"with (%d) members.", test_params->bonded_port_id, bonding_mode,
+			number_of_members);
+
+	/* Add members to bonded device */
+	while (number_of_members > test_params->bonded_member_count)
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member (%d to  bonding port (%d).",
+				test_params->bonded_member_count - 1,
 				test_params->bonded_port_id);
 
 	/* Set link bonding mode  */
@@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
-	if (enable_slave)
-		enable_bonded_slaves();
+	if (enable_member)
+		enable_bonded_members();
 
 	return 0;
 }
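
For reference, the helper above is the setup path that most of the tests below go through. A minimal sketch of the same setup done directly against the renamed bonding API follows; sketch_bonded_setup, "net_bonding_sketch" and the two member port ids are placeholders and error handling is trimmed:

	static int
	sketch_bonded_setup(void)
	{
		uint16_t member_id_0 = 1, member_id_1 = 2;	/* placeholder port ids */
		uint16_t members[RTE_MAX_ETHPORTS];
		int bonded_id;

		/* Create a round-robin bonded port and attach two member ports. */
		bonded_id = rte_eth_bond_create("net_bonding_sketch",
				BONDING_MODE_ROUND_ROBIN, rte_socket_id());
		if (bonded_id < 0)
			return -1;

		if (rte_eth_bond_member_add(bonded_id, member_id_0) != 0 ||
				rte_eth_bond_member_add(bonded_id, member_id_1) != 0)
			return -1;

		/* members_get() reports everything attached; the active list only
		 * fills once the port is started and the member links come up. */
		return rte_eth_bond_members_get(bonded_id, members,
				RTE_MAX_ETHPORTS);
	}
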
 
 static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_member_after_bonded_device_started(void)
 {
 	int i;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
-			"Failed to add slaves to bonded device");
+			"Failed to add members to bonded device");
 
-	/* Enabled slave devices */
-	for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+	/* Enabled member devices */
+	for (i = 0; i < test_params->bonded_member_count + 1; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->member_port_ids[i], 1);
 	}
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave to bonded port.\n");
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count]),
+			"Failed to add member to bonded port.\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count]);
+			test_params->member_port_ids[test_params->bonded_member_count]);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_member_count++;
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT	4
+#define TEST_STATUS_INTERRUPT_MEMBER_COUNT	4
 #define TEST_LSC_WAIT_TIMEOUT_US	500000
 
 int test_lsc_interrupt_count;
@@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
 static int
 test_status_interrupt(void)
 {
-	int slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	/* initialized bonding device with T slaves */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* initialized bonding device with T members */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 1,
-			TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+			TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	test_lsc_interrupt_count = 0;
@@ -1253,27 +1258,27 @@ test_status_interrupt(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d)",
+			member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
 
-	/* Bring all 4 slaves link status to down and test that we have received a
+	/* Bring all 4 members link status to down and test that we have received a
 	 * lsc interrupts */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->member_port_ids[2], 0);
 
 	TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
 			"Received a link status change interrupt unexpectedly");
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1281,18 +1286,18 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(member_count, 0,
+			"Number of active members (%d) is not as expected (%d)",
+			member_count, 0);
 
-	/* bring one slave port up so link status will change */
+	/* bring one member port up so link status will change */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->member_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1301,12 +1306,12 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	/* Verify that calling the same slave lsc interrupt doesn't cause another
+	/* Verify that calling the same member lsc interrupt doesn't cause another
 	 * lsc interrupt from bonded device */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->member_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
 			"received unexpected interrupt");
@@ -1320,8 +1325,8 @@ test_status_interrupt(void)
 				RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 				&test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
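
The interrupt test above follows the stock ethdev event flow: register an LSC callback on the bonded port, toggle member links through the virtual ethdev, then wait for the callback to fire. A minimal sketch of that flow, with the handler cut down to a counter; sketch_lsc_cb, sketch_watch_lsc and lsc_hits are placeholder names:

	static int lsc_hits;	/* bumped once per bonded-port link change */

	static int
	sketch_lsc_cb(uint16_t port_id, enum rte_eth_event_type type,
			void *cb_arg, void *ret_param)
	{
		RTE_SET_USED(port_id);
		RTE_SET_USED(cb_arg);
		RTE_SET_USED(ret_param);
		if (type == RTE_ETH_EVENT_INTR_LSC)
			lsc_hits++;
		return 0;
	}

	static void
	sketch_watch_lsc(uint16_t bonded_id)
	{
		rte_eth_dev_callback_register(bonded_id, RTE_ETH_EVENT_INTR_LSC,
				sketch_lsc_cb, NULL);
		/* toggle member links via
		 * virtual_ethdev_simulate_link_status_interrupt() and poll
		 * lsc_hits here, then drop the callback with the same triple */
		rte_eth_dev_callback_unregister(bonded_id, RTE_ETH_EVENT_INTR_LSC,
				sketch_lsc_cb, NULL);
	}
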
@@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size <= MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)burst_size / test_params->bonded_slave_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				(uint64_t)burst_size / test_params->bonded_member_count,
+				"Member Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-				burst_size / test_params->bonded_slave_count);
+				burst_size / test_params->bonded_member_count);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
 			pkt_burst, burst_size), 0,
 			"tx burst return unexpected value");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
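
A quick worked instance of the per-member opackets expectation checked above, using the two-member setup the test starts from:

	/*
	 * burst_size = 20 * bonded_member_count = 20 * 2 = 40 packets leave
	 * the bonded port; round-robin spreads them evenly, so each member's
	 * opackets must read burst_size / bonded_member_count = 40 / 2 = 20.
	 */
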
@@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
 		rte_pktmbuf_free(mbufs[i]);
 }
 
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT		(2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE		(64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT		(22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(1)
+#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT		(2)
+#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE		(64)
+#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT		(22)
+#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX	(1)
 
 static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_member_tx_fail(void)
 {
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 
 	int i, first_fail_idx, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0,
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
 	/* Copy references to packets which we expect not to be transmitted */
-	first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			(TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
-			TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+	first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			(TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
+			TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
+			TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
 
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
-				(i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+				(i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
 	}
 
-	/* Set virtual slave to only fail transmission of
-	 * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+	/*
+	 * Set virtual member to only fail transmission of
+	 * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+			(uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		int slave_expected_tx_count;
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		int member_expected_tx_count;
 
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 
-		slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
-				test_params->bonded_slave_count;
+		member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
+				test_params->bonded_member_count;
 
-		if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
-			slave_expected_tx_count = slave_expected_tx_count -
-					TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+		if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
+			member_expected_tx_count = member_expected_tx_count -
+					TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
 
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)slave_expected_tx_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[i],
-				(unsigned int)port_stats.opackets, slave_expected_tx_count);
+				(uint64_t)member_expected_tx_count,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[i],
+				(unsigned int)port_stats.opackets, member_expected_tx_count);
 	}
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
-	free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+	free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
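
Plugging the test's own constants into the fail-index arithmetic above makes the expected-failure bookkeeping easier to follow:

	/*
	 * burst 64, 22 dropped, 2 members, failing member index 1:
	 *   first_fail_idx = (64 - 22 * 2) + 1 = 21
	 * expected_tx_fail_pkts[] therefore picks pkt_burst[21], [23], ...,
	 * [63], i.e. the last 22 packets round-robined onto member 1, and the
	 * burst call reports 64 - 22 = 42 packets transmitted; member 0 keeps
	 * 64 / 2 = 32 opackets while member 1 ends at 32 - 22 = 10.
	 */
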
 
 static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_member(void)
 {
 	struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 	int i, j, burst_size = 25;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
 			"burst generation failed");
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 
 
-		/* Verify bonded slave devices rx count */
-		/* Verify slave ports tx stats */
-		for (j = 0; j < test_params->bonded_slave_count; j++) {
-			rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+		/* Verify bonded member devices rx count */
+		/* Verify member ports tx stats */
+		for (j = 0; j < test_params->bonded_member_count; j++) {
+			rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 
 			if (i == j) {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, burst_size);
 			} else {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 
-			/* Reset bonded slaves stats */
-			rte_eth_stats_reset(test_params->slave_port_ids[j]);
+			/* Reset bonded members stats */
+			rte_eth_stats_reset(test_params->member_port_ids[j]);
 		}
 		/* reset bonded device stats */
 		rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
 	}
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
 
 static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_members(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+	int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
 	int i, nb_rx;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
 				burst_size[i], "burst generation failed");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0],
 			(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[2],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[2],
 				(unsigned int)port_stats.ipackets, burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3],
 			(unsigned int)port_stats.ipackets, 0);
 
 	/* free mbufs */
@@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
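
The RX totals asserted above follow directly from what each member queue was staged with; spelled out with the test's burst sizes:

	/*
	 * members 0..2 are loaded with 15, 13 and 36 mbufs through
	 * virtual_ethdev_add_mbufs_to_rx_queue() and member 3 gets nothing,
	 * so the bonded port's ipackets must be 15 + 13 + 36 = 64 while the
	 * fourth member's ipackets stays at 0.
	 */
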
@@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+			&expected_mac_addr_2),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 				BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-				"Failed to initialize bonded device with slaves");
+				"Failed to initialize bonded device with members");
 
-	/* Verify that all MACs are the same as first slave added to bonded dev */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	/* Verify that all MACs are the same as first member added to bonded dev */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of primary port",
+				test_params->member_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->member_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary"
+				"member port (%d) mac address has changed to that of primary"
 				" port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagate to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params->bonded_port_id);
@@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(
 			memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary"
-				" port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary"
+				" port", test_params->member_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
-				sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
-				" that of new primary port\n", test_params->slave_port_ids[i]);
+				sizeof(read_mac_addr)), "member port (%d) mac address not set to"
+				" that of new primary port\n", test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 	int i, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
 	TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 1,
-				"slave port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not enabled",
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
 				"Port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
 
 static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_member_link_status_change_behaviour(void)
 {
 	struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
-	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 
 	struct rte_eth_stats port_stats;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	/* NULL all pointers in array to simplify cleanup */
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+	/* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
 	 * in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
 
-	/* Set 2 slaves eth_devs link status to down */
+	/* Set 2 members eth_devs link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count,
-			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).\n",
-			slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count,
+			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).\n",
+			member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
 
 	burst_size = 20;
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not sent on members with link status down:
 	 *
 	 * 1. Generate test burst of traffic
 	 * 2. Transmit burst on bonded eth_dev
 	 * 3. Verify stats for bonded eth_dev (opackets = burst_size)
-	 * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
 	TEST_ASSERT_EQUAL(
 			generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+			test_params->member_port_ids[0], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+			test_params->member_port_ids[1], (int)port_stats.opackets, 0);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+			test_params->member_port_ids[2], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+			test_params->member_port_ids[3], (int)port_stats.opackets, 0);
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not received from members with link status down:
 	 *
 	 * 1. Generate test bursts of traffic
 	 * 2. Add bursts on to virtual eth_devs
 	 * 3. Rx burst on bonded eth_dev, expected (burst_ size *
-	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
 	 * 4. Verify stats for bonded eth_dev
-	 * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 5. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
-	for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size);
 	}
 
@@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
 
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
 
 
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
 
 static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_member_link_status_change(void)
 {
 	struct rte_ether_addr *mac_addr =
-		(struct rte_ether_addr *)polling_slave_mac;
-	char slave_name[RTE_ETH_NAME_MAX_LEN];
+		(struct rte_ether_addr *)polling_member_mac;
+	char member_name[RTE_ETH_NAME_MAX_LEN];
 
 	int i;
 
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
-		/* Generate slave name / MAC address */
-		snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
+		/* Generate member name / MAC address */
+		snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
 		mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Create slave devices with no ISR Support */
-		if (polling_test_slaves[i] == -1) {
-			polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+		/* Create member devices with no ISR Support */
+		if (polling_test_members[i] == -1) {
+			polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
 					rte_socket_id(), 0);
-			TEST_ASSERT(polling_test_slaves[i] >= 0,
-					"Failed to create virtual virtual ethdev %s\n", slave_name);
+			TEST_ASSERT(polling_test_members[i] >= 0,
+					"Failed to create virtual ethdev %s\n", member_name);
 
-			/* Configure slave */
-			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
-					"Failed to configure virtual ethdev %s(%d)", slave_name,
-					polling_test_slaves[i]);
+			/* Configure member */
+			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
+					"Failed to configure virtual ethdev %s(%d)", member_name,
+					polling_test_members[i]);
 		}
 
-		/* Add slave to bonded device */
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-				polling_test_slaves[i]),
-				"Failed to add slave %s(%d) to bonded device %d",
-				slave_name, polling_test_slaves[i],
+		/* Add member to bonded device */
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+				polling_test_members[i]),
+				"Failed to add member %s(%d) to bonded device %d",
+				member_name, polling_test_members[i],
 				test_params->bonded_port_id);
 	}
 
@@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	/* link status change callback for first slave link up */
+	/* link status change callback for first member link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+	virtual_ethdev_set_link_status(polling_test_members[0], 1);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
 
 
-	/* no link status change callback for second slave link up */
+	/* no link status change callback for second member link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+	virtual_ethdev_set_link_status(polling_test_members[1], 1);
 
 	TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
 
-	/* link status change callback for both slave links down */
+	/* link status change callback for both member links down */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+	virtual_ethdev_set_link_status(polling_test_members[0], 0);
+	virtual_ethdev_set_link_status(polling_test_members[1], 0);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
 
@@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			&test_params->bonded_port_id);
 
 
-	/* Clean up and remove slaves from bonded device */
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+	/* Clean up and remove members from bonded device */
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
 
 		TEST_ASSERT_SUCCESS(
-				rte_eth_bond_slave_remove(test_params->bonded_port_id,
-						polling_test_slaves[i]),
-				"Failed to remove slave %d from bonded port (%d)",
-				polling_test_slaves[i], test_params->bonded_port_id);
+				rte_eth_bond_member_remove(test_params->bonded_port_id,
+						polling_test_members[i]),
+				"Failed to remove member %d from bonded port (%d)",
+				polling_test_members[i], test_params->bonded_port_id);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 
@@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	initialize_eth_header(test_params->pkt_eth_hdr,
 			(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
-		if (test_params->slave_port_ids[i] == primary_port) {
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
+		if (test_params->member_port_ids[i] == primary_port) {
 			TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Member Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets,
-					burst_size / test_params->bonded_slave_count);
+					burst_size / test_params->bonded_member_count);
 		} else {
 			TEST_ASSERT_EQUAL(port_stats.opackets, 0,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Member Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets, 0);
 		}
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 			pkts_burst, burst_size), 0, "Sending empty burst failed");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
 
 static int
 test_activebackup_rx_burst(void)
@@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
 
 	int i, j, burst_size = 17;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
 				&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
 				"rte_eth_rx_burst failed");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->member_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded member devices rx count */
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)", test_params->slave_port_ids[i],
-							(unsigned int)port_stats.ipackets, burst_size);
+							"Member Port (%d) ipackets value (%u) not as "
+							"expected (%d)",
+							test_params->member_port_ids[i],
+							(unsigned int)port_stats.ipackets,
+							burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)\n", test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as "
+							"expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected "
-						"(%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected "
+						"(%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->member_port_ids[i]);
+		if (primary_port == test_params->member_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, 1,
-					"slave port (%d) promiscuous mode not enabled",
-					test_params->slave_port_ids[i]);
+					"member port (%d) promiscuous mode not enabled",
+					test_params->member_port_ids[i]);
 		} else {
 			TEST_ASSERT_EQUAL(promiscuous_en, 0,
-					"slave port (%d) promiscuous mode enabled",
-					test_params->slave_port_ids[i]);
+					"member port (%d) promiscuous mode enabled",
+					test_params->member_port_ids[i]);
 		}
 
 	}
@@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not disabled\n",
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that the bonded MAC is that of the first member and that the other member
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->member_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_member_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, member_count, primary_port;
 
 	burst_size = 21;
 
@@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 			"generate_test_burst failed");
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 members down and verify active member count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
+	/* Bring primary port down, verify that active member count is 3 and primary
 	 *  has changed */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
 			3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
 			"Primary port not as expected");
 
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary member */
 
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(
 			test_params->bonded_port_id, 0, &pkt_burst[0][0],
 			burst_size), burst_size, "rte_eth_tx_burst failed");
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"generate_test_burst failed");
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-			test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+			test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected",
 			test_params->bonded_port_id);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 /** Balance Mode Tests */
@@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 static int
 test_balance_xmit_policy_configuration(void)
 {
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Invalid port id */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
 
 	/* Set xmit policy on non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
-			test_params->slave_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
+			test_params->member_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
 			"Expected call to failed as invalid port specified.");
 
 
@@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
 			"Expected call to failed as invalid port specified.");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
 
 static int
 test_balance_l2_tx_burst(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
-	int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
+	int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
 
 	uint16_t pktlen;
 	int i;
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
 			"failed to generate packet burst");
 
 	/* Send burst 1 on bonded port */
-	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 				&pkts_burst[i][0], burst_size[i]),
 				burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
 			burst_size[0] + burst_size[1]);
 
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)\n",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			burst_size[1]);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
 			test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
 			0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			burst_size_1), 0, "Expected zero packet");
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, 0, pkts_burst_1,
 			burst_size_1), 0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
 	return balance_l34_tx_burst(0, 0, 0, 0, 1);
 }
 
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT			(2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1			(40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2			(20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT		(25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(0)
+#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT			(2)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1			(40)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2			(20)
+#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT		(25)
+#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX	(0)
 
 static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_member_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
-	struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+	struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
+	struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
 
-	struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+	struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, first_tx_fail_idx, tx_count_1, tx_count_2;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0,
-			TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
 			"Failed to generate test packet burst 1");
 
-	first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+	first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
 
 	/* copy mbuf references for expected transmission failures */
-	for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+	for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
 		expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
 
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
 			"Failed to generate test packet burst 2");
 
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/*
+	 * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+	 * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 
 	/* Transmit burst 1 */
 	tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
 
-	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Transmit burst 2 */
 	tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
-	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
 
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+			(uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			(TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			(TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
-	/* Verify slave ports tx stats */
+	/* Verify member ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[0],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[1],
+				(uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[1],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
 
 static int
 test_balance_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+	int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
 				0, 0), burst_size[i],
 				"failed to generate packet burst");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[0],
 				(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],	(unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3],	(unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->member_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->member_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that the bonded MAC is that of the first member and that the other member
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]),
+			test_params->member_port_ids[1]),
 			"Failed to set bonded port (%d) primary port to (%d)\n",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected\n",
-				test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected\n",
+				test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
 
 static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_member_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+	/* Initialize bonded device with 4 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			"Failed to set balance xmit policy.");
 
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
 
-	/* Set 2 slaves link status to down */
+	/* Set 2 members link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
-	/* Send to sets of packet burst and verify that they are balanced across
-	 *  slaves */
+	/*
+	 * Send two sets of packet bursts and verify that they are balanced across
+	 * members.
+	 */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->member_port_ids[0], (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[2], (int)port_stats.opackets,
+			test_params->member_port_ids[2], (int)port_stats.opackets,
 			burst_size);
 
-	/* verify that all packets get send on primary slave when no other slaves
+	/* verify that all packets get sent on the primary member when no other members
 	 * are available */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->member_port_ids[2], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 1);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 1);
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->member_port_ids[0], (int)port_stats.opackets,
 			burst_size + burst_size);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 1);
+			test_params->member_port_ids[2], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
-	for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"Failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on members with link status down */
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
 			MAX_PKT_BURST);
@@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.ipackets,
 			burst_size * 3);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
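
As a side note, a minimal sketch (not part of the patch) of how the renamed rte_eth_bond_active_members_get() used above can be queried after simulated link changes:

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Hypothetical helper: return how many member ports the bonding device
 * currently considers usable (link up and activated). */
static int
bond_active_member_count(uint16_t bond_port_id)
{
	uint16_t members[RTE_MAX_ETHPORTS];

	return rte_eth_bond_active_members_get(bond_port_id, members,
			RTE_MAX_ETHPORTS);
}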
 
 static int
@@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 2, 1),
 			"Failed to initialise bonded device");
 
@@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)burst_size * test_params->bonded_slave_count,
+			(uint64_t)burst_size * test_params->bonded_member_count,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				"Member Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id,
 				(unsigned int)port_stats.opackets, burst_size);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try to transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
 			test_params->bonded_port_id, 0, pkts_burst, burst_size),  0,
 			"transmitted an unexpected number of packets");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
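
For context on the expected counters in broadcast mode: each accepted packet is replicated to every member, so the bonding port's opackets scale with the member count. A small sketch (illustrative only, assuming the same public stats API used by the test) of that check:

#include <rte_ethdev.h>

/* Hypothetical check: bonding-port opackets in broadcast mode should be
 * the burst size multiplied by the number of members that accepted it. */
static int
bcast_opackets_ok(uint16_t bond_port_id, uint64_t burst, uint16_t member_count)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(bond_port_id, &stats) != 0)
		return 0;

	return stats.opackets == burst * member_count;
}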
 
 
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT		(3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE			(40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT	(15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT	(10)
+#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT		(3)
+#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE			(40)
+#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT	(15)
+#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT	(10)
 
 static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_member_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
-	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+	struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
+	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0,
-			TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
-		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+	for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
 	}
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/*
+	 * Set the virtual members to fail transmission of the last
+	 * TEST_BCAST_MEMBER_TX_FAIL_MAX/MIN_PACKETS_COUNT packets of the burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[0],
+			test_params->member_port_ids[0],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[1],
+			test_params->member_port_ids[1],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[2],
+			test_params->member_port_ids[2],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[0],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->member_port_ids[0],
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[1],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			test_params->member_port_ids[1],
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[2],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->member_port_ids[2],
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 	/* Transmit burst */
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
 	}
 
-	/* Verify slave ports tx stats */
+	/* Verify member ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 
 	/* Verify that all mbufs who transmission failed have a ref value of one */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst[tx_count],
-		TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+		TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
 
 static int
 test_broadcast_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+	int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
 				burst_size[i], "failed to generate packet burst");
 	}
 
-	/* Add rx data to slave 0 */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs allocate for rx testing */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->member_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->member_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
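
A minimal sketch (not part of the patch) of the promiscuous-mode propagation this test asserts, using only the public bonding and ethdev calls already shown above:

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Hypothetical check: enabling promiscuous mode on the bonding port
 * should be reflected on every member port. */
static int
bond_promisc_propagated(uint16_t bond_port_id)
{
	uint16_t members[RTE_MAX_ETHPORTS];
	int i, n;

	if (rte_eth_promiscuous_enable(bond_port_id) != 0)
		return 0;

	n = rte_eth_bond_members_get(bond_port_id, members, RTE_MAX_ETHPORTS);
	for (i = 0; i < n; i++)
		if (rte_eth_promiscuous_get(members[i]) != 1)
			return 0;

	return 1;
}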
 
 static int
@@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that all MACs are the same as first slave added to bonded
+	/* Verify that all MACs are the same as first member added to bonded
 	 * device */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of primary port",
+				test_params->member_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->member_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->member_port_ids[2]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary "
+				"member port (%d) mac address has changed to that of primary "
 				"port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary  port",
-			test_params->slave_port_ids[i]);
+			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary "
+				"port", test_params->member_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->bonded_port_id);
 
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary "
+				"port", test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
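
For reference, a short sketch (illustrative only) of the explicit-MAC path exercised at the end of this test: program a MAC on the bonding port and read it back, with the members inheriting it in this mode.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_eth_bond.h>

/* Hypothetical helper: set an explicit MAC on the bonding port and
 * confirm the port reports it back. */
static int
bond_set_and_check_mac(uint16_t bond_port_id, struct rte_ether_addr *mac)
{
	struct rte_ether_addr read_mac;

	if (rte_eth_bond_mac_address_set(bond_port_id, mac) != 0)
		return -1;
	if (rte_eth_macaddr_get(bond_port_id, &read_mac) != 0)
		return -1;

	return memcmp(mac, &read_mac, sizeof(read_mac)) == 0 ? 0 : -1;
}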
 
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
 static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_member_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
 				1), "Failed to initialise bonded device");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	/* Set 2 slaves link status to down */
+	/* Set 2 members link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++)
-		rte_eth_stats_reset(test_params->slave_port_ids[i]);
+	for (i = 0; i < test_params->bonded_member_count; i++)
+		rte_eth_stats_reset(test_params->member_port_ids[i]);
 
-	/* Verify that pkts are not sent on slaves with link status down */
+	/* Verify that pkts are not sent on members with link status down */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"rte_eth_tx_burst failed\n");
 
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
-	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
 			"(%d) port_stats.opackets (%d) not as expected (%d)\n",
 			test_params->bonded_port_id, (int)port_stats.opackets,
-			burst_size * slave_count);
+			burst_size * member_count);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[1]);
+				test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[2]);
+				test_params->member_port_ids[2]);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 
-	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on members with link status down */
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
 			test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
 			burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4146,21 +4186,21 @@ testsuite_teardown(void)
 	free(test_params->pkt_eth_hdr);
 	test_params->pkt_eth_hdr = NULL;
 
-	/* Clean up and remove slaves from bonded device */
-	remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	remove_members_and_stop_bonded_device();
 }
 
 static void
 free_virtualpmd_tx_queue(void)
 {
-	int i, slave_port, to_free_cnt;
+	int i, member_port, to_free_cnt;
 	struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
 
 	/* Free tx queue of virtual pmd */
-	for (slave_port = 0; slave_port < test_params->bonded_slave_count;
-			slave_port++) {
+	for (member_port = 0; member_port < test_params->bonded_member_count;
+			member_port++) {
 		to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_port],
+				test_params->member_port_ids[member_port],
 				pkts_to_free, MAX_PKT_BURST);
 		for (i = 0; i < to_free_cnt; i++)
 			rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
 	uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
 	uint16_t pktlen;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
 			(BONDING_MODE_TLB, 1, 3, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.\n");
@@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		} else {
 			initialize_eth_header(test_params->pkt_eth_hdr,
-					(struct rte_ether_addr *)test_params->default_slave_mac,
+					(struct rte_ether_addr *)test_params->default_member_mac,
 					(struct rte_ether_addr *)dst_mac_0,
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		}
@@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
 			burst_size);
 
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
 		sum_ports_opackets += port_stats[i].opackets;
 	}
 
 	TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
-			"Total packets sent by slaves is not equal to packets sent by bond interface");
+			"Total packets sent by members is not equal to packets sent by bond interface");
 
-	/* checking if distribution of packets is balanced over slaves */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* checking if distribution of packets is balanced over members */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT(port_stats[i].obytes > 0 &&
 				port_stats[i].obytes < all_bond_obytes,
-						"Packets are not balanced over slaves");
+						"Packets are not balanced over members");
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try to transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
 			burst_size);
 	TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
 
-	/* Clean ugit checkout masterp and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
 
 static int
 test_tlb_rx_burst(void)
@@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
 
 	uint16_t i, j, nb_rx, burst_size = 17;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+			TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
 			"Failed to initialize bonded device");
 
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
 
 		TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->member_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded member devices rx count */
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-						"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-						test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+						test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS( initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0, 4, 1),
 			"Failed to initialize bonded device");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 			"Port (%d) promiscuous mode not enabled\n",
 			test_params->bonded_port_id);
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->member_port_ids[i]);
+		if (primary_port == test_params->member_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 					"Port (%d) promiscuous mode not enabled\n",
 					test_params->bonded_port_id);
@@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not disabled\n",
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0, 2, 1),
 			"Failed to initialize bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
-	 * MAC hasn't been changed */
+	/*
+	 * Verify that the bonded MAC is that of the first member and that the
+	 * other member MAC hasn't been changed.
+	 */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
 			test_params->bonded_port_id);
@@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->member_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 
 	/* Set explicit MAC address */
@@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_member_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, member_count, primary_port;
 
 	burst_size = 21;
 
@@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
 
 
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count. */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).\n",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, (int)4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).\n",
+			member_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 members down and verify active member count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
-	 *  has changed */
+	/*
+	 * Bring primary port down, verify that active member count is 3 and primary
+	 *  has changed.
+	 */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
 			"Primary port not as expected");
 	rte_delay_us(500000);
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary member */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
 		rte_delay_us(11000);
 	}
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
 		if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
 				burst_size)
 			return -1;
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-				test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+				test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ALB_SLAVE_COUNT	2
+#define TEST_ALB_MEMBER_COUNT	2
 
 static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
 static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
 	struct rte_ether_hdr *eth_pkt;
 	struct rte_arp_hdr *arp_pkt;
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int member_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *member_mac1, *member_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
-			slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count;
+			member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
 			RTE_ARP_OP_REPLY);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
 
-	slave_mac1 =
-			rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 =
-			rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	member_mac1 =
+			rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+	member_mac2 =
+			rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
 
 	/*
 	 * Checking if packets are properly distributed on bonding ports. Packets
 	 * 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (member_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(member_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(member_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+	int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *member_mac1, *member_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	/*
@@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
 
-	slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+	member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
 
 	/*
-	 * Checking if update ARP packets were properly send on slave ports.
+	 * Checking if update ARP packets were properly sent on member ports.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+				test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
 		nb_pkts_sum += nb_pkts;
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (member_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(member_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(member_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int member_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
 	arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
 	/*
 	 * Checking if VLAN headers in generated ARP Update packet are correct.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
 	retval = 0;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	burst_size = 32;
 
@@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite  = {
 	.unit_test_cases = {
 		TEST_CASE(test_create_bonded_device),
 		TEST_CASE(test_create_bonded_device_with_invalid_params),
-		TEST_CASE(test_add_slave_to_bonded_device),
-		TEST_CASE(test_add_slave_to_invalid_bonded_device),
-		TEST_CASE(test_remove_slave_from_bonded_device),
-		TEST_CASE(test_remove_slave_from_invalid_bonded_device),
-		TEST_CASE(test_get_slaves_from_bonded_device),
-		TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
-		TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+		TEST_CASE(test_add_member_to_bonded_device),
+		TEST_CASE(test_add_member_to_invalid_bonded_device),
+		TEST_CASE(test_remove_member_from_bonded_device),
+		TEST_CASE(test_remove_member_from_invalid_bonded_device),
+		TEST_CASE(test_get_members_from_bonded_device),
+		TEST_CASE(test_add_already_bonded_member_to_bonded_device),
+		TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
 		TEST_CASE(test_start_bonded_device),
 		TEST_CASE(test_stop_bonded_device),
 		TEST_CASE(test_set_bonding_mode),
-		TEST_CASE(test_set_primary_slave),
+		TEST_CASE(test_set_primary_member),
 		TEST_CASE(test_set_explicit_bonded_mac),
 		TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
 		TEST_CASE(test_status_interrupt),
-		TEST_CASE(test_adding_slave_after_bonded_device_started),
+		TEST_CASE(test_adding_member_after_bonded_device_started),
 		TEST_CASE(test_roundrobin_tx_burst),
-		TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
-		TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
-		TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+		TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
+		TEST_CASE(test_roundrobin_rx_burst_on_single_member),
+		TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
 		TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
 		TEST_CASE(test_roundrobin_verify_mac_assignment),
-		TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
-		TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+		TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
+		TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
 		TEST_CASE(test_activebackup_tx_burst),
 		TEST_CASE(test_activebackup_rx_burst),
 		TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
 		TEST_CASE(test_activebackup_verify_mac_assignment),
-		TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+		TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
 		TEST_CASE(test_balance_xmit_policy_configuration),
 		TEST_CASE(test_balance_l2_tx_burst),
 		TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
-		TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+		TEST_CASE(test_balance_tx_burst_member_tx_fail),
 		TEST_CASE(test_balance_rx_burst),
 		TEST_CASE(test_balance_verify_promiscuous_enable_disable),
 		TEST_CASE(test_balance_verify_mac_assignment),
-		TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
 		TEST_CASE(test_tlb_tx_burst),
 		TEST_CASE(test_tlb_rx_burst),
 		TEST_CASE(test_tlb_verify_mac_assignment),
 		TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
-		TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+		TEST_CASE(test_tlb_verify_member_link_status_change_failover),
 		TEST_CASE(test_alb_change_mac_in_reply_sent),
 		TEST_CASE(test_alb_reply_from_client),
 		TEST_CASE(test_alb_receive_vlan_reply),
 		TEST_CASE(test_alb_ipv4_tx),
 		TEST_CASE(test_broadcast_tx_burst),
-		TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+		TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
 		TEST_CASE(test_broadcast_rx_burst),
 		TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
 		TEST_CASE(test_broadcast_verify_mac_assignment),
-		TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
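
For readers skimming the rename, here is a minimal usage sketch of the member
API that the reworked tests above drive. It is illustrative only and not part
of the patch: the function names (rte_eth_bond_member_add,
rte_eth_bond_members_get, rte_eth_bond_active_members_get) are taken from the
hunks above, while the helper name, includes and port ids are assumptions.

	#include <stdint.h>
	#include <rte_ethdev.h>
	#include <rte_eth_bond.h>

	/*
	 * Illustrative helper (not from the patch): attach two member ports
	 * to an already created bonded port and read the member lists back,
	 * the same calls the reworked tests above rely on.
	 */
	static int
	attach_two_members(uint16_t bonded_port_id, uint16_t m0, uint16_t m1)
	{
		uint16_t members[RTE_MAX_ETHPORTS];
		int n;

		if (rte_eth_bond_member_add(bonded_port_id, m0) != 0 ||
		    rte_eth_bond_member_add(bonded_port_id, m1) != 0)
			return -1;

		/* Every member attached to the bonded device. */
		n = rte_eth_bond_members_get(bonded_port_id, members,
				RTE_MAX_ETHPORTS);
		if (n != 2)
			return -1;

		/* Only members that are currently active (link up). */
		n = rte_eth_bond_active_members_get(bonded_port_id, members,
				RTE_MAX_ETHPORTS);
		return n < 0 ? -1 : 0;
	}

In these tests the calls replace the old rte_eth_bond_slave_* variants
one-for-one, so the control flow of each test case is unchanged.
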
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b..2de907e7f3 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
 
 #define RX_RING_SIZE 1024
 #define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
 
 #define BONDED_DEV_NAME         ("net_bonding_m4_bond_dev")
 
-#define SLAVE_DEV_NAME_FMT      ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT      ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT      ("net_virt_%d_tx")
+#define MEMBER_DEV_NAME_FMT      ("net_virt_%d")
+#define MEMBER_RX_QUEUE_FMT      ("net_virt_%d_rx")
+#define MEMBER_TX_QUEUE_FMT      ("net_virt_%d_tx")
 
 #define INVALID_SOCKET_ID       (-1)
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr member_mac_default = {
 	{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
 };
 
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
 	{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
 };
 
-struct slave_conf {
+struct member_conf {
 	struct rte_ring *rx_queue;
 	struct rte_ring *tx_queue;
 	uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
 
 struct link_bonding_unittest_params {
 	uint8_t bonded_port_id;
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct member_conf member_ports[MEMBER_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
-#define TEST_DEFAULT_SLAVE_COUNT     RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT           TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT          TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT       TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_MEMBER_COUNT     RTE_DIM(test_params.member_ports)
+#define TEST_RX_MEMBER_COUT           TEST_DEFAULT_MEMBER_COUNT
+#define TEST_TX_MEMBER_COUNT          TEST_DEFAULT_MEMBER_COUNT
+#define TEST_MARKER_MEMBER_COUT       TEST_DEFAULT_MEMBER_COUNT
+#define TEST_EXPIRED_MEMBER_COUNT     TEST_DEFAULT_MEMBER_COUNT
+#define TEST_PROMISC_MEMBER_COUNT     TEST_DEFAULT_MEMBER_COUNT
 
 static struct link_bonding_unittest_params test_params  = {
 	.bonded_port_id = INVALID_PORT_ID,
-	.slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+	.member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
 
 	.mbuf_pool = NULL,
 };
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.member_ports, \
+		RTE_DIM(test_params.member_ports))
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test and satisfy given condition.
  *
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  * _condition condition that need to be checked
  */
 #define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
 	if (!!(_condition))
 
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a member of a bonded
  * device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  * */
-#define FOR_EACH_SLAVE(_i, _slave) \
-	FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_MEMBER(_i, _member) \
+	FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
 
 /*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from members TX queue.
+ * member port
  * buffer for packets
  * size size of buffer
  * return number of packets or negative error number
  */
 static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+	return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
 			size, NULL);
 }
 
 /*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into members RX queue.
+ * member port
  * buffer for packets
  * size number of packets to be injected
  * return number of queued packets or negative error number
  */
 static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+	return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
 			size, NULL);
 }
 
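
As a side note (illustrative only, not part of the patch), the FOR_EACH_MEMBER
iterator defined in the hunk above is typically used with a one-statement loop
body such as the following; the counting helper itself is made up.

	/*
	 * Illustrative only: count the test ports currently attached as
	 * members, using the FOR_EACH_MEMBER iterator defined above. `i`
	 * walks test_params.member_ports and `member` points at the entry.
	 */
	static uint16_t
	count_bonded_members(void)
	{
		struct member_conf *member;
		uint16_t i, bonded = 0;

		FOR_EACH_MEMBER(i, member)
			bonded++;

		return bonded;
	}
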
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
 }
 
 static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_member(struct member_conf *member, uint8_t start)
 {
 	struct rte_ether_addr addr, addr_check;
 	int retval;
 
 	/* Some sanity check */
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
-	RTE_VERIFY(slave->bonded == 0);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(test_params.member_ports <= member &&
+		member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
+	RTE_VERIFY(member->bonded == 0);
+	RTE_VERIFY(member->port_id != INVALID_PORT_ID);
 
-	rte_ether_addr_copy(&slave_mac_default, &addr);
-	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+	rte_ether_addr_copy(&member_mac_default, &addr);
+	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
 
-	rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+	rte_eth_dev_mac_addr_remove(member->port_id, &addr);
 
-	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
-		"Failed to set slave MAC address");
+	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
+		"Failed to set member MAC address");
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
-		slave->port_id),
-			"Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
-			(uint8_t)(slave - test_params.slave_ports), slave->port_id,
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
+		member->port_id),
+			"Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
+			(uint8_t)(member - test_params.member_ports), member->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 1;
+	member->bonded = 1;
 	if (start) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
-			"Failed to start slave %u", slave->port_id);
+		TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
+			"Failed to start member %u", member->port_id);
 	}
 
-	retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
-	TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+	retval = rte_eth_macaddr_get(member->port_id, &addr_check);
+	TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
 			    strerror(-retval));
 	TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
-			"Slave MAC address is not as expected");
+			"Member MAC address is not as expected");
 
-	RTE_VERIFY(slave->lacp_parnter_state == 0);
+	RTE_VERIFY(member->lacp_parnter_state == 0);
 	return 0;
 }
 
 static int
-remove_slave(struct slave_conf *slave)
+remove_member(struct member_conf *member)
 {
-	ptrdiff_t slave_idx = slave - test_params.slave_ports;
+	ptrdiff_t member_idx = member - test_params.member_ports;
 
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+	RTE_VERIFY(test_params.member_ports <= member &&
+		member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
 
-	RTE_VERIFY(slave->bonded == 1);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(member->bonded == 1);
+	RTE_VERIFY(member->port_id != INVALID_PORT_ID);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+		"Member %u tx queue not empty while removing from bonding.",
+		member->port_id);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+		"Member %u tx queue not empty while removing from bonding.",
+		member->port_id);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
-			slave->port_id), 0,
-			"Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
-			(uint8_t)slave_idx, slave->port_id,
+	TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
+			member->port_id), 0,
+			"Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
+			(uint8_t)member_idx, member->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 0;
-	slave->lacp_parnter_state = 0;
+	member->bonded = 0;
+	member->lacp_parnter_state = 0;
 	return 0;
 }
 
 static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
 	slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
 	RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
 
-	lacpdu_rx_count[slave_id]++;
+	lacpdu_rx_count[member_id]++;
 	rte_pktmbuf_free(lacp_pkt);
 }
 
 static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
 {
 	uint8_t i;
 	int ret;
 
 	RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
 
-	for (i = 0; i < slave_count; i++) {
-		TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+	for (i = 0; i < member_count; i++) {
+		TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
 			"Failed to add port %u to bonded device.\n",
-			test_params.slave_ports[i].port_id);
+			test_params.member_ports[i].port_id);
 	}
 
 	/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	int retval;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	uint16_t i;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params.bonded_port_id);
 
-	FOR_EACH_SLAVE(i, slave)
-		remove_slave(slave);
+	FOR_EACH_MEMBER(i, member)
+		remove_member(member);
 
-	retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
-		RTE_DIM(slaves));
+	retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
+		RTE_DIM(members));
 
 	TEST_ASSERT_EQUAL(retval, 0,
-		"Expected bonded device %u have 0 slaves but returned %d.",
+		"Expected bonded device %u have 0 members but returned %d.",
 			test_params.bonded_port_id, retval);
 
-	FOR_EACH_PORT(i, slave) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+	FOR_EACH_PORT(i, member) {
+		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
 				"Failed to stop bonded port %u",
-				slave->port_id);
+				member->port_id);
 
-		TEST_ASSERT(slave->bonded == 0,
-			"Port id=%u is still marked as enslaved.", slave->port_id);
+		TEST_ASSERT(member->bonded == 0,
+			"Port id=%u is still marked as a member.", member->port_id);
 	}
 
 	return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
 {
 	int retval, nb_mbuf_per_pool;
 	char name[RTE_ETH_NAME_MAX_LEN];
-	struct slave_conf *port;
+	struct member_conf *port;
 	const uint8_t socket_id = rte_socket_id();
 	uint16_t i;
 
@@ -400,10 +400,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(i, port) {
-		port = &test_params.slave_ports[i];
+		port = &test_params.member_ports[i];
 
 		if (port->rx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
 		}
 
 		if (port->tx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
 		}
 
 		if (port->port_id == INVALID_PORT_ID) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
 			TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
 			retval = rte_eth_from_rings(name, &port->rx_queue, 1,
 					&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
  * frame but not LACP
  */
 static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 	/* Change source address to partner address */
 	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
 	slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		member->port_id;
 
 	lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
 	/* Save last received state */
-	slave->lacp_parnter_state = lacp->actor.state;
+	member->lacp_parnter_state = lacp->actor.state;
 	/* Change it into LACP replay by matching parameters. */
 	memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
 		sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 }
 
 /*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from the given member, searches for LACP packets and replies to them.
  *
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives burst of packets from member. Looks for LACP packet. Drops
  * all other packets. Prepares response LACP and sends it back.
  *
  * return number of LACP received and replied, -1 on error.
  */
 static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct member_conf *member)
 {
 	int retval;
 	struct rte_mbuf *rx_buf[MAX_PKT_BURST];
 	struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
 	uint16_t lacp_tx_buf_cnt = 0, i;
 
-	retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
-	TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
-			slave->port_id);
+	retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
+	TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
+			member->port_id);
 
 	for (i = 0; i < (uint16_t)retval; i++) {
-		if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+		if (make_lacp_reply(member, rx_buf[i]) == 0) {
 			/* reply with actor's LACP */
 			lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
 		} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
 	if (lacp_tx_buf_cnt == 0)
 		return 0;
 
-	retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+	retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
 	if (retval <= lacp_tx_buf_cnt) {
 		/* retval might be negative */
 		for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
 	}
 
 	TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
-		"Failed to equeue lacp packets into slave %u tx queue.",
-		slave->port_id);
+		"Failed to enqueue lacp packets into member %u tx queue.",
+		member->port_id);
 
 	return lacp_tx_buf_cnt;
 }
 
 /*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if the given member tx queue contains packets that make the
+ * mode 4 handshake complete. It will drain the member queue.
  * return 0 if handshake not completed, 1 if handshake was complete,
  */
 static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct member_conf *member)
 {
 	const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
 			STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
 
-	return slave->lacp_parnter_state == expected_state;
+	return member->lacp_parnter_state == expected_state;
 }
 
 static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
 static int
 bond_handshake(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	struct rte_mbuf *buf[MAX_PKT_BURST];
 	uint16_t nb_pkts;
-	uint8_t all_slaves_done, i, j;
-	uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+	uint8_t all_members_done, i, j;
+	uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
 	const unsigned delay = bond_get_update_timeout_ms();
 
 	/* Exchange LACP frames */
-	all_slaves_done = 0;
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	all_members_done = 0;
+	for (i = 0; i < 30 && all_members_done == 0; ++i) {
 		rte_delay_ms(delay);
 
-		all_slaves_done = 1;
-		FOR_EACH_SLAVE(j, slave) {
-			/* If response already send, skip slave */
+		all_members_done = 1;
+		FOR_EACH_MEMBER(j, member) {
+			/* If response already sent, skip member */
 			if (status[j] != 0)
 				continue;
 
-			if (bond_handshake_reply(slave) < 0) {
-				all_slaves_done = 0;
+			if (bond_handshake_reply(member) < 0) {
+				all_members_done = 0;
 				break;
 			}
 
-			status[j] = bond_handshake_done(slave);
+			status[j] = bond_handshake_done(member);
 			if (status[j] == 0)
-				all_slaves_done = 0;
+				all_members_done = 0;
 		}
 
 		nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
 		TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 	}
 	/* If response didn't send - report failure */
-	TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+	TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
 
 	/* If flags doesn't match - report failure */
-	return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+	return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
 }
 
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_MEMBER_COUT RTE_DIM(test_params.member_ports)
 static int
 test_mode4_lacp(void)
 {
 	int retval;
 
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	/* Test LACP handshake function */
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
 {
 	int retval;
 	/* Test and verify for Stable mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_STABLE,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 
 	/* test and verify for Bandwidth mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	/* test and verify selection for count mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_COUNT,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
 }
 
 static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct member_conf *member,
 			struct rte_ether_addr *src_mac,
 			struct rte_ether_addr *dst_mac, uint16_t count)
 {
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
 	if (retval != (int)count)
 		return retval;
 
-	retval = slave_put_pkts(slave, pkts, count);
+	retval = member_put_pkts(member, pkts, count);
 	if (retval > 0 && retval != count)
 		free_pkts(&pkts[retval], count - retval);
 
 	TEST_ASSERT_EQUAL(retval, count,
-		"Failed to enqueue packets into slave %u RX queue", slave->port_id);
+		"Failed to enqueue packets into member %u RX queue", member->port_id);
 
 	return TEST_SUCCESS;
 }
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
 static int
 test_mode4_rx(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	uint16_t i, j;
 
 	uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
 	struct rte_ether_addr dst_mac;
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -838,7 +838,7 @@ test_mode4_rx(void)
 	dst_mac.addr_bytes[0] += 2;
 
 	/* First try with promiscuous mode enabled.
-	 * Add 2 packets to each slave. First with bonding MAC address, second with
+	 * Add 2 packets to each member. First with bonding MAC address, second with
 	 * different. Check if we received all of them. */
 	retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
 	TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
 			test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_MEMBER(i, member) {
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		/* Expect 2 packets per slave */
+		/* Expect 2 packets per member */
 		expected_pkts_cnt += 2;
 	}
 
@@ -894,16 +894,16 @@ test_mode4_rx(void)
 		test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_MEMBER(i, member) {
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		/* Expect only one packet per slave */
+		/* Expect only one packet per member */
 		expected_pkts_cnt += 1;
 	}
 
@@ -927,19 +927,19 @@ test_mode4_rx(void)
 	TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
 		"Expected %u packets but received only %d", expected_pkts_cnt, retval);
 
-	/* Link down test: simulate link down for first slave. */
+	/* Link down test: simulate link down for first member. */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t member_down_id = INVALID_PORT_ID;
 
-	/* Find first slave and make link down on it*/
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	/* Find first member and make link down on it*/
+	FOR_EACH_MEMBER(i, member) {
+		rte_eth_dev_set_link_down(member->port_id);
+		member_down_id = member->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(member_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding */
 	for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
 
 	TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
 
-	/* Put packet to each slave */
-	FOR_EACH_SLAVE(i, slave) {
+	/* Put packet to each member */
+	FOR_EACH_MEMBER(i, member) {
 		void *pkt = NULL;
 
-		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
-		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
 		retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
 		if (retval > 0)
 			free_pkts(pkts, retval);
 
-		while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+		while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
 			rte_pktmbuf_free(pkt);
 
-		if (slave_down_id == slave->port_id)
+		if (member_down_id == member->port_id)
 			TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
 		else
 			TEST_ASSERT_NOT_EQUAL(retval, 0,
-				"Expected to receive some packets on slave %u.",
-				slave->port_id);
-		rte_eth_dev_start(slave->port_id);
+				"Expected to receive some packets on member %u.",
+				member->port_id);
+		rte_eth_dev_start(member->port_id);
 
 		for (j = 0; j < 5; j++) {
-			TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+			TEST_ASSERT(bond_handshake_reply(member) >= 0,
 				"Handshake after link up");
 
-			if (bond_handshake_done(slave) == 1)
+			if (bond_handshake_done(member) == 1)
 				break;
 		}
 
-		TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+		TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
 	}
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 	return TEST_SUCCESS;
 }
 
 static int
 test_mode4_tx_burst(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	uint16_t i, j;
 
 	uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
 		{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets were transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every member should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(member, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 		TEST_ASSERT_EQUAL(slow_cnt, 0,
-			"slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+			"member %u unexpectedly transmitted %d SLOW packets", member->port_id,
 			slow_cnt);
 
 		TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-			"slave %u did not transmitted any packets", slave->port_id);
+			"member %u did not transmit any packets", member->port_id);
 
 		pkts_cnt += normal_cnt;
 	}
@@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
 	TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
-	/* Link down test:
-	 * simulate link down for first slave. */
+	/*
+	 * Link down test:
+	 * simulate link down for first member.
+	 */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t member_down_id = INVALID_PORT_ID;
 
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	FOR_EACH_MEMBER(i, member) {
+		rte_eth_dev_set_link_down(member->port_id);
+		member_down_id = member->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(member_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding. */
 	for (i = 0; i < 3; i++) {
@@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets was transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every member should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(member, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 
-		if (slave_down_id == slave->port_id) {
+		if (member_down_id == member->port_id) {
 			TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
-				"slave %u enexpectedly transmitted %u packets",
-				normal_cnt + slow_cnt, slave->port_id);
+				"member %u unexpectedly transmitted %u packets",
+				member->port_id, normal_cnt + slow_cnt);
 		} else {
 			TEST_ASSERT_EQUAL(slow_cnt, 0,
-				"slave %u unexpectedly transmitted %d SLOW packets",
-				slave->port_id, slow_cnt);
+				"member %u unexpectedly transmitted %d SLOW packets",
+				member->port_id, slow_cnt);
 
 			TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-				"slave %u did not transmitted any packets", slave->port_id);
+				"member %u did not transmit any packets", member->port_id);
 		}
 
 		pkts_cnt += normal_cnt;
@@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
 	TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct member_conf *member)
 {
 	struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
 			struct marker_header *);
@@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 	rte_ether_addr_copy(&parnter_mac_default,
 			&marker_hdr->eth_hdr.src_addr);
 	marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		member->port_id;
 
 	marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
@@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 			offsetof(struct marker, reserved_90) -
 			offsetof(struct marker, requester_port);
 	RTE_VERIFY(marker_hdr->marker.info_length == 16);
-	marker_hdr->marker.requester_port = slave->port_id + 1;
+	marker_hdr->marker.requester_port = member->port_id + 1;
 	marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
 	marker_hdr->marker.terminator_length = 0;
 }
@@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 static int
 test_mode4_marker(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	struct rte_mbuf *marker_pkt;
 	struct marker_header *marker_hdr;
@@ -1196,7 +1198,7 @@ test_mode4_marker(void)
 	uint8_t i, j;
 	const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+	retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -1205,17 +1207,17 @@ test_mode4_marker(void)
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
 	delay = bond_get_update_timeout_ms();
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
-		init_marker(marker_pkt, slave);
+		init_marker(marker_pkt, member);
 
-		retval = slave_put_pkts(slave, &marker_pkt, 1);
+		retval = member_put_pkts(member, &marker_pkt, 1);
 		if (retval != 1)
 			rte_pktmbuf_free(marker_pkt);
 
 		TEST_ASSERT_EQUAL(retval, 1,
-			"Failed to send marker packet to slave %u", slave->port_id);
+			"Failed to send marker packet to member %u", member->port_id);
 
 		for (j = 0; j < 20; ++j) {
 			rte_delay_ms(delay);
@@ -1233,13 +1235,13 @@ test_mode4_marker(void)
 
 			/* Check if LACP packet was send by state machines
 			   First and only packet must be a maker response */
-			retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+			retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
 			if (retval == 0)
 				continue;
 			if (retval > 1)
 				free_pkts(pkts, retval);
 
-			TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+			TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
 			nb_pkts = retval;
 
 			marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1265,7 @@ test_mode4_marker(void)
 		TEST_ASSERT(j < 20, "Marker response not found");
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval,	"Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1272,7 +1274,7 @@ test_mode4_marker(void)
 static int
 test_mode4_expired(void)
 {
-	struct slave_conf *slave, *exp_slave = NULL;
+	struct member_conf *member, *exp_member = NULL;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	int retval;
 	uint32_t old_delay;
@@ -1282,7 +1284,7 @@ test_mode4_expired(void)
 
 	struct rte_eth_bond_8023ad_conf conf;
 
-	retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
 						      0);
 	/* Set custom timeouts to make test last shorter. */
 	rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1300,8 @@ test_mode4_expired(void)
 
 	/* Wait for new settings to be applied. */
 	for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
-		FOR_EACH_SLAVE(j, slave)
-			bond_handshake_reply(slave);
+		FOR_EACH_MEMBER(j, member)
+			bond_handshake_reply(member);
 
 		rte_delay_ms(conf.update_timeout_ms);
 	}
@@ -1307,13 +1309,13 @@ test_mode4_expired(void)
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	/* Find first slave */
-	FOR_EACH_SLAVE(i, slave) {
-		exp_slave = slave;
+	/* Find first member */
+	FOR_EACH_MEMBER(i, member) {
+		exp_member = member;
 		break;
 	}
 
-	RTE_VERIFY(exp_slave != NULL);
+	RTE_VERIFY(exp_member != NULL);
 
 	/* When one of partners do not send or respond to LACP frame in
 	 * conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1327,16 @@ test_mode4_expired(void)
 		TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
 			retval);
 
-		FOR_EACH_SLAVE(i, slave) {
-			retval = bond_handshake_reply(slave);
+		FOR_EACH_MEMBER(i, member) {
+			retval = bond_handshake_reply(member);
 			TEST_ASSERT(retval >= 0, "Handshake failed");
 
-			/* Remove replay for slave that suppose to be expired. */
-			if (slave == exp_slave) {
-				while (rte_ring_count(slave->rx_queue) > 0) {
+			/* Remove reply for member that is supposed to be expired. */
+			if (member == exp_member) {
+				while (rte_ring_count(member->rx_queue) > 0) {
 					void *pkt = NULL;
 
-					rte_ring_dequeue(slave->rx_queue, &pkt);
+					rte_ring_dequeue(member->rx_queue, &pkt);
 					rte_pktmbuf_free(pkt);
 				}
 			}
@@ -1348,17 +1350,17 @@ test_mode4_expired(void)
 			retval);
 	}
 
-	/* After test only expected slave should be in EXPIRED state */
-	FOR_EACH_SLAVE(i, slave) {
-		if (slave == exp_slave)
-			TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
-				"Slave %u should be in expired.", slave->port_id);
+	/* After test only expected member should be in EXPIRED state */
+	FOR_EACH_MEMBER(i, member) {
+		if (member == exp_member)
+			TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
+				"Member %u should be in expired.", member->port_id);
 		else
-			TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
-				"Slave %u should be operational.", slave->port_id);
+			TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
+				"Member %u should be operational.", member->port_id);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
 	 *   . try to transmit lacpdu (should fail)
 	 *   . try to set collecting and distributing flags (should fail)
 	 * reconfigure w/external sm
-	 *   . transmit one lacpdu on each slave using new api
-	 *   . make sure each slave receives one lacpdu using the callback api
-	 *   . transmit one data pdu on each slave (should fail)
+	 *   . transmit one lacpdu on each member using new api
+	 *   . make sure each member receives one lacpdu using the callback api
+	 *   . transmit one data pdu on each member (should fail)
 	 *   . enable distribution and collection, send one data pdu each again
 	 */
 
 	int retval;
-	struct slave_conf *slave = NULL;
+	struct member_conf *member = NULL;
 	uint8_t i;
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < MEMBER_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]),
-				 "Slave should not allow manual LACP xmit");
+						member->port_id, lacp_tx_buf[i]),
+				 "Member should not allow manual LACP xmit");
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
 						test_params.bonded_port_id,
-						slave->port_id, 1),
-				 "Slave should not allow external state controls");
+						member->port_id, 1),
+				 "Member should not allow external state controls");
 	}
 
 	free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1430,13 +1432,13 @@ static int
 test_mode4_ext_lacp(void)
 {
 	int retval;
-	struct slave_conf *slave = NULL;
-	uint8_t all_slaves_done = 0, i;
+	struct member_conf *member = NULL;
+	uint8_t all_members_done = 0, i;
 	uint16_t nb_pkts;
 	const unsigned int delay = bond_get_update_timeout_ms();
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
-	struct rte_mbuf *buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
+	struct rte_mbuf *buf[MEMBER_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < MEMBER_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
 	for (i = 0; i < 30; ++i)
 		rte_delay_ms(delay);
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		retval = rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]);
+						member->port_id, lacp_tx_buf[i]);
 		TEST_ASSERT_SUCCESS(retval,
-				    "Slave should allow manual LACP xmit");
+				    "Member should allow manual LACP xmit");
 	}
 
 	nb_pkts = bond_tx(NULL, 0);
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
 
-	FOR_EACH_SLAVE(i, slave) {
-		nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
-		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+	FOR_EACH_MEMBER(i, member) {
+		nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
+		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
 				  nb_pkts, i);
-		slave_put_pkts(slave, buf, nb_pkts);
+		member_put_pkts(member, buf, nb_pkts);
 	}
 
 	nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 
 	/* wait for the periodic callback to run */
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	for (i = 0; i < 30 && all_members_done == 0; ++i) {
 		uint8_t s, total = 0;
 
 		rte_delay_ms(delay);
-		FOR_EACH_SLAVE(s, slave) {
-			total += lacpdu_rx_count[slave->port_id];
+		FOR_EACH_MEMBER(s, member) {
+			total += lacpdu_rx_count[member->port_id];
 		}
 
-		if (total >= SLAVE_COUNT)
-			all_slaves_done = 1;
+		if (total >= MEMBER_COUNT)
+			all_members_done = 1;
 	}
 
-	FOR_EACH_SLAVE(i, slave) {
-		TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
-				  "Slave port %u should have received 1 lacpdu (count=%u)",
-				  slave->port_id,
-				  lacpdu_rx_count[slave->port_id]);
+	FOR_EACH_MEMBER(i, member) {
+		TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
+				  "Member port %u should have received 1 lacpdu (count=%u)",
+				  member->port_id,
+				  lacpdu_rx_count[member->port_id]);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
 static int
 check_environment(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i, env_state;
-	uint16_t slaves[RTE_DIM(test_params.slave_ports)];
-	int slaves_count;
+	uint16_t members[RTE_DIM(test_params.member_ports)];
+	int members_count;
 
 	env_state = 0;
 	FOR_EACH_PORT(i, port) {
@@ -1540,20 +1542,20 @@ check_environment(void)
 			break;
 	}
 
-	slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
-			slaves, RTE_DIM(slaves));
+	members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
+			members, RTE_DIM(members));
 
-	if (slaves_count != 0)
+	if (members_count != 0)
 		env_state |= 0x10;
 
 	TEST_ASSERT_EQUAL(env_state, 0,
 		"Environment not clean (port %u):%s%s%s%s%s",
 		port->port_id,
-		env_state & 0x01 ? " slave rx queue not clean" : "",
-		env_state & 0x02 ? " slave tx queue not clean" : "",
-		env_state & 0x04 ? " port marked as enslaved" : "",
-		env_state & 0x80 ? " slave state is not reset" : "",
-		env_state & 0x10 ? " slave count not equal 0" : ".");
+		env_state & 0x01 ? " member rx queue not clean" : "",
+		env_state & 0x02 ? " member tx queue not clean" : "",
+		env_state & 0x04 ? " port marked as a member" : "",
+		env_state & 0x80 ? " member state is not reset" : "",
+		env_state & 0x10 ? " member count not equal 0" : ".");
 
 
 	return TEST_SUCCESS;
@@ -1562,7 +1564,7 @@ check_environment(void)
 static int
 test_mode4_executor(int (*test_func)(void))
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	int test_result;
 	uint8_t i;
 	void *pkt;
@@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 
 		FOR_EACH_PORT(i, port) {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0..1f888b4771 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
 
 #define RXTX_RING_SIZE			1024
 #define RXTX_QUEUE_COUNT		4
 
 #define BONDED_DEV_NAME         ("net_bonding_rss")
 
-#define SLAVE_DEV_NAME_FMT      ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
+#define MEMBER_DEV_NAME_FMT      ("net_null%d")
+#define MEMBER_RXTX_QUEUE_FMT      ("rssconf_member%d_q%d")
 
 #define NUM_MBUFS 8191
 #define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-struct slave_conf {
+struct member_conf {
 	uint16_t port_id;
 	struct rte_eth_dev_info dev_info;
 
@@ -54,7 +54,7 @@ struct slave_conf {
 	uint8_t rss_key[40];
 	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
-	uint8_t is_slave;
+	uint8_t is_member;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
 };
 
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
 	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct member_conf member_ports[MEMBER_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
 static struct link_bonding_rssconf_unittest_params test_params  = {
 	.bond_port_id = INVALID_PORT_ID,
-	.slave_ports = {
-		[0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+	.member_ports = {
+		[0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
 	},
 	.mbuf_pool = NULL,
 };
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.member_ports, \
+		RTE_DIM(test_params.member_ports))
 
 static int
 configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
 }
 
 /**
- * Remove all slaves from bonding
+ * Remove all members from bonding
  */
 static int
-remove_slaves(void)
+remove_members(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct member_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+		port = &test_params.member_ports[n];
+		if (port->is_member) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
 					test_params.bond_port_id, port->port_id),
-					"Cannot remove slave %d from bonding", port->port_id);
-			port->is_slave = 0;
+					"Cannot remove member %d from bonding", port->port_id);
+			port->is_member = 0;
 		}
 	}
 
@@ -173,30 +173,30 @@ remove_slaves(void)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+	TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
 			"Failed to stop port %u", test_params.bond_port_id);
 	return TEST_SUCCESS;
 }
 
 /**
- * Add all slaves to bonding
+ * Add all members to bonding
  */
 static int
-bond_slaves(void)
+bond_members(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct member_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (!port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-					port->port_id), "Cannot attach slave %d to the bonding",
+		port = &test_params.member_ports[n];
+		if (!port->is_member) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+					port->port_id), "Cannot attach member %d to the bonding",
 					port->port_id);
-			port->is_slave = 1;
+			port->is_member = 1;
 		}
 	}
 
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
 }
 
 /**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if the member's RETA is synchronized with bonding port. Returns 1 if member
  * port is synced with bonding port.
  */
 static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct member_conf *port)
 {
 	unsigned i;
 
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
 }
 
 /**
- * Fetch slaves RETA
+ * Fetch members RETA
  */
 static int
-slave_reta_fetch(struct slave_conf *port) {
+member_reta_fetch(struct member_conf *port) {
 	unsigned j;
 
 	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
 }
 
 /**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add a member to check if the member configuration is synced with
+ * the bonding port's values after adding a new member.
  */
 static int
-slave_remove_and_add(void)
+member_remove_and_add(void)
 {
-	struct slave_conf *port = &(test_params.slave_ports[0]);
+	struct member_conf *port = &(test_params.member_ports[0]);
 
-	/* 1. Remove first slave from bonding */
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
-			port->port_id), "Cannot remove slave #d from bonding");
+	/* 1. Remove first member from bonding */
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
+			port->port_id), "Cannot remove member #d from bonding");
 
-	/* 2. Change removed (ex-)slave and bonding configuration to different
+	/* 2. Change removed (ex-)member and bonding configuration to different
 	 *    values
 	 */
 	reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
 	bond_reta_fetch();
 
 	reta_set(port->port_id, 2, port->dev_info.reta_size);
-	slave_reta_fetch(port);
+	member_reta_fetch(port);
 
 	TEST_ASSERT(reta_check_synced(port) == 0,
-			"Removed slave didn't should be synchronized with bonding port");
+			"Removed member should not be synchronized with bonding port");
 
-	/* 3. Add (ex-)slave and check if configuration changed*/
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-			port->port_id), "Cannot add slave");
+	/* 3. Add (ex-)member and check if configuration changed*/
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+			port->port_id), "Cannot add member");
 
 	bond_reta_fetch();
-	slave_reta_fetch(port);
+	member_reta_fetch(port);
 
 	return reta_check_synced(port);
 }
 
 /**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over members.
  */
 static int
 test_propagate(void)
 {
 	unsigned i;
 	uint8_t n;
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t bond_rss_key[40];
 	struct rte_eth_rss_conf bond_rss_conf;
 
@@ -349,18 +349,18 @@ test_propagate(void)
 
 			retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
 					&bond_rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
 
 			FOR_EACH_PORT(n, port) {
-				port = &test_params.slave_ports[n];
+				port = &test_params.member_ports[n];
 
 				retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						&port->rss_conf);
 				TEST_ASSERT_SUCCESS(retval,
-						"Cannot take slaves RSS configuration");
+						"Cannot take members RSS configuration");
 
 				TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
-						"Hash function not propagated for slave %d",
+						"Hash function not propagated for member %d",
 						port->port_id);
 			}
 
@@ -376,11 +376,11 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			memset(port->rss_conf.rss_key, 0, 40);
 			retval = rte_eth_dev_rss_hash_update(port->port_id,
 					&port->rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
 		}
 
 		memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
 		TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 
 			retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 					&(port->rss_conf));
 
 			TEST_ASSERT_SUCCESS(retval,
-					"Cannot take slaves RSS configuration");
+					"Cannot take members RSS configuration");
 
 			/* compare keys */
 			retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
 					sizeof(bond_rss_key));
-			TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+			TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
 					port->port_id);
 		}
 	}
@@ -416,10 +416,10 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					port->dev_info.reta_size);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
 		}
 
 		TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
 		bond_reta_fetch();
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 
-			slave_reta_fetch(port);
+			member_reta_fetch(port);
 			TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
 		}
 	}
@@ -459,29 +459,29 @@ test_rss(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
 
-	TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+	TEST_ASSERT(member_remove_and_add() == 1, "remove and add members failed.");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
 
 
 /**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and members.
  */
 static int
 test_rss_config_lazy(void)
 {
 	struct rte_eth_rss_conf bond_rss_conf = {0};
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t rss_key[40];
 	uint64_t rss_hf;
 	int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
 		TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
 	}
 
-	/* Set all keys to zero for all slaves */
+	/* Set all keys to zero for all members */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.member_ports[n];
 		retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						       &port->rss_conf);
-		TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+		TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
 		memset(port->rss_key, 0, sizeof(port->rss_key));
 		port->rss_conf.rss_key = port->rss_key;
 		port->rss_conf.rss_key_len = sizeof(port->rss_key);
 		retval = rte_eth_dev_rss_hash_update(port->port_id,
 						     &port->rss_conf);
-		TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+		TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
 	}
 
 	/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
 	/*  Test RETA propagation */
 	for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					  port->dev_info.reta_size);
-			TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+			TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
 		}
 
 		retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
@@ -579,13 +579,13 @@ test_setup(void)
 	int retval;
 	int port_id;
 	char name[256];
-	struct slave_conf *port;
+	struct member_conf *port;
 	struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
 
 	if (test_params.mbuf_pool == NULL) {
 
 		test_params.mbuf_pool = rte_pktmbuf_pool_create(
-			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+			"RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
 			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
 
 		TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.member_ports[n];
 
 		port_id = rte_eth_dev_count_avail();
-		snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+		snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
 
 		retval = rte_vdev_init(name, "size=64,copy=0");
 		TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 	}
 
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214e..c06d1bc43c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
 ----------
 
 A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMDs are added as members to the bonded device.
+The VF is set as the primary member of the bonded device.
 
 A bridge must be set up on the Host connecting the tap device, which is the
 backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
 
    testpmd> create bonded device 1 0
    Created new bonded device net_bond_testpmd_0 on (port 2).
-   testpmd> add bonding slave 0 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding member 0 2
+   testpmd> add bonding member 1 2
    testpmd> show bonding config 2
 
 The syntax of the ``testpmd`` command is:
 
-set bonding primary (slave id) (port id)
+set bonding primary (member id) (port id)
 
 Set primary to P1 before starting bonding port.
 
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
 
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
 
 Use P2 only for forwarding.
 
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
    testpmd> start
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
 
 .. code-block:: console
 
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
 
    testpmd> clear port stats all
    testpmd> set bonding primary 0 2
-   testpmd> remove bonding slave 1 2
+   testpmd> remove bonding member 1 2
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
 
 .. code-block:: console
 
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
 
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
 
 .. code-block:: console
 
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
    testpmd> show port stats all.
    testpmd> show config fwd
    testpmd> show bonding config 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding member 1 2
    testpmd> set bonding primary 1 2
    testpmd> show bonding config 2
    testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
 
 .. code-block:: console
 
-   testpmd> remove bonding slave 0 2
+   testpmd> remove bonding member 0 2
    testpmd> show bonding config 2
    testpmd> port stop 0
    testpmd> port close 0
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a..43b2622022 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
 
 .. code-block:: console
 
-    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
-    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
 
 Vector Processing
 -----------------
diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
index 7c81b856b7..5a9271facf 100644
--- a/doc/guides/prog_guide/img/bond-mode-1.svg
+++ b/doc/guides/prog_guide/img/bond-mode-1.svg
@@ -53,7 +53,7 @@
      v:langID="1033"
      v:metric="true"
      v:viewMarkup="false"><v:userDefs><v:ud
-         v:nameU="msvSubprocessMaster"
+         v:nameU="msvSubprocessMain"
          v:prompt=""
          v:val="VT4(Rectangle)" /><v:ud
          v:nameU="msvNoAutoConnect"
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e35..58e5ef41da 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
 The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
 ``rte_eth_dev`` ports of the same speed and duplex to provide similar
 capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (member) NICs into a single logical interface between a server
 and a switch. The new bonded PMD will then process these interfaces based on
 the mode of operation specified to provide support for features such as
 redundant links, fault tolerance and/or load balancing.
 
 The librte_net_bond library exports a C API which provides an API for the
 creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its member devices.
 
 .. note::
 
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides load balancing and fault tolerance by transmission of
-    packets in sequential order from the first available slave device through
+    packets in sequential order from the first available member device through
     the last. Packets are bulk dequeued from devices then serviced in a
     round-robin manner. This mode does not guarantee in order reception of
     packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Active Backup (Mode 1)
 
 
-    In this mode only one slave in the bond is active at any time, a different
-    slave becomes active if, and only if, the primary active slave fails,
-    thereby providing fault tolerance to slave failure. The single logical
+    In this mode only one member in the bond is active at any time, a different
+    member becomes active if, and only if, the primary active member fails,
+    thereby providing fault tolerance to member failure. The single logical
     bonded interface's MAC address is externally visible on only one NIC (port)
     to avoid confusing the network switch.
 
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
     This mode provides transmit load balancing (based on the selected
     transmission policy) and fault tolerance. The default policy (layer2) uses
     a simple calculation based on the packet flow source and destination MAC
-    addresses as well as the number of active slaves available to the bonded
-    device to classify the packet to a specific slave to transmit on. Alternate
+    addresses as well as the number of active members available to the bonded
+    device to classify the packet to a specific member to transmit on. Alternate
     transmission policies supported are layer 2+3, this takes the IP source and
-    destination addresses into the calculation of the transmit slave port and
+    destination addresses into the calculation of the transmit member port and
     the final supported policy is layer 3+4, this uses IP source and
     destination addresses as well as the TCP/UDP source and destination port.
 
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Broadcast (Mode 3)
 
 
-    This mode provides fault tolerance by transmission of packets on all slave
+    This mode provides fault tolerance by transmission of packets on all member
     ports.
 
 *   **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
        intervals period of less than 100ms.
 
     #. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
-       where N is the number of slaves. This is a space required for LACP
+       where N is the number of members. This is a space required for LACP
        frames. Additionally LACP packets are included in the statistics, but
        they are not returned to the application.
 
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides an adaptive transmit load balancing. It dynamically
-    changes the transmitting slave, according to the computed load. Statistics
+    changes the transmitting member, according to the computed load. Statistics
     are collected in 100ms intervals and scheduled every 10ms.
 
 
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
 startup time during EAL initialization using the ``--vdev`` option as well as
 programmatically via the C API ``rte_eth_bond_create`` function.
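
For illustration, a minimal sketch of the run-time equivalent of the
``--vdev`` option, assuming the renamed ``member=``/``primary=`` device
arguments and placeholder PCI addresses:

.. code-block:: c

    #include <rte_bus_vdev.h>
    #include <rte_ethdev.h>

    /* Hypothetical helper: programmatic counterpart of the --vdev option.
     * The devargs keys (mode=, member=, primary=) and the PCI addresses
     * below are placeholders.
     */
    static int
    create_bonded_vdev(void)
    {
        const char *args = "mode=1,member=0000:82:00.0,"
                           "member=0000:82:00.1,primary=0000:82:00.0";
        uint16_t port_id;

        if (rte_vdev_init("net_bonding0", args) != 0)
            return -1;

        /* Look up the ethdev port id assigned to the new bonded device. */
        if (rte_eth_dev_get_port_by_name("net_bonding0", &port_id) != 0)
            return -1;

        return port_id;
    }
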
 
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamic addition and removal of member devices using
+the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
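
A minimal sketch of the programmatic path, assuming two already-probed
member ports (ids 0 and 1) and a placeholder device name; configure and
queue setup are omitted:

.. code-block:: c

    #include <rte_eth_bond.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    setup_bond(void)
    {
        int bond_port;

        /* Create an active-backup bonded device on the local socket. */
        bond_port = rte_eth_bond_create("net_bonding0",
                                        BONDING_MODE_ACTIVE_BACKUP,
                                        rte_socket_id());
        if (bond_port < 0)
            return bond_port;

        /* Attach two already-probed member ports. */
        if (rte_eth_bond_member_add(bond_port, 0) != 0 ||
            rte_eth_bond_member_add(bond_port, 1) != 0)
            return -1;

        /* Optional for active-backup mode: pick the preferred member. */
        rte_eth_bond_primary_set(bond_port, 0);

        /* The usual rte_eth_dev_configure()/queue setup and
         * rte_eth_dev_start() still apply to the bonded port. */
        return bond_port;
    }
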
 
-After a slave device is added to a bonded device slave is stopped using
+After a member device is added to a bonded device, the member is stopped using
 ``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
 the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
 device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+member and configured as well.
 Any flow which was configured to the bond device also is configured to the added
-slave.
+member.
 
 Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all members are synchronized with its configuration. This mode is
+intended to provide RSS configuration on members transparent for client
 application implementation.
 
 Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its members. That allows defining the meaning
 of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without pointing at any member inside. It is required to ensure
 consistency and made it more error-proof.
 
 RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded members. RETA size is a GCD of all its RETA's sizes, so
+it can be easily used as a pattern providing expected behavior, even if member
 RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the members and the default key for the device is used.
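
For illustration, an RSS key update goes through the bonding port only;
the 40-byte key below is a placeholder:

.. code-block:: c

    #include <rte_ethdev.h>

    static int
    set_bond_rss_key(uint16_t bond_port_id)
    {
        /* Placeholder key; remaining bytes are zero-initialized. */
        static uint8_t key[40] = { 0x6d, 0x5a };
        struct rte_eth_rss_conf rss_conf = {
            .rss_key = key,
            .rss_key_len = sizeof(key),
            .rss_hf = RTE_ETH_RSS_IP,
        };

        /* Members are updated by the bonding PMD; do not call this on
         * the members directly. */
        return rte_eth_dev_rss_hash_update(bond_port_id, &rss_conf);
    }
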
 
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with RSS configuration, there is flow consistency in the bonded members for the
 next rte flow operations:
 
 Validate:
-	- Validate flow for each slave, failure at least for one slave causes to
+	- Validate the flow for each member; failure for at least one member causes
 	  bond validation failure.
 
 Create:
-	- Create the flow in all slaves.
-	- Save all the slaves created flows objects in bonding internal flow
+	- Create the flow in all members.
+	- Save all the flow objects created on the members in the bonding internal flow
 	  structure.
-	- Failure in flow creation for existed slave rejects the flow.
-	- Failure in flow creation for new slaves in slave adding time rejects
-	  the slave.
+	- Failure in flow creation for an existing member rejects the flow.
+	- Failure in flow creation for a new member at member-add time rejects
+	  the member.
 
 Destroy:
-	- Destroy the flow in all slaves and release the bond internal flow
+	- Destroy the flow in all members and release the bond internal flow
 	  memory.
 
 Flush:
-	- Destroy all the bonding PMD flows in all the slaves.
+	- Destroy all the bonding PMD flows in all the members.
 
 .. note::
 
-    Don't call slaves flush directly, It destroys all the slave flows which
+    Don't call member flush directly, as it destroys all the member flows which
     may include external flows or the bond internal LACP flow.
 
 Query:
-	- Summarize flow counters from all the slaves, relevant only for
+	- Summarize flow counters from all the members, relevant only for
 	  ``RTE_FLOW_ACTION_TYPE_COUNT``.
 
 Isolate:
-	- Call to flow isolate for all slaves.
-	- Failure in flow isolation for existed slave rejects the isolate mode.
-	- Failure in flow isolation for new slaves in slave adding time rejects
-	  the slave.
+	- Call flow isolate for all members.
+	- Failure in flow isolation for an existing member rejects the isolate mode.
+	- Failure in flow isolation for a new member at member-add time rejects
+	  the member.
 
 All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to members).
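
A sketch of the flow handling described above, assuming a placeholder
queue index; the flow is created once on the bonding port and fanned out
to the members by the PMD:

.. code-block:: c

    #include <rte_flow.h>

    static struct rte_flow *
    bond_flow_create(uint16_t bond_port_id, struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Validation and creation are fanned out to each member. */
        return rte_flow_create(bond_port_id, &attr, pattern, actions, error);
    }
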
 
 Link Status Change Interrupts / Polling
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
 Link bonding devices support the registration of a link status change callback,
 using the ``rte_eth_dev_callback_register`` API, this will be called when the
 status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 members, the link status will change to up when one member
+becomes active or change to down when all members become inactive. There is no
+callback notification when a single member changes state and the previous
+conditions are not met. If a user wishes to monitor individual members then they
+must register callbacks with that member directly.
 
 The link bonding library also supports devices which do not implement link
 status change interrupts, this is achieved by polling the devices link status at
 a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a member to
 a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
 whether the device supports interrupts or whether the link status should be
 monitored by polling it.
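
A minimal sketch of both mechanisms, assuming the bonded port id is
already known; the 100 ms interval and the callback body are placeholders:

.. code-block:: c

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    static int
    bond_lsc_cb(uint16_t port_id, enum rte_eth_event_type event,
                void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        printf("link status of bonded port %u changed\n", port_id);
        return 0;
    }

    static int
    watch_bond_link(uint16_t bond_port_id)
    {
        /* Poll members without LSC interrupt support every 100 ms. */
        rte_eth_bond_link_monitoring_set(bond_port_id, 100);

        return rte_eth_dev_callback_register(bond_port_id,
                                             RTE_ETH_EVENT_INTR_LSC,
                                             bond_lsc_cb, NULL);
    }
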
@@ -233,30 +233,30 @@ Requirements / Limitations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as members to the same bonded device. The bonded device
+inherits these attributes from the first active member added to the bonded
+device and then all further members added to the bonded device must support
 these parameters.
 
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one member before the bonding device
 itself can be started.
 
 To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all members be RSS-capable and support at least one
 common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all member devices support the same key size.
 
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how members process packets, once a device is added
 to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the member.
 
 Like all other PMD, all functions exported by a PMD are lock-free functions
 that are assumed not to be invoked in parallel on different logical cores to
 work on the same target object.
 
 It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on member devices after they have been added to a bonded device since
+packets read directly from the member device will no longer be available to the
 bonded device to read.
 
 Configuration
@@ -265,25 +265,25 @@ Configuration
 Link bonding devices are created using the ``rte_eth_bond_create`` API
 which requires a unique device name, the bonding mode,
 and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its member devices,
+its primary member, a user defined MAC address and transmission policy to use if
 the device is in balance XOR mode.
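+
+As an illustrative sketch in code (the device name, mode and error handling are
+placeholders):
+
+.. code-block:: c
+
+    int bonded_port_id = rte_eth_bond_create("net_bonding0",
+                                              BONDING_MODE_ACTIVE_BACKUP,
+                                              rte_socket_id());
+    if (bonded_port_id < 0)
+        printf("failed to create bonded device\n");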
 
-Slave Devices
-^^^^^^^^^^^^^
+Member Devices
+^^^^^^^^^^^^^^
 
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
+of the same speed and duplex. Ethernet devices can be added as a member to a
+maximum of one bonded device. Member devices are reconfigured with the
 configuration of the bonded device on being added to a bonded device.
 
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the member device to its
+original value on removal of a member from it.
 
-Primary Slave
-^^^^^^^^^^^^^
+Primary Member
+^^^^^^^^^^^^^^
 
-The primary slave is used to define the default port to use when a bonded
+The primary member is used to define the default port to use when a bonded
 device is in active backup mode. A different port will only be used if, and
 only if, the current primary port goes down. If the user does not specify a
 primary port it will default to being the first port added to the bonded device.
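+
+An illustrative sketch, assuming member port 1 has already been added to the
+bonded device ``bonded_port_id``:
+
+.. code-block:: c
+
+    /* Use member port 1 as the primary port of the bonded device. */
+    if (rte_eth_bond_primary_set(bonded_port_id, 1) != 0)
+        printf("failed to set primary member\n");
+
+    /* The currently configured primary port can be read back. */
+    int primary_port_id = rte_eth_bond_primary_get(bonded_port_id);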
@@ -292,14 +292,14 @@ MAC Address
 ^^^^^^^^^^^
 
 The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all member devices depending on the
 operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC, while all other members will retain their
+original MAC address. In modes 0, 2, 3 and 4 all member devices are configured with
 the bonded devices MAC address.
 
 If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary member's MAC address.
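+
+An illustrative sketch of overriding the MAC address (the address value and the
+port id are placeholders):
+
+.. code-block:: c
+
+    struct rte_ether_addr addr = {
+        .addr_bytes = { 0x00, 0x1e, 0x67, 0x1d, 0xfd, 0x1d }
+    };
+
+    rte_eth_bond_mac_address_set(bonded_port_id, &addr);
+    /* rte_eth_bond_mac_address_reset() reverts to the primary member MAC. */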
 
 Balance XOR Transmit Policies
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
 *   **Layer 2:**   Ethernet MAC address based balancing is the default
     transmission policy for Balance XOR bonding mode. It uses a simple XOR
     calculation on the source MAC address and destination MAC address of the
+    packet and then calculates the modulus of this value to select the member
+    packet and then calculate the modulus of this value to calculate the member
     device to transmit the packet on.
 
 *   **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
     combination of source/destination MAC addresses and the source/destination
-    IP addresses of the data packet to decide which slave port the packet will
+    IP addresses of the data packet to decide which member port the packet will
     be transmitted on.
 
 *   **Layer 3 + 4:**  IP Address & UDP Port based  balancing uses a combination
     of source/destination IP Address and the source/destination UDP ports of
+    the data packet to decide which member port the packet will be
+    the packet of the data packet to decide which member port the packet will be
     transmitted on.
 
 All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
 which will be used must be setup using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup``.
 
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Member devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
+APIs but at least one member device must be added to the link bonding device
 before it can be started using ``rte_eth_dev_start``.
 
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its members; if all
+member device links are down or if all members are removed from the link
 bonding device then the link status of the bonding device will go down.
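+
+Putting these steps together, a minimal bring-up sketch could look as follows
+(member ports 0 and 1 and the mempool ``mbuf_pool`` are assumed to exist
+already; descriptor counts and the absence of error handling are illustrative
+only):
+
+.. code-block:: c
+
+    struct rte_eth_conf port_conf = { 0 };
+    uint16_t members[] = { 0, 1 };
+    unsigned int i;
+
+    int bonded_port_id = rte_eth_bond_create("net_bonding0",
+                                              BONDING_MODE_BALANCE,
+                                              rte_socket_id());
+
+    rte_eth_dev_configure(bonded_port_id, 1, 1, &port_conf);
+    rte_eth_rx_queue_setup(bonded_port_id, 0, 512, rte_socket_id(),
+                           NULL, mbuf_pool);
+    rte_eth_tx_queue_setup(bonded_port_id, 0, 512, rte_socket_id(), NULL);
+
+    for (i = 0; i < RTE_DIM(members); i++)
+        rte_eth_bond_member_add(bonded_port_id, members[i]);
+
+    rte_eth_dev_start(bonded_port_id);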
 
 It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
     where X can be any combination of numbers and/or letters,
     and the name is no greater than 32 characters long.
 
-*   A least one slave device is provided with for each bonded device definition.
+*   At least one member device is provided for each bonded device definition.
 
 *   The operation mode of the bonded device being created is provided.
 
@@ -404,20 +404,20 @@ The different options are:
 
         mode=2
 
-*   slave: Defines the PMD device which will be added as slave to the bonded
+*   member: Defines the PMD device which will be added as a member to the bonded
     device. This option can be selected multiple times, for each device to be
-    added as a slave. Physical devices should be specified using their PCI
+    added as a member. Physical devices should be specified using their PCI
     address, in the format domain:bus:devid.function
 
 .. code-block:: console
 
-        slave=0000:0a:00.0,slave=0000:0a:00.1
+        member=0000:0a:00.0,member=0000:0a:00.1
 
-*   primary: Optional parameter which defines the primary slave port,
-    is used in active backup mode to select the primary slave for data TX/RX if
+*   primary: Optional parameter which defines the primary member port, which
+    is used in active backup mode to select the primary member for data TX/RX if
     it is available. The primary port also is used to select the MAC address to
-    use when it is not defined by the user. This defaults to the first slave
-    added to the device if it is specified. The primary device must be a slave
+    use when it is not defined by the user. This defaults to the first member
+    added to the device if it is not specified. The primary device must be a member
     of the bonded device.
 
 .. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
         socket_id=0
 
 *   mac: Optional parameter to select a MAC address for link bonding device,
-    this overrides the value of the primary slave device.
+    this overrides the value of the primary member device.
 
 .. code-block:: console
 
@@ -474,29 +474,29 @@ The different options are:
 Examples of Usage
 ^^^^^^^^^^^^^^^^^
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two members specified by their PCI address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two members specified by their PCI address and an overriding MAC address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
 
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
 
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
 
 .. _bonding_testpmd_commands:
 
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
    testpmd> create bonded device 1 0
    created new bonded device (port X)
 
-add bonding slave
-~~~~~~~~~~~~~~~~~
+add bonding member
+~~~~~~~~~~~~~~~~~~
 
 Adds Ethernet device to a Link Bonding device::
 
-   testpmd> add bonding slave (slave id) (port id)
+   testpmd> add bonding member (member id) (port id)
 
 For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
 
-   testpmd> add bonding slave 6 10
+   testpmd> add bonding member 6 10
 
 
-remove bonding slave
-~~~~~~~~~~~~~~~~~~~~
+remove bonding member
+~~~~~~~~~~~~~~~~~~~~~
 
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet member device from a Link Bonding device::
 
-   testpmd> remove bonding slave (slave id) (port id)
+   testpmd> remove bonding member (member id) (port id)
 
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove the Ethernet member device (port 6) from a Link Bonding device (port 10)::
 
-   testpmd> remove bonding slave 6 10
+   testpmd> remove bonding member 6 10
 
 set bonding mode
 ~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
 set bonding primary
 ~~~~~~~~~~~~~~~~~~~
 
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet member device as the primary device on a Link Bonding device::
 
-   testpmd> set bonding primary (slave id) (port id)
+   testpmd> set bonding primary (member id) (port id)
 
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
 
    testpmd> set bonding primary 6 10
 
@@ -590,7 +590,7 @@ set bonding mon_period
 
 Set the link status monitoring polling period in milliseconds for a bonding device.
 
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD member devices which do not support link status interrupts.
 When the mon_period is set to a value greater than 0 then all PMD's which do not support
 link status ISR will be queried every polling interval to check if their link status has changed::
 
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
 set bonding lacp dedicated_queue
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
 when in mode 4 (link-aggregation-802.3ad)::
 
    testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
    testpmd> show bonding config (port id)
 
 For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
 in balance mode with a transmission policy of layer 2+3::
 
    testpmd> show bonding config 9
      - Dev basic:
         Bonding mode: BALANCE(2)
         Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
-        Slaves (3): [1 3 4]
-        Active Slaves (3): [1 3 4]
+        Members (3): [1 3 4]
+        Active Members (3): [1 3 4]
         Primary: [3]
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada0..1fe85839ed 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
 	cmdline_fixed_string_t set;
 	cmdline_fixed_string_t bonding;
 	cmdline_fixed_string_t primary;
-	portid_t slave_id;
+	portid_t member_id;
 	portid_t port_id;
 };
 
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
 	struct cmd_set_bonding_primary_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* Set the primary slave for a bonded device. */
-	if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
-		fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
-			master_port_id);
+	/* Set the primary member for a bonded device. */
+	if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
+		fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
+			main_port_id);
 		return;
 	}
 	init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
 static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
 	TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
 		primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_member =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
-		slave_id, RTE_UINT16);
+		member_id, RTE_UINT16);
 static cmdline_parse_token_num_t cmd_setbonding_primary_port =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
 		port_id, RTE_UINT16);
 
 static cmdline_parse_inst_t cmd_set_bonding_primary = {
 	.f = cmd_set_bonding_primary_parsed,
-	.help_str = "set bonding primary <slave_id> <port_id>: "
-		"Set the primary slave for port_id",
+	.help_str = "set bonding primary <member_id> <port_id>: "
+		"Set the primary member for port_id",
 	.data = NULL,
 	.tokens = {
 		(void *)&cmd_setbonding_primary_set,
 		(void *)&cmd_setbonding_primary_bonding,
 		(void *)&cmd_setbonding_primary_primary,
-		(void *)&cmd_setbonding_primary_slave,
+		(void *)&cmd_setbonding_primary_member,
 		(void *)&cmd_setbonding_primary_port,
 		NULL
 	}
 };
 
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD MEMBER *** */
+struct cmd_add_bonding_member_result {
 	cmdline_fixed_string_t add;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t member;
+	portid_t member_id;
 	portid_t port_id;
 };
 
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_member_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_add_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_add_bonding_member_result *res = parsed_result;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* add the slave for a bonded device. */
-	if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+	/* add the member for a bonded device. */
+	if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to add slave %d to master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to add member %d to main port = %d.\n",
+			member_port_id, main_port_id);
 		return;
 	}
-	ports[master_port_id].update_conf = 1;
+	ports[main_port_id].update_conf = 1;
 	init_port_config();
-	set_port_slave_flag(slave_port_id);
+	set_port_member_flag(member_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_add =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
 		add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_member =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
+		member, "member");
+static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
+		member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_member_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
-	.f = cmd_add_bonding_slave_parsed,
-	.help_str = "add bonding slave <slave_id> <port_id>: "
-		"Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_member = {
+	.f = cmd_add_bonding_member_parsed,
+	.help_str = "add bonding member <member_id> <port_id>: "
+		"Add a member device to a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_addbonding_slave_add,
-		(void *)&cmd_addbonding_slave_bonding,
-		(void *)&cmd_addbonding_slave_slave,
-		(void *)&cmd_addbonding_slave_slaveid,
-		(void *)&cmd_addbonding_slave_port,
+		(void *)&cmd_addbonding_member_add,
+		(void *)&cmd_addbonding_member_bonding,
+		(void *)&cmd_addbonding_member_member,
+		(void *)&cmd_addbonding_member_memberid,
+		(void *)&cmd_addbonding_member_port,
 		NULL
 	}
 };
 
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE MEMBER *** */
+struct cmd_remove_bonding_member_result {
 	cmdline_fixed_string_t remove;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t member;
+	portid_t member_id;
 	portid_t port_id;
 };
 
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_member_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_remove_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_remove_bonding_member_result *res = parsed_result;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* remove the slave from a bonded device. */
-	if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+	/* remove the member from a bonded device. */
+	if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to remove slave %d from master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to remove member %d from main port = %d.\n",
+			member_port_id, main_port_id);
 		return;
 	}
 	init_port_config();
-	clear_port_slave_flag(slave_port_id);
+	clear_port_member_flag(member_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_remove =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
 		remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_member =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
+		member, "member");
+static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
+		member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_member_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
-	.f = cmd_remove_bonding_slave_parsed,
-	.help_str = "remove bonding slave <slave_id> <port_id>: "
-		"Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_member = {
+	.f = cmd_remove_bonding_member_parsed,
+	.help_str = "remove bonding member <member_id> <port_id>: "
+		"Remove a member device from a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_removebonding_slave_remove,
-		(void *)&cmd_removebonding_slave_bonding,
-		(void *)&cmd_removebonding_slave_slave,
-		(void *)&cmd_removebonding_slave_slaveid,
-		(void *)&cmd_removebonding_slave_port,
+		(void *)&cmd_removebonding_member_remove,
+		(void *)&cmd_removebonding_member_bonding,
+		(void *)&cmd_removebonding_member_member,
+		(void *)&cmd_removebonding_member_memberid,
+		(void *)&cmd_removebonding_member_port,
 		NULL
 	}
 };
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
 	},
 	{
 		&cmd_set_bonding_primary,
-		"set bonding primary (slave_id) (port_id)\n"
-		"	Set the primary slave for a bonded device.\n",
+		"set bonding primary (member_id) (port_id)\n"
+		"	Set the primary member for a bonded device.\n",
 	},
 	{
-		&cmd_add_bonding_slave,
-		"add bonding slave (slave_id) (port_id)\n"
-		"	Add a slave device to a bonded device.\n",
+		&cmd_add_bonding_member,
+		"add bonding member (member_id) (port_id)\n"
+		"	Add a member device to a bonded device.\n",
 	},
 	{
-		&cmd_remove_bonding_slave,
-		"remove bonding slave (slave_id) (port_id)\n"
-		"	Remove a slave device from a bonded device.\n",
+		&cmd_remove_bonding_member,
+		"remove bonding member (member_id) (port_id)\n"
+		"	Remove a member device from a bonded device.\n",
 	},
 	{
 		&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1..77892c0601 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
 #include "rte_eth_bond_8023ad.h"
 
 #define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS  100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS        3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS        1
+/** Maximum number of packets to one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_RX_PKTS        3
+/** Maximum number of LACP packets from one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_TX_PKTS        1
 /**
  * Timeouts definitions (5.4.4 in 802.1AX documentation).
  */
@@ -113,7 +113,7 @@ struct port {
 	enum rte_bond_8023ad_selection selected;
 
 	/** Indicates if either allmulti or promisc has been enforced on the
-	 * slave so that we can receive lacp packets
+	 * member so that we can receive lacp packets
 	 */
 #define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
 #define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
 	uint8_t external_sm;
 	struct rte_ether_addr mac_addr;
 
-	struct rte_eth_link slave_link;
-	/***< slave link properties */
+	struct rte_eth_link member_link;
+	/**< member link properties */
 
 	/**
 	 * Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
 /**
  * @internal
  *
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active members on bonded interface.
  *
  * @param dev Bonded interface
  * @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
 /**
  * @internal
  *
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and members.
  *
  * @param dev Bonded interface
  * @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
  *
  * Passes given slow packet to state machines management logic.
  * @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param member_id Member port id.
  * @param slot_pkt Slow packet.
  */
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				 uint16_t slave_id, struct rte_mbuf *pkt);
+				 uint16_t member_id, struct rte_mbuf *pkt);
 
 /**
  * @internal
  *
- * Appends given slave used slave
+ * Activates the given member in 802.1AX mode.
  *
  * @param dev       Bonded interface.
- * @param port_id   Slave port ID to be added
+ * @param port_id   Member port ID to be added
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
 
 /**
  * @internal
  *
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes the given member from 802.1AX mode.
  *
  * @param dev       Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param member_pos Position of member in active_members array
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
 
 /**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its members.
  * @param bond_dev Bonded device
  */
 void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port);
+		uint16_t member_port);
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
 
 int
 bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4..93d03b0a79 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,8 +18,8 @@
 #include "eth_bond_8023ad_private.h"
 #include "rte_eth_bond_alb.h"
 
-#define PMD_BOND_SLAVE_PORT_KVARG			("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG		("primary")
+#define PMD_BOND_MEMBER_PORT_KVARG			("member")
+#define PMD_BOND_PRIMARY_MEMBER_KVARG		("primary")
 #define PMD_BOND_MODE_KVARG					("mode")
 #define PMD_BOND_AGG_MODE_KVARG				("agg_mode")
 #define PMD_BOND_XMIT_POLICY_KVARG			("xmit_policy")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
 /** Port Queue Mapping Structure */
 struct bond_rx_queue {
 	uint16_t queue_id;
-	/**< Next active_slave to poll */
-	uint16_t active_slave;
+	/**< Next active_member to poll */
+	uint16_t active_member;
 	/**< Queue Id */
 	struct bond_dev_private *dev_private;
 	/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
 	/**< Copy of TX configuration structure for queue */
 };
 
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
-	uint16_t slaves[RTE_MAX_ETHPORTS];	/**< Slave port id array */
-	uint16_t slave_count;				/**< Number of slaves */
+/** Bonded member devices structure */
+struct bond_ethdev_member_ports {
+	uint16_t members[RTE_MAX_ETHPORTS];	/**< Member port id array */
+	uint16_t member_count;				/**< Number of members */
 };
 
-struct bond_slave_details {
+struct bond_member_details {
 	uint16_t port_id;
 
 	uint8_t link_status_poll_enabled;
 	uint8_t link_status_wait_to_complete;
 	uint8_t last_link_status;
-	/**< Port Id of slave eth_dev */
+	/**< Port Id of member eth_dev */
 	struct rte_ether_addr persisted_mac_addr;
 
 	uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
 
 struct rte_flow {
 	TAILQ_ENTRY(rte_flow) next;
-	/* Slaves flows */
+	/* Members flows */
 	struct rte_flow *flows[RTE_MAX_ETHPORTS];
 	/* Flow description for synchronization */
 	struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
 };
 
 typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 /** Link Bonding PMD device private configuration Structure */
 struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
 	rte_spinlock_t lock;
 	rte_spinlock_t lsc_lock;
 
-	uint16_t primary_port;			/**< Primary Slave Port */
-	uint16_t current_primary_port;		/**< Primary Slave Port */
+	uint16_t primary_port;			/**< Primary Member Port */
+	uint16_t current_primary_port;		/**< Currently active primary member port */
 	uint16_t user_defined_primary_port;
 	/**< Flag for whether primary port is user defined or not */
 
@@ -137,16 +137,16 @@ struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/
 
-	uint16_t active_slave_count;		/**< Number of active slaves */
-	uint16_t active_slaves[RTE_MAX_ETHPORTS];    /**< Active slave list */
+	uint16_t active_member_count;		/**< Number of active members */
+	uint16_t active_members[RTE_MAX_ETHPORTS];    /**< Active member list */
 
-	uint16_t slave_count;			/**< Number of bonded slaves */
-	struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
-	/**< Array of bonded slaves details */
+	uint16_t member_count;			/**< Number of bonded members */
+	struct bond_member_details members[RTE_MAX_ETHPORTS];
+	/**< Array of bonded members details */
 
 	struct mode8023ad_private mode4;
-	uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
-	/**< TLB active slaves send order */
+	uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
+	/**< TLB active members send order */
 	struct mode_alb_private mode6;
 
 	uint64_t rx_offload_capa;       /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
 
 	struct rte_kvargs *kvlist;
-	uint8_t slave_update_idx;
+	uint8_t member_update_idx;
 
 	bool kvargs_processing_is_done;
 
@@ -191,19 +191,21 @@ struct bond_dev_private {
 extern const struct eth_dev_ops default_dev_ops;
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
 int
 check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/*
+ * Search given member array to find position of given id.
+ * Return member pos or members_count if not found.
+ */
 static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
 
 	uint16_t pos;
-	for (pos = 0; pos < slaves_count; pos++) {
-		if (slave_id == slaves[pos])
+	for (pos = 0; pos < members_count; pos++) {
+		if (member_id == members[pos])
 			break;
 	}
 
@@ -217,13 +219,13 @@ int
 valid_bonded_port_id(uint16_t port_id);
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 int
 mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
 		struct rte_ether_addr *dst_mac_addr);
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
 
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id);
 
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id);
 
 int
 bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev);
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+member_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev);
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+member_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+member_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev);
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id);
+		uint16_t member_port_id);
 
 int
 bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		void *param, void *ret_param);
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_member_mode_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args);
 
 int
@@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
@@ -323,7 +325,7 @@ void
 bond_tlb_enable(struct bond_dev_private *internals);
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_member(struct bond_dev_private *internals);
 
 int
 bond_ethdev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..b90242264d 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
  *
  * RTE Link Bonding Ethernet Device
  * Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * (member) NICs into a single logical interface. The bonded device processes
  * these interfaces based on the mode of operation specified and supported.
  * This implementation supports 4 modes of operation round robin, active backup
  * balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,28 @@ extern "C" {
 #define BONDING_MODE_ROUND_ROBIN		(0)
 /**< Round Robin (Mode 0).
  * In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active members of the bonded device in a round robin fashion.
+ */
 #define BONDING_MODE_ACTIVE_BACKUP		(1)
 /**< Active Backup (Mode 1).
  * In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
- * available if not specified. */
+ * member until such point as the primary member is no longer available and then
+ * transmitted packets will be sent on the next available member. The primary
+ * member can be defined by the user but defaults to the first active member
+ * available if not specified.
+ */
 #define BONDING_MODE_BALANCE			(2)
 /**< Balance (Mode 2).
  * In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * members using one of three available transmit policies - l2, l2+3 or l3+4.
  * See BALANCE_XMIT_POLICY macros definitions for further details on transmit
- * policies. */
+ * policies.
+ */
 #define BONDING_MODE_BROADCAST			(3)
 /**< Broadcast (Mode 3).
  * In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active members of the bonded device.
+ */
 #define BONDING_MODE_8023AD				(4)
 /**< 802.3AD (Mode 4).
  *
@@ -62,22 +66,22 @@ extern "C" {
  * be handled with the expected latency and this may cause the link status to be
  * incorrectly marked as down or failure to correctly negotiate with peers.
  * - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
- *
+ * to rx_burst should be at least 2 times the member count.
  */
 #define BONDING_MODE_TLB	(5)
 /**< Adaptive TLB (Mode 5)
  * This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
- * are collected in 100ms intervals and scheduled every 10ms */
+ * changes the transmitting member according to the computed load. Statistics
+ * are collected in 100ms intervals and scheduled every 10ms.
+ */
 #define BONDING_MODE_ALB	(6)
 /**< Adaptive Load Balancing (Mode 6)
  * This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
  * bonding driver intercepts ARP replies send by local system and overwrites its
  * source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different member interfaces. When local system sends ARP request, it saves IP
  * information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of member MACs assigned and ARP reply send to that peer.
  */
 
 /* Balance Mode Transmit Policies */
@@ -113,28 +117,44 @@ int
 rte_eth_bond_free(const char *name);
 
 /**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a member to the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+	return rte_eth_bond_member_add(bonded_port_id, member_port_id);
+}
 
 /**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a member rte_eth_dev device from the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+	return rte_eth_bond_member_remove(bonded_port_id, member_port_id);
+}
 
 /**
  * Set link bonding mode of bonded device
@@ -160,65 +180,83 @@ int
 rte_eth_bond_mode_get(uint16_t bonded_port_id);
 
 /**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set member rte_eth_dev as primary member of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
 
 /**
- * Get primary slave of bonded device
+ * Get primary member of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
  * @return
- *	Port Id of primary slave on success, -1 on failure
+ *	Port Id of primary member on success, -1 on failure
  */
 int
 rte_eth_bond_primary_get(uint16_t bonded_port_id);
 
 /**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with the list of member port IDs of the bonded device
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param members			Array to be populated with the current members
+ * @param len				Length of members array
  *
  * @return
- *	Number of slaves associated with bonded device on success,
+ *	Number of members associated with bonded device on success,
  *	negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
-			uint16_t len);
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len)
+{
+	return rte_eth_bond_members_get(bonded_port_id, members, len);
+}
 
 /**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with the list of active member port IDs of the bonded
  * device.
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param members			Array to be populated with the current active members
+ * @param len				Length of members array
  *
  * @return
- *	Number of active slaves associated with bonded device on success,
+ *	Number of active members associated with bonded device on success,
  *	negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
-				uint16_t len);
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len)
+{
+	return rte_eth_bond_active_members_get(bonded_port_id, members, len);
+}
 
 /**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its members.
  *
  * @param bonded_port_id	Port ID of bonded device.
  * @param mac_addr			MAC Address to use on bonded device overriding
- *							slaves MAC addresses
+ *							members' MAC addresses
  *
  * @return
  *	0 on success, negative value otherwise
@@ -228,8 +266,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 		struct rte_ether_addr *mac_addr);
 
 /**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary member on bonded device and its
+ * members.
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
@@ -266,7 +304,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
 
 /**
  * Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * member devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  * @param internal_ms		Monitoring interval in milliseconds
@@ -280,7 +318,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
 
 /**
  * Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of member devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2ca..ac9f414e74 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
 #define MODE4_DEBUG(fmt, ...)				\
 	rte_log(RTE_LOG_DEBUG, bond_logtype,		\
 		"%6u [Port %u: %s] " fmt,		\
-		bond_dbg_get_time_diff_ms(), slave_id,	\
+		bond_dbg_get_time_diff_ms(), member_id,	\
 		__func__, ##__VA_ARGS__)
 
 static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
 }
 
 static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	uint8_t warnings;
 
 	do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
 
 	if (warnings & WRN_RX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+			     "Member %u: failed to enqueue LACP packet into RX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will notwork correctly",
-			     slave_id);
+			     member_id);
 	}
 
 	if (warnings & WRN_TX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+			     "Member %u: failed to enqueue LACP packet into TX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will not work correctly",
-			     slave_id);
+			     member_id);
 	}
 
 	if (warnings & WRN_RX_MARKER_TO_FAST)
-		RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Member %u: marker too early - ignoring.",
+			     member_id);
 
 	if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
 		RTE_BOND_LOG(INFO,
-			"Slave %u: ignoring unknown slow protocol frame type",
-			     slave_id);
+			"Member %u: ignoring unknown slow protocol frame type",
+			     member_id);
 	}
 
 	if (warnings & WRN_UNKNOWN_MARKER_TYPE)
-		RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
+			     member_id);
 
 	if (warnings & WRN_NOT_LACP_CAPABLE)
-		MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+		MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
 }
 
 static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
  * @param port			Port on which LACPDU was received.
  */
 static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t member_id,
 		struct lacpdu *lacp)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
 	uint64_t timeout;
 
 	if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
  * @param port			Port to handle state machine.
  */
 static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	/* Calculate if either site is LACP enabled */
 	uint64_t timeout;
 	uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port			Port to handle state machine.
  */
 static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 
 	/* Save current state for later use */
 	const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing started.",
-					internals->port_id, slave_id);
+					"Bond %u: member id %u distributing started.",
+					internals->port_id, member_id);
 			}
 		} else {
 			if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing stopped.",
-					internals->port_id, slave_id);
+					"Bond %u: member id %u distributing stopped.",
+					internals->port_id, member_id);
 			}
 		}
 	}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port
  */
 static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
 
 	struct rte_mbuf *lacp_pkt = NULL;
 	struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 
 	/* Source and destination MAC */
 	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
-	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+	rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
 	hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
 	lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 			return;
 		}
 	} else {
-		uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+		uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, 1);
-		pkts_sent = rte_eth_tx_burst(slave_id,
+		pkts_sent = rte_eth_tx_burst(member_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, pkts_sent);
 		if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
  * @param port_pos			Port to assign.
  */
 static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t member_id)
 {
 	struct port *agg, *port;
-	uint16_t slaves_count, new_agg_id, i, j = 0;
-	uint16_t *slaves;
+	uint16_t members_count, new_agg_id, i, j = 0;
+	uint16_t *members;
 	uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
 	uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
-	uint16_t default_slave = 0;
+	uint16_t default_member = 0;
 	struct rte_eth_link link_info;
 	uint16_t agg_new_idx = 0;
 	int ret;
 
-	slaves = internals->active_slaves;
-	slaves_count = internals->active_slave_count;
-	port = &bond_mode_8023ad_ports[slave_id];
+	members = internals->active_members;
+	members_count = internals->active_member_count;
+	port = &bond_mode_8023ad_ports[member_id];
 
 	/* Search for aggregator suitable for this port */
-	for (i = 0; i < slaves_count; ++i) {
-		agg = &bond_mode_8023ad_ports[slaves[i]];
+	for (i = 0; i < members_count; ++i) {
+		agg = &bond_mode_8023ad_ports[members[i]];
 		/* Skip ports that are not aggregators */
-		if (agg->aggregator_port_id != slaves[i])
+		if (agg->aggregator_port_id != members[i])
 			continue;
 
-		ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+		ret = rte_eth_link_get_nowait(members[i], &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slaves[i], rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				members[i], rte_strerror(-ret));
 			continue;
 		}
 		agg_count[i] += 1;
 		agg_bandwidth[i] += link_info.link_speed;
 
-		/* Actors system ID is not checked since all slave device have the same
+		/* Actors system ID is not checked since all member devices have the same
 		 * ID (MAC address). */
 		if ((agg->actor.key == port->actor.key &&
 			agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
 
 			if (j == 0)
-				default_slave = i;
+				default_member = i;
 			j++;
 		}
 	}
 
 	switch (internals->mode4.agg_selection) {
 	case AGG_COUNT:
-		agg_new_idx = max_index(agg_count, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_count, members_count);
+		new_agg_id = members[agg_new_idx];
 		break;
 	case AGG_BANDWIDTH:
-		agg_new_idx = max_index(agg_bandwidth, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_bandwidth, members_count);
+		new_agg_id = members[agg_new_idx];
 		break;
 	case AGG_STABLE:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_member == members_count)
+			new_agg_id = members[member_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = members[default_member];
 		break;
 	default:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_member == members_count)
+			new_agg_id = members[member_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = members[default_member];
 		break;
 	}
 
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 		MODE4_DEBUG("-> SELECTED: ID=%3u\n"
 			"\t%s aggregator ID=%3u\n",
 			port->aggregator_port_id,
-			port->aggregator_port_id == slave_id ?
+			port->aggregator_port_id == member_id ?
 				"aggregator not found, using default" : "aggregator found",
 			port->aggregator_port_id);
 	}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
 }
 
 static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt) {
 	struct lacpdu_header *lacp;
 	struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
 
 		partner = &lacp->lacpdu.partner;
-		port = &bond_mode_8023ad_ports[slave_id];
+		port = &bond_mode_8023ad_ports[member_id];
 		agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
 
 		if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 			/* This LACP frame is sent to the bonding port
 			 * so pass it to rx_machine.
 			 */
-			rx_machine(internals, slave_id, &lacp->lacpdu);
+			rx_machine(internals, member_id, &lacp->lacpdu);
 		} else {
 			char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
 			char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		}
 		rte_pktmbuf_free(lacp_pkt);
 	} else
-		rx_machine(internals, slave_id, NULL);
+		rx_machine(internals, member_id, NULL);
 }
 
 static void
 bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
-			uint16_t slave_id)
+			uint16_t member_id)
 {
 #define DEDICATED_QUEUE_BURST_SIZE 32
 	struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
-	uint16_t rx_count = rte_eth_rx_burst(slave_id,
+	uint16_t rx_count = rte_eth_rx_burst(member_id,
 				internals->mode4.dedicated_queues.rx_qid,
 				lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
 
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
 		uint16_t i;
 
 		for (i = 0; i < rx_count; i++)
-			bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+			bond_mode_8023ad_handle_slow_pkt(internals, member_id,
 					lacp_pkt[i]);
 	} else {
-		rx_machine_update(internals, slave_id, NULL);
+		rx_machine_update(internals, member_id, NULL);
 	}
 }
 
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	struct port *port;
 	struct rte_eth_link link_info;
-	struct rte_ether_addr slave_addr;
+	struct rte_ether_addr member_addr;
 	struct rte_mbuf *lacp_pkt = NULL;
-	uint16_t slave_id;
+	uint16_t member_id;
 	uint16_t i;
 
 
 	/* Update link status on each port */
-	for (i = 0; i < internals->active_slave_count; i++) {
+	for (i = 0; i < internals->active_member_count; i++) {
 		uint16_t key;
 		int ret;
 
-		slave_id = internals->active_slaves[i];
-		ret = rte_eth_link_get_nowait(slave_id, &link_info);
+		member_id = internals->active_members[i];
+		ret = rte_eth_link_get_nowait(member_id, &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_id, rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				member_id, rte_strerror(-ret));
 		}
 
 		if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			key = 0;
 		}
 
-		rte_eth_macaddr_get(slave_id, &slave_addr);
-		port = &bond_mode_8023ad_ports[slave_id];
+		rte_eth_macaddr_get(member_id, &member_addr);
+		port = &bond_mode_8023ad_ports[member_id];
 
 		key = rte_cpu_to_be_16(key);
 		if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			SM_FLAG_SET(port, NTT);
 		}
 
-		if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
-			rte_ether_addr_copy(&slave_addr, &port->actor.system);
-			if (port->aggregator_port_id == slave_id)
+		if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
+			rte_ether_addr_copy(&member_addr, &port->actor.system);
+			if (port->aggregator_port_id == member_id)
 				SM_FLAG_SET(port, NTT);
 		}
 	}
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		port = &bond_mode_8023ad_ports[member_id];
 
 		if ((port->actor.key &
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			if (retval != 0)
 				lacp_pkt = NULL;
 
-			rx_machine_update(internals, slave_id, lacp_pkt);
+			rx_machine_update(internals, member_id, lacp_pkt);
 		} else {
 			bond_mode_8023ad_dedicated_rxq_process(internals,
-					slave_id);
+					member_id);
 		}
 
-		periodic_machine(internals, slave_id);
-		mux_machine(internals, slave_id);
-		tx_machine(internals, slave_id);
-		selection_logic(internals, slave_id);
+		periodic_machine(internals, member_id);
+		mux_machine(internals, member_id);
+		tx_machine(internals, member_id);
+		selection_logic(internals, member_id);
 
 		SM_FLAG_CLR(port, BEGIN);
-		show_warnings(slave_id);
+		show_warnings(member_id);
 	}
 
 	rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
 }
 
 static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
 {
 	int ret;
 
-	ret = rte_eth_allmulticast_enable(slave_id);
+	ret = rte_eth_allmulticast_enable(member_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable allmulti mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			member_id, rte_strerror(-ret));
 	}
-	if (rte_eth_allmulticast_get(slave_id)) {
+	if (rte_eth_allmulticast_get(member_id)) {
 		RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     member_id);
+		bond_mode_8023ad_ports[member_id].forced_rx_flags =
 				BOND_8023AD_FORCED_ALLMULTI;
 		return 0;
 	}
 
-	ret = rte_eth_promiscuous_enable(slave_id);
+	ret = rte_eth_promiscuous_enable(member_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable promiscuous mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			member_id, rte_strerror(-ret));
 	}
-	if (rte_eth_promiscuous_get(slave_id)) {
+	if (rte_eth_promiscuous_get(member_id)) {
 		RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     member_id);
+		bond_mode_8023ad_ports[member_id].forced_rx_flags =
 				BOND_8023AD_FORCED_PROMISC;
 		return 0;
 	}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
 }
 
 static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
 {
 	int ret;
 
-	switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+	switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
 	case BOND_8023AD_FORCED_ALLMULTI:
-		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
-		ret = rte_eth_allmulticast_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
+		ret = rte_eth_allmulticast_disable(member_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable allmulti mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				member_id, rte_strerror(-ret));
 		break;
 
 	case BOND_8023AD_FORCED_PROMISC:
-		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
-		ret = rte_eth_promiscuous_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
+		ret = rte_eth_promiscuous_disable(member_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable promiscuous mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				member_id, rte_strerror(-ret));
 		break;
 
 	default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
 }
 
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
-				uint16_t slave_id)
+bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
+				uint16_t member_id)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	struct port_params initial = {
 			.system = { { 0 } },
 			.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	struct bond_tx_queue *bd_tx_q;
 	uint16_t q_id;
 
-	/* Given slave mus not be in active list */
-	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
-	internals->active_slave_count, slave_id) == internals->active_slave_count);
+	/* The given member must not be in the active list */
+	RTE_ASSERT(find_member_by_id(internals->active_members,
+	internals->active_member_count, member_id) == internals->active_member_count);
 	RTE_SET_USED(internals); /* used only for assert when enabled */
 
 	memcpy(&port->actor, &initial, sizeof(struct port_params));
 	/* Standard requires that port ID must be greater than 0.
 	 * Add 1 to get corresponding port_number */
-	port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+	port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
 
 	memcpy(&port->partner, &initial, sizeof(struct port_params));
 	memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	port->sm_flags = SM_FLAGS_BEGIN;
 
 	/* use this port as aggregator */
-	port->aggregator_port_id = slave_id;
+	port->aggregator_port_id = member_id;
 
-	if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
-		RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
-			     slave_id);
+	if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
+		RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
+			     member_id);
 	}
 
 	timer_cancel(&port->warning_timer);
@@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	RTE_ASSERT(port->rx_ring == NULL);
 	RTE_ASSERT(port->tx_ring == NULL);
 
-	socket_id = rte_eth_dev_socket_id(slave_id);
+	socket_id = rte_eth_dev_socket_id(member_id);
 	if (socket_id == -1)
 		socket_id = rte_socket_id();
 
 	element_size = sizeof(struct slow_protocol_frame) +
 				RTE_PKTMBUF_HEADROOM;
 
-	/* The size of the mempool should be at least:
-	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
-	total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+	/*
+	 * The size of the mempool should be at least:
+	 * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
+	 */
+	total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
 	for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
 		total_tx_desc += bd_tx_q->nb_tx_desc;
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
 	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
 		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
 			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	/* Any memory allocation failure in initialization is critical because
 	 * resources can't be freed, so reinitialization is impossible. */
 	if (port->mbuf_pool == NULL) {
-		rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-			slave_id, mem_name, rte_strerror(rte_errno));
+		rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+			member_id, mem_name, rte_strerror(rte_errno));
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
 	port->rx_ring = rte_ring_create(mem_name,
-			rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_MEMBER_RX_PKTS), socket_id, 0);
 
 	if (port->rx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+		rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 
 	/* TX ring is at least one pkt longer to make room for marker packet. */
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
 	port->tx_ring = rte_ring_create(mem_name,
-			rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_MEMBER_TX_PKTS + 1), socket_id, 0);
 
 	if (port->tx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+		rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 }
 
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
-		uint16_t slave_id)
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
+		uint16_t member_id)
 {
 	void *pkt = NULL;
 	struct port *port = NULL;
 	uint8_t old_partner_state;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	ACTOR_STATE_CLR(port, AGGREGATION);
 	port->selected = UNSELECTED;
@@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
 	old_partner_state = port->partner_state;
 	record_default(port);
 
-	bond_mode_8023ad_unregister_lacp_mac(slave_id);
+	bond_mode_8023ad_unregister_lacp_mac(member_id);
 
 	/* If partner timeout state changes then disable timer */
 	if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1176,30 @@ void
 bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
-	struct rte_ether_addr slave_addr;
-	struct port *slave, *agg_slave;
-	uint16_t slave_id, i, j;
+	struct rte_ether_addr member_addr;
+	struct port *member, *agg_member;
+	uint16_t member_id, i, j;
 
 	bond_mode_8023ad_stop(bond_dev);
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		slave = &bond_mode_8023ad_ports[slave_id];
-		rte_eth_macaddr_get(slave_id, &slave_addr);
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		member = &bond_mode_8023ad_ports[member_id];
+		rte_eth_macaddr_get(member_id, &member_addr);
 
-		if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+		if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
 			continue;
 
-		rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+		rte_ether_addr_copy(&member_addr, &member->actor.system);
 		/* Do nothing if this port is not an aggregator. Otherwise,
 		 * set the NTT flag on every port that uses this aggregator. */
-		if (slave->aggregator_port_id != slave_id)
+		if (member->aggregator_port_id != member_id)
 			continue;
 
-		for (j = 0; j < internals->active_slave_count; j++) {
-			agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
-			if (agg_slave->aggregator_port_id == slave_id)
-				SM_FLAG_SET(agg_slave, NTT);
+		for (j = 0; j < internals->active_member_count; j++) {
+			agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
+			if (agg_member->aggregator_port_id == member_id)
+				SM_FLAG_SET(agg_member, NTT);
 		}
 	}
 
@@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	uint16_t i;
 
-	for (i = 0; i < internals->active_slave_count; i++)
-		bond_mode_8023ad_activate_slave(bond_dev,
-				internals->active_slaves[i]);
+	for (i = 0; i < internals->active_member_count; i++)
+		bond_mode_8023ad_activate_member(bond_dev,
+				internals->active_members[i]);
 
 	return 0;
 }
@@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
 
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				  uint16_t slave_id, struct rte_mbuf *pkt)
+				  uint16_t member_id, struct rte_mbuf *pkt)
 {
 	struct mode8023ad_private *mode4 = &internals->mode4;
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	struct marker_header *m_hdr;
 	uint64_t marker_timer, old_marker_timer;
 	int retval;
@@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 		} while (unlikely(retval == 0));
 
 		m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
-		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+		rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
 
 		if (internals->mode4.dedicated_queues.enabled == 0) {
 			if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 			}
 		} else {
 			/* Send packet directly to the slow queue */
-			uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+			uint16_t tx_count = rte_eth_tx_prepare(member_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, 1);
-			tx_count = rte_eth_tx_burst(slave_id,
+			tx_count = rte_eth_tx_burst(member_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, tx_count);
 			if (tx_count != 1) {
@@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 				goto free_out;
 			}
 		} else
-			rx_machine_update(internals, slave_id, pkt);
+			rx_machine_update(internals, member_id, pkt);
 	} else {
 		wrn = WRN_UNKNOWN_SLOW_TYPE;
 		goto free_out;
@@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 
 
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *info)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 	bond_dev = &rte_eth_devices[port_id];
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_member_by_id(internals->active_members,
+			internals->active_member_count, member_id) ==
+				internals->active_member_count)
 		return -EINVAL;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	info->selected = port->selected;
 
 	info->actor_state = port->actor_state;
@@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 }
 
 static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 		return -EINVAL;
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_member_by_id(internals->active_members,
+			internals->active_member_count, member_id) ==
+				internals->active_member_count)
 		return -EINVAL;
 
 	mode4 = &internals->mode4;
@@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 }
 
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, member_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	return ACTOR_STATE(port, DISTRIBUTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, member_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	return ACTOR_STATE(port, COLLECTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
 		return -EINVAL;
@@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 	struct mode8023ad_private *mode4 = &internals->mode4;
 	struct port *port;
 	void *pkt = NULL;
-	uint16_t i, slave_id;
+	uint16_t i, member_id;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		port = &bond_mode_8023ad_ports[member_id];
 
 		if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
 			struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 			/* This is LACP frame so pass it to rx callback.
 			 * Callback is responsible for freeing mbuf.
 			 */
-			mode4->slowrx_cb(slave_id, lacp_pkt);
+			mode4->slowrx_cb(member_id, lacp_pkt);
 		}
 	}
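
For reference, a minimal sketch of an external-mode slow-Rx callback after the rename (illustrative only, not part of this patch; the callback name and body are assumptions, since only the parameter name changes while the signature and the rule that the callback frees the mbuf stay the same):

#include <stdint.h>
#include <rte_mbuf.h>

/* Matches the renamed rte_eth_bond_8023ad_ext_slowrx_fn prototype. */
static void
app_lacp_slow_rx(uint16_t member_id, struct rte_mbuf *lacp_pkt)
{
	/* Application-specific LACPDU handling would go here. */
	(void)member_id;

	/* The callback owns the mbuf and must free it. */
	rte_pktmbuf_free(lacp_pkt);
}

Such a callback is registered, as before, through the slowrx_cb field of the rte_eth_bond_8023ad_conf passed to rte_eth_bond_8023ad_setup().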
 
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00b..3144ee378a 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
 #define MARKER_TLV_TYPE_INFO                0x01
 #define MARKER_TLV_TYPE_RESP                0x02
 
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
 						  struct rte_mbuf *lacp_pkt);
 
 enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
 	uint16_t system_priority;
 	/**< System priority (unused in current implementation) */
 	struct rte_ether_addr system;
-	/**< System ID - Slave MAC address, same as bonding MAC address */
+	/**< System ID - Member MAC address, same as bonding MAC address */
 	uint16_t key;
 	/**< Speed information (implementation dependent) and duplex. */
 	uint16_t port_priority;
 	/**< Priority of this (unused in current implementation) */
 	uint16_t port_number;
-	/**< Port number. It corresponds to slave port id. */
+	/**< Port number. It corresponds to member port id. */
 } __rte_packed __rte_aligned(2);
 
 struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
 	enum rte_bond_8023ad_agg_selection agg_selection;
 };
 
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_member_info {
 	enum rte_bond_8023ad_selection selected;
 	uint8_t actor_state;
 	struct port_params actor;
@@ -184,104 +184,113 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 /**
  * @internal
  *
- * Function returns current state of given slave device.
+ * Function returns current state of given member device.
  *
- * @param slave_id  Port id of valid slave.
+ * @param member_id  Port id of valid member.
  * @param conf		buffer for configuration
  * @return
  *   0 - if ok
- *   -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ *   -EINVAL if conf is NULL or member id is invalid (not a member of given
  *       bonded device or is not inactive).
  */
+__rte_experimental
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *conf);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *conf)
+{
+	return rte_eth_bond_8023ad_member_info(port_id, member_id, conf);
+}
 
 #ifdef __cplusplus
 }
 #endif
 
 /**
- * Configure a slave port to start collecting.
+ * Configure a member port to start collecting.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @param enabled	Non-zero when collection enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled);
 
 /**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from member port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
 
 /**
- * Configure a slave port to start distributing.
+ * Configure a member port to start distributing.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @param enabled	Non-zero when distribution enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled);
 
 /**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from member port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
 
 /**
  * LACPDU transmit path for external 802.3ad state machine.  Caller retains
  * ownership of the packet on failure.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port ID of valid slave device.
+ * @param member_id	Port ID of valid member device.
  * @param lacp_pkt	mbuf containing LACPDU.
  *
  * @return
  *   0 on success, negative value otherwise.
  */
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt);
 
 /**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on members
  *
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each member for
  * dedicated 802.3ad control plane traffic. A flow filtering rule is
 - * programmed on each slave to redirect all LACP slow packets to that rx queue
 + * programmed on each member to redirect all LACP slow packets to that rx queue
  * for processing in the LACP state machine; this removes the need to filter
  * these packets in the bonded device's data path. The additional tx queue is
  * used to enable the LACP state machine to enqueue LACP packets directly to
 - * slave hw independently of the bonded devices data path.
 + * member hw independently of the bonded device's data path.
  *
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all members must support the programming of the flow
  * filter rule required for rx and have enough queues that one rx and tx queue
  * can be reserved for the LACP state machines control packets.
  *
@@ -296,7 +305,7 @@ int
 rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
 
 /**
- * Disable slow queue on slaves
+ * Disable slow queue on members
  *
  * This function disables hardware slow packet filter.
  *
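
To make the renamed query API concrete, a hedged usage sketch (not part of the patch; the helper name and both port ids are placeholders chosen for illustration):

#include <stdio.h>
#include <rte_eth_bond_8023ad.h>

/* Dump the 802.3ad state of one member of a bonding port. */
static void
dump_member_state(uint16_t bond_port_id, uint16_t member_port_id)
{
	struct rte_eth_bond_8023ad_member_info info;

	if (rte_eth_bond_8023ad_member_info(bond_port_id, member_port_id, &info) == 0)
		printf("member %u: selected=%d actor_state=0x%x\n",
		       member_port_id, (int)info.selected,
		       (unsigned int)info.actor_state);
}

The old function name is kept only as the deprecated static inline wrapper above, and that wrapper already takes the renamed rte_eth_bond_8023ad_member_info structure, so callers migrating gradually still need to switch the struct type; they also get a deprecation warning at build time, and the new symbol is marked __rte_experimental.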
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a7971..56945e2349 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
 }
 
 static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_member(struct bond_dev_private *internals)
 {
 	uint16_t idx;
 
-	idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
-	internals->mode6.last_slave = idx;
-	return internals->active_slaves[idx];
+	idx = (internals->mode6.last_member + 1) % internals->active_member_count;
+	internals->mode6.last_member = idx;
+	return internals->active_members[idx];
 }
 
 int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
 	/* Fill hash table with initial values */
 	memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
 	rte_spinlock_init(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_member = ALB_NULL_INDEX;
 	internals->mode6.ntt = 0;
 
 	/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 	/*
 	 * We got a reply for the ARP Request sent by the application. We need to
 	 * update client table when received data differ from what is stored
-	 * in ALB table and issue sending update packet to that slave.
+	 * in the ALB table and issue an update packet to that member.
 	 */
 	rte_spinlock_lock(&internals->mode6.lock);
 	if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		client_info->cli_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_sha,
 				&client_info->cli_mac);
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->member_idx = calculate_member(internals);
+		rte_eth_macaddr_get(client_info->member_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
 						&arp->arp_data.arp_tha,
 						&client_info->cli_mac);
 				}
-				rte_eth_macaddr_get(client_info->slave_idx,
+				rte_eth_macaddr_get(client_info->member_idx,
 						&client_info->app_mac);
 				rte_ether_addr_copy(&client_info->app_mac,
 						&arp->arp_data.arp_sha);
 				memcpy(client_info->vlan, eth_h + 1, offset);
 				client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 				rte_spinlock_unlock(&internals->mode6.lock);
-				return client_info->slave_idx;
+				return client_info->member_idx;
 			}
 		}
 
-		/* Assign new slave to this client and update src mac in ARP */
+		/* Assign new member to this client and update src mac in ARP */
 		client_info->in_use = 1;
 		client_info->ntt = 0;
 		client_info->app_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_tha,
 				&client_info->cli_mac);
 		client_info->cli_ip = arp->arp_data.arp_tip;
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->member_idx = calculate_member(internals);
+		rte_eth_macaddr_get(client_info->member_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_sha);
 		memcpy(client_info->vlan, eth_h + 1, offset);
 		client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 		rte_spinlock_unlock(&internals->mode6.lock);
-		return client_info->slave_idx;
+		return client_info->member_idx;
 	}
 
 	/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 {
 	struct rte_ether_hdr *eth_h;
 	struct rte_arp_hdr *arp_h;
-	uint16_t slave_idx;
+	uint16_t member_idx;
 
 	rte_spinlock_lock(&internals->mode6.lock);
 	eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 	arp_h->arp_plen = sizeof(uint32_t);
 	arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 
-	slave_idx = client_info->slave_idx;
+	member_idx = client_info->member_idx;
 	rte_spinlock_unlock(&internals->mode6.lock);
 
-	return slave_idx;
+	return member_idx;
 }
 
 void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
 
 	int i;
 
-	/* If active slave count is 0, it's pointless to refresh alb table */
-	if (internals->active_slave_count <= 0)
+	/* If active member count is 0, it's pointless to refresh alb table */
+	if (internals->active_member_count <= 0)
 		return;
 
 	rte_spinlock_lock(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_member = ALB_NULL_INDEX;
 
 	for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
 		client_info = &internals->mode6.client_table[i];
 		if (client_info->in_use) {
-			client_info->slave_idx = calculate_slave(internals);
-			rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+			client_info->member_idx = calculate_member(internals);
+			rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
 			internals->mode6.ntt = 1;
 		}
 	}
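
For clarity, the member assignment used by ALB mode is a plain round-robin over the active member list; a standalone sketch of that rule follows (hypothetical helper, not part of the patch):

#include <stdint.h>

/*
 * Round-robin pick, logically equivalent to calculate_member() above.
 * The caller guarantees active_count > 0 (see the check in
 * bond_mode_alb_client_list_upd()).
 */
static uint16_t
next_member(const uint16_t *active_members, uint16_t active_count,
	    uint32_t *last_member)
{
	uint16_t idx = (*last_member + 1) % active_count;

	*last_member = idx;
	return active_members[idx];
}

With active members {2, 5, 7}, successive new clients are simply cycled through those ports in order, and bond_mode_alb_client_list_upd() re-applies the same rule to every in-use entry whenever the active member set changes.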
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc..beb2e619f9 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
 	uint32_t cli_ip;
 	/**< Client IP address */
 
-	uint16_t slave_idx;
-	/**< Index of slave on which we connect with that client */
+	uint16_t member_idx;
+	/**< Index of member on which we connect with that client */
 	uint8_t in_use;
 	/**< Flag indicating if entry in client table is currently used */
 	uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
 	/**< Mempool for creating ARP update packets */
 	uint8_t ntt;
 	/**< Flag indicating if we need to send update to any client on next tx */
-	uint32_t last_slave;
-	/**< Index of last used slave in client table */
+	uint32_t last_member;
+	/**< Index of last used member in client table */
 	rte_spinlock_t lock;
 };
 
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		struct bond_dev_private *internals);
 
 /**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which member
+ * to send that packet. If the packet is an ARP Request, it is sent on the primary
+ * member. If it is an ARP Reply, it is sent on the member stored in the client table for that
  * connection. On Reply function also updates data in client table.
  *
  * @param eth_h			ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_upd(struct client_data *client_info,
 		struct rte_mbuf *pkt, struct bond_dev_private *internals);
 
 /**
- * Function updates slave indexes of active connections.
+ * Function updates member indexes of active connections.
  *
  * @param bond_dev		Pointer to bonded device struct.
  */
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b4..b6512a098a 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
 }
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 {
 	int i;
 	struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	/* Check if any of slave devices is a bonded device */
-	for (i = 0; i < internals->slave_count; i++)
-		if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+	/* Check if any of member devices is a bonded device */
+	for (i = 0; i < internals->member_count; i++)
+		if (valid_bonded_port_id(internals->members[i].port_id) == 0)
 			return 1;
 
 	return 0;
 }
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
 {
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
 
-	/* Verify that slave_port_id refers to a non bonded port */
-	if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+	/* Verify that member_port_id refers to a non bonded port */
+	if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
 			internals->mode == BONDING_MODE_8023AD) {
-		RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
-				" mode as slave is also a bonded device, only "
+		RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
+				" mode as member is also a bonded device, only "
 				"physical devices can be supported in this mode.");
 		return -1;
 	}
 
-	if (internals->port_id == slave_port_id) {
+	if (internals->port_id == member_port_id) {
 		RTE_BOND_LOG(ERR,
-			"Cannot add the bonded device itself as its slave.");
+			"Cannot add the bonded device itself as its member.");
 		return -1;
 	}
 
@@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
 }
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_member_count;
 
 	if (internals->mode == BONDING_MODE_8023AD)
-		bond_mode_8023ad_activate_slave(eth_dev, port_id);
+		bond_mode_8023ad_activate_member(eth_dev, port_id);
 
 	if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB) {
 
-		internals->tlb_slaves_order[active_count] = port_id;
+		internals->tlb_members_order[active_count] = port_id;
 	}
 
-	RTE_ASSERT(internals->active_slave_count <
-			(RTE_DIM(internals->active_slaves) - 1));
+	RTE_ASSERT(internals->active_member_count <
+			(RTE_DIM(internals->active_members) - 1));
 
-	internals->active_slaves[internals->active_slave_count] = port_id;
-	internals->active_slave_count++;
+	internals->active_members[internals->active_member_count] = port_id;
+	internals->active_member_count++;
 
 	if (internals->mode == BONDING_MODE_TLB)
-		bond_tlb_activate_slave(internals);
+		bond_tlb_activate_member(internals);
 	if (internals->mode == BONDING_MODE_ALB)
 		bond_mode_alb_client_list_upd(eth_dev);
 }
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
-	uint16_t slave_pos;
+	uint16_t member_pos;
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_member_count;
 
 	if (internals->mode == BONDING_MODE_8023AD) {
 		bond_mode_8023ad_stop(eth_dev);
-		bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+		bond_mode_8023ad_deactivate_member(eth_dev, port_id);
 	} else if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB)
 		bond_tlb_disable(internals);
 
-	slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+	member_pos = find_member_by_id(internals->active_members, active_count,
 			port_id);
 
-	/* If slave was not at the end of the list
-	 * shift active slaves up active array list */
-	if (slave_pos < active_count) {
+	/*
+	 * If the member was not at the end of the list,
+	 * shift the remaining active members up in the array.
+	 */
+	if (member_pos < active_count) {
 		active_count--;
-		memmove(internals->active_slaves + slave_pos,
-				internals->active_slaves + slave_pos + 1,
-				(active_count - slave_pos) *
-					sizeof(internals->active_slaves[0]));
+		memmove(internals->active_members + member_pos,
+				internals->active_members + member_pos + 1,
+				(active_count - member_pos) *
+					sizeof(internals->active_members[0]));
 	}
 
-	RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
-	internals->active_slave_count = active_count;
+	RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
+	internals->active_member_count = active_count;
 
 	if (eth_dev->data->dev_started) {
 		if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
 }
 
 static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 			if (unlikely(slab & mask)) {
 				uint16_t vlan_id = pos + i;
 
-				res = rte_eth_dev_vlan_filter(slave_port_id,
+				res = rte_eth_dev_vlan_filter(member_port_id,
 							      vlan_id, 1);
 			}
 		}
@@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
 
 static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
 {
 	struct rte_flow *flow;
 	struct rte_flow_error ferror;
-	uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+	uint16_t member_port_id = internals->members[member_id].port_id;
 
 	if (internals->flow_isolated_valid != 0) {
-		if (rte_eth_dev_stop(slave_port_id) != 0) {
+		if (rte_eth_dev_stop(member_port_id) != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_port_id);
+				     member_port_id);
 			return -1;
 		}
 
-		if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+		if (rte_flow_isolate(member_port_id, internals->flow_isolated,
 		    &ferror)) {
-			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
-				     " %d: %s", slave_id, ferror.message ?
+			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
+				     " %d: %s", member_id, ferror.message ?
 				     ferror.message : "(no stated reason)");
 			return -1;
 		}
 	}
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		flow->flows[slave_id] = rte_flow_create(slave_port_id,
+		flow->flows[member_id] = rte_flow_create(member_port_id,
 							flow->rule.attr,
 							flow->rule.pattern,
 							flow->rule.actions,
 							&ferror);
-		if (flow->flows[slave_id] == NULL) {
-			RTE_BOND_LOG(ERR, "Cannot create flow for slave"
-				     " %d: %s", slave_id,
+		if (flow->flows[member_id] == NULL) {
+			RTE_BOND_LOG(ERR, "Cannot create flow for member"
+				     " %d: %s", member_id,
 				     ferror.message ? ferror.message :
 				     "(no stated reason)");
-			/* Destroy successful bond flows from the slave */
+			/* Destroy successful bond flows from the member */
 			TAILQ_FOREACH(flow, &internals->flow_list, next) {
-				if (flow->flows[slave_id] != NULL) {
-					rte_flow_destroy(slave_port_id,
-							 flow->flows[slave_id],
+				if (flow->flows[member_id] != NULL) {
+					rte_flow_destroy(member_port_id,
+							 flow->flows[member_id],
 							 &ferror);
-					flow->flows[slave_id] = NULL;
+					flow->flows[member_id] = NULL;
 				}
 			}
 			return -1;
@@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	internals->reta_size = di->reta_size;
 	internals->rss_key_len = di->hash_key_size;
 
-	/* Inherit Rx offload capabilities from the first slave device */
+	/* Inherit Rx offload capabilities from the first member device */
 	internals->rx_offload_capa = di->rx_offload_capa;
 	internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
 	internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
 
-	/* Inherit maximum Rx packet size from the first slave device */
+	/* Inherit maximum Rx packet size from the first member device */
 	internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
 
-	/* Inherit default Rx queue settings from the first slave device */
+	/* Inherit default Rx queue settings from the first member device */
 	memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * member devices. Applications may tweak this setting if need be.
 	 */
 	rxconf_i->rx_thresh.pthresh = 0;
 	rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	/* Setting this to zero should effectively enable default values */
 	rxconf_i->rx_free_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all member devices */
 	rxconf_i->rx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
 
-	/* Inherit Tx offload capabilities from the first slave device */
+	/* Inherit Tx offload capabilities from the first member device */
 	internals->tx_offload_capa = di->tx_offload_capa;
 	internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
 
-	/* Inherit default Tx queue settings from the first slave device */
+	/* Inherit default Tx queue settings from the first member device */
 	memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * member devices. Applications may tweak this setting if need be.
 	 */
 	txconf_i->tx_thresh.pthresh = 0;
 	txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 
 	/*
 	 * Setting these parameters to zero assumes that default
-	 * values will be configured implicitly by slave devices.
+	 * values will be configured implicitly by member devices.
 	 */
 	txconf_i->tx_free_thresh = 0;
 	txconf_i->tx_rs_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all member devices */
 	txconf_i->tx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 	internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
 
 	/*
-	 * If at least one slave device suggests enabling this
-	 * setting by default, enable it for all slave devices
+	 * If at least one member device suggests enabling this
+	 * setting by default, enable it for all member devices
 	 * since disabling it may not be necessarily supported.
 	 */
 	if (rxconf->rx_drop_en == 1)
 		rxconf_i->rx_drop_en = 1;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new member device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal rx_queue_offload_capa
 	 * value. Thus, the new internal value of default Rx queue offloads
 	 * has to be masked by rx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new member device.
 	 */
 	rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
 			     internals->rx_queue_offload_capa;
 
 	/*
-	 * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+	 * RETA size is the GCD of all members' RETA sizes, so, if all sizes are
 	 * powers of 2, the lowest one is the GCD.
 	 */
 	if (internals->reta_size > di->reta_size)
 		internals->reta_size = di->reta_size;
 	if (internals->rss_key_len > di->hash_key_size) {
-		RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+		RTE_BOND_LOG(WARNING, "member has different rss key size, "
 				"configuring rss may fail");
 		internals->rss_key_len = di->hash_key_size;
 	}
@@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 	internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new member device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal tx_queue_offload_capa
 	 * value. Thus, the new internal value of default Tx queue offloads
 	 * has to be masked by tx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new member device.
 	 */
 	txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
 			     internals->tx_queue_offload_capa;
 }
 
 static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *member_desc_lim)
 {
-	memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+	memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
 }
 
 static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *member_desc_lim)
 {
 	bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
-					slave_desc_lim->nb_max);
+					member_desc_lim->nb_max);
 	bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
-					slave_desc_lim->nb_min);
+					member_desc_lim->nb_min);
 	bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
-					  slave_desc_lim->nb_align);
+					  member_desc_lim->nb_align);
 
 	if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
 	    bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
 	}
 
 	/* Treat maximum number of segments equal to 0 as unspecified */
-	if (slave_desc_lim->nb_seg_max != 0 &&
+	if (member_desc_lim->nb_seg_max != 0 &&
 	    (bond_desc_lim->nb_seg_max == 0 ||
-	     slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
-		bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
-	if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+	     member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+		bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
+	if (member_desc_lim->nb_mtu_seg_max != 0 &&
 	    (bond_desc_lim->nb_mtu_seg_max == 0 ||
-	     slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
-		bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+	     member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+		bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
 
 	return 0;
 }
 
 static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
 {
-	struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+	struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
 	struct bond_dev_private *internals;
 	struct rte_eth_link link_props;
 	struct rte_eth_dev_info dev_info;
@@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
-		RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+	member_eth_dev = &rte_eth_devices[member_port_id];
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_MEMBER) {
+		RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+	ret = rte_eth_dev_info_get(member_port_id, &dev_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port_id, strerror(-ret));
+			__func__, member_port_id, strerror(-ret));
 
 		return ret;
 	}
 	if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
-			     slave_port_id);
+		RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
+			     member_port_id);
 		return -1;
 	}
 
-	slave_add(internals, slave_eth_dev);
+	member_add(internals, member_eth_dev);
 
-	/* We need to store slaves reta_size to be able to synchronize RETA for all
-	 * slave devices even if its sizes are different.
+	/* We need to store the members' reta_size to be able to synchronize RETA for all
+	 * member devices even if their sizes are different.
 	 */
-	internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+	internals->members[internals->member_count].reta_size = dev_info.reta_size;
 
-	if (internals->slave_count < 1) {
-		/* if MAC is not user defined then use MAC of first slave add to
+	if (internals->member_count < 1) {
+		/* if MAC is not user defined then use MAC of the first member added to
 		 * bonded device */
 		if (!internals->user_defined_mac) {
 			if (mac_address_set(bonded_eth_dev,
-					    slave_eth_dev->data->mac_addrs)) {
+					    member_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to set MAC address");
 				return -1;
 			}
 		}
 
-		/* Make primary slave */
-		internals->primary_port = slave_port_id;
-		internals->current_primary_port = slave_port_id;
+		/* Make primary member */
+		internals->primary_port = member_port_id;
+		internals->current_primary_port = member_port_id;
 
 		internals->speed_capa = dev_info.speed_capa;
 
-		/* Inherit queues settings from first slave */
-		internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
-		internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+		/* Inherit queues settings from first member */
+		internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
+		internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
 
-		eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
 
-		eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+		eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
 						      &dev_info.rx_desc_lim);
-		eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+		eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
 						      &dev_info.tx_desc_lim);
 	} else {
 		int ret;
 
 		internals->speed_capa &= dev_info.speed_capa;
-		eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
-				&internals->rx_desc_lim, &dev_info.rx_desc_lim);
+		ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
+							&dev_info.rx_desc_lim);
 		if (ret != 0)
 			return ret;
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
-				&internals->tx_desc_lim, &dev_info.tx_desc_lim);
+		ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
+							&dev_info.tx_desc_lim);
 		if (ret != 0)
 			return ret;
 	}
@@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
 			internals->flow_type_rss_offloads;
 
-	if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
-			     slave_port_id);
+	if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
+			     member_port_id);
 		return -1;
 	}
 
-	/* Add additional MAC addresses to the slave */
-	if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
-				slave_port_id);
+	/* Add additional MAC addresses to the member */
+	if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
+				member_port_id);
 		return -1;
 	}
 
-	internals->slave_count++;
+	internals->member_count++;
 
 	if (bonded_eth_dev->data->dev_started) {
-		if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
-					slave_port_id);
+		if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
+			internals->member_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
+					member_port_id);
 			return -1;
 		}
-		if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
-					slave_port_id);
+		if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
+			internals->member_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
+					member_port_id);
 			return -1;
 		}
 	}
 
-	/* Update all slave devices MACs */
-	mac_address_slaves_update(bonded_eth_dev);
+	/* Update all member devices' MACs */
+	mac_address_members_update(bonded_eth_dev);
 
 	/* Register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
 
-	/* If bonded device is started then we can add the slave to our active
-	 * slave array */
+	/*
+	 * If bonded device is started then we can add the member to our active
+	 * member array.
+	 */
 	if (bonded_eth_dev->data->dev_started) {
-		ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+		ret = rte_eth_link_get_nowait(member_port_id, &link_props);
 		if (ret < 0) {
-			rte_eth_dev_callback_unregister(slave_port_id,
+			rte_eth_dev_callback_unregister(member_port_id,
 					RTE_ETH_EVENT_INTR_LSC,
 					bond_ethdev_lsc_event_callback,
 					&bonded_eth_dev->data->port_id);
-			internals->slave_count--;
+			internals->member_count--;
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_port_id, rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				member_port_id, rte_strerror(-ret));
 			return -1;
 		}
 
 		if (link_props.link_status == RTE_ETH_LINK_UP) {
-			if (internals->active_slave_count == 0 &&
+			if (internals->active_member_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
-							slave_port_id);
+							member_port_id);
 		}
 	}
 
-	/* Add slave details to bonded device */
-	slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+	/* Add member details to bonded device */
+	member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_MEMBER;
 
-	slave_vlan_filter_set(bonded_port_id, slave_port_id);
+	member_vlan_filter_set(bonded_port_id, member_port_id);
 
 	return 0;
 
 }
 
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -650,93 +654,95 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
 
 static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
-				   uint16_t slave_port_id)
+__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
+				   uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct rte_flow_error flow_error;
 	struct rte_flow *flow;
-	int i, slave_idx;
+	int i, member_idx;
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) < 0)
+	if (valid_member_port_id(internals, member_port_id) < 0)
 		return -1;
 
-	/* first remove from active slave list */
-	slave_idx = find_slave_by_id(internals->active_slaves,
-		internals->active_slave_count, slave_port_id);
+	/* first remove from active member list */
+	member_idx = find_member_by_id(internals->active_members,
+		internals->active_member_count, member_port_id);
 
-	if (slave_idx < internals->active_slave_count)
-		deactivate_slave(bonded_eth_dev, slave_port_id);
+	if (member_idx < internals->active_member_count)
+		deactivate_member(bonded_eth_dev, member_port_id);
 
-	slave_idx = -1;
-	/* now find in slave list */
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id == slave_port_id) {
-			slave_idx = i;
+	member_idx = -1;
+	/* now find in member list */
+	for (i = 0; i < internals->member_count; i++)
+		if (internals->members[i].port_id == member_port_id) {
+			member_idx = i;
 			break;
 		}
 
-	if (slave_idx < 0) {
-		RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
-				internals->slave_count);
+	if (member_idx < 0) {
+		RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
+				internals->member_count);
 		return -1;
 	}
 
 	/* Un-register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback,
 			&rte_eth_devices[bonded_port_id].data->port_id);
 
-	/* Restore original MAC address of slave device */
-	rte_eth_dev_default_mac_addr_set(slave_port_id,
-			&(internals->slaves[slave_idx].persisted_mac_addr));
+	/* Restore original MAC address of member device */
+	rte_eth_dev_default_mac_addr_set(member_port_id,
+			&internals->members[member_idx].persisted_mac_addr);
 
-	/* remove additional MAC addresses from the slave */
-	slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+	/* remove additional MAC addresses from the member */
+	member_remove_mac_addresses(bonded_eth_dev, member_port_id);
 
 	/*
-	 * Remove bond device flows from slave device.
+	 * Remove bond device flows from member device.
 	 * Note: don't restore flow isolate mode.
 	 */
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		if (flow->flows[slave_idx] != NULL) {
-			rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+		if (flow->flows[member_idx] != NULL) {
+			rte_flow_destroy(member_port_id, flow->flows[member_idx],
 					 &flow_error);
-			flow->flows[slave_idx] = NULL;
+			flow->flows[member_idx] = NULL;
 		}
 	}
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	slave_remove(internals, slave_eth_dev);
-	slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+	member_eth_dev = &rte_eth_devices[member_port_id];
+	member_remove(internals, member_eth_dev);
+	member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_MEMBER);
 
-	/*  first slave in the active list will be the primary by default,
+	/*  first member in the active list will be the primary by default,
 	 *  otherwise use first device in list */
-	if (internals->current_primary_port == slave_port_id) {
-		if (internals->active_slave_count > 0)
-			internals->current_primary_port = internals->active_slaves[0];
-		else if (internals->slave_count > 0)
-			internals->current_primary_port = internals->slaves[0].port_id;
+	if (internals->current_primary_port == member_port_id) {
+		if (internals->active_member_count > 0)
+			internals->current_primary_port = internals->active_members[0];
+		else if (internals->member_count > 0)
+			internals->current_primary_port = internals->members[0].port_id;
 		else
 			internals->primary_port = 0;
-		mac_address_slaves_update(bonded_eth_dev);
+		mac_address_members_update(bonded_eth_dev);
 	}
 
-	if (internals->active_slave_count < 1) {
-		/* if no slaves are any longer attached to bonded device and MAC is not
+	if (internals->active_member_count < 1) {
+		/*
+		 * if no members are attached to the bonded device any longer and MAC is not
 		 * user defined then clear MAC of bonded device as it will be reset
-		 * when a new slave is added */
-		if (internals->slave_count < 1 && !internals->user_defined_mac)
+		 * when a new member is added.
+		 */
+		if (internals->member_count < 1 && !internals->user_defined_mac)
 			memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
 					sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
 	}
-	if (internals->slave_count == 0) {
+	if (internals->member_count == 0) {
 		internals->rx_offload_capa = 0;
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
@@ -750,7 +756,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 }
 
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -764,7 +770,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -781,7 +787,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 
-	if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+	if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
 			mode == BONDING_MODE_8023AD)
 		return -1;
 
@@ -802,7 +808,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
 }
 
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct bond_dev_private *internals;
 
@@ -811,13 +817,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
 	internals->user_defined_primary_port = 1;
-	internals->primary_port = slave_port_id;
+	internals->primary_port = member_port_id;
 
-	bond_ethdev_primary_set(internals, slave_port_id);
+	bond_ethdev_primary_set(internals, member_port_id);
 
 	return 0;
 }
@@ -832,14 +838,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count < 1)
+	if (internals->member_count < 1)
 		return -1;
 
 	return internals->current_primary_port;
 }
 
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
 			uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -848,22 +854,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (members == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count > len)
+	if (internals->member_count > len)
 		return -1;
 
-	for (i = 0; i < internals->slave_count; i++)
-		slaves[i] = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++)
+		members[i] = internals->members[i].port_id;
 
-	return internals->slave_count;
+	return internals->member_count;
 }
 
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
 		uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -871,18 +877,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (members == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->active_slave_count > len)
+	if (internals->active_member_count > len)
 		return -1;
 
-	memcpy(slaves, internals->active_slaves,
-	internals->active_slave_count * sizeof(internals->active_slaves[0]));
+	memcpy(members, internals->active_members,
+	internals->active_member_count * sizeof(internals->active_members[0]));
 
-	return internals->active_slave_count;
+	return internals->active_member_count;
 }
 
 int
@@ -904,9 +910,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 
 	internals->user_defined_mac = 1;
 
-	/* Update all slave devices MACs*/
-	if (internals->slave_count > 0)
-		return mac_address_slaves_update(bonded_eth_dev);
+	/* Update all member devices' MACs */
+	if (internals->member_count > 0)
+		return mac_address_members_update(bonded_eth_dev);
 
 	return 0;
 }
@@ -925,30 +931,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
 
 	internals->user_defined_mac = 0;
 
-	if (internals->slave_count > 0) {
-		int slave_port;
-		/* Get the primary slave location based on the primary port
-		 * number as, while slave_add(), we will keep the primary
-		 * slave based on slave_count,but not based on the primary port.
+	if (internals->member_count > 0) {
+		int member_port;
+		/* Get the primary member location based on the primary port
+		 * number as, while member_add(), we will keep the primary
+		 * member based on member_count, but not based on the primary port.
 		 */
-		for (slave_port = 0; slave_port < internals->slave_count;
-		     slave_port++) {
-			if (internals->slaves[slave_port].port_id ==
+		for (member_port = 0; member_port < internals->member_count;
+		     member_port++) {
+			if (internals->members[member_port].port_id ==
 			    internals->primary_port)
 				break;
 		}
 
 		/* Set MAC Address of Bonded Device */
 		if (mac_address_set(bonded_eth_dev,
-			&internals->slaves[slave_port].persisted_mac_addr)
+			&internals->members[member_port].persisted_mac_addr)
 				!= 0) {
 			RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
 			return -1;
 		}
-		/* Update all slave devices MAC addresses */
-		return mac_address_slaves_update(bonded_eth_dev);
+		/* Update all member devices' MAC addresses */
+		return mac_address_members_update(bonded_eth_dev);
 	}
-	/* No need to update anything as no slaves present */
+	/* No need to update anything as no members present */
 	return 0;
 }
 
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5c..cbc905f700 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
 #include "eth_bond_private.h"
 
 const char *pmd_bond_init_valid_arguments[] = {
-	PMD_BOND_SLAVE_PORT_KVARG,
-	PMD_BOND_PRIMARY_SLAVE_KVARG,
+	PMD_BOND_MEMBER_PORT_KVARG,
+	PMD_BOND_PRIMARY_MEMBER_KVARG,
 	PMD_BOND_MODE_KVARG,
 	PMD_BOND_XMIT_POLICY_KVARG,
 	PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
 }
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
 		const char *value, void *extra_args)
 {
-	struct bond_ethdev_slave_ports *slave_ports;
+	struct bond_ethdev_member_ports *member_ports;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	slave_ports = extra_args;
+	member_ports = extra_args;
 
-	if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+	if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
 		int port_id = parse_port_id(value);
 		if (port_id < 0) {
-			RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+			RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
 				     value);
 			return -1;
 		} else
-			slave_ports->slaves[slave_ports->slave_count++] =
+			member_ports->members[member_ports->member_count++] =
 					port_id;
 	}
 	return 0;
 }
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
 	case BONDING_MODE_ALB:
 		return 0;
 	default:
-		RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+		RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
 		return -1;
 	}
 }
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
 }
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
-	int primary_slave_port_id;
+	int primary_member_port_id;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	primary_slave_port_id = parse_port_id(value);
-	if (primary_slave_port_id < 0)
+	primary_member_port_id = parse_port_id(value);
+	if (primary_member_port_id < 0)
 		return -1;
 
-	*(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+	*(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
 
 	return 0;
 }
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae7..71a91675f7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_validate(internals->members[i].port_id, attr,
 					patterns, actions, err);
 		if (ret) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for member %d with error %d", i, ret);
 			return ret;
 		}
 	}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				   NULL, rte_strerror(ENOMEM));
 		return NULL;
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		flow->flows[i] = rte_flow_create(internals->members[i].port_id,
 						 attr, patterns, actions, err);
 		if (unlikely(flow->flows[i] == NULL)) {
-			RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+			RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
 				     i);
 			goto err;
 		}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
 	return flow;
 err:
-	/* Destroy all slaves flows. */
-	for (i = 0; i < internals->slave_count; i++) {
+	/* Destroy all members flows. */
+	for (i = 0; i < internals->member_count; i++) {
 		if (flow->flows[i] != NULL)
-			rte_flow_destroy(internals->slaves[i].port_id,
+			rte_flow_destroy(internals->members[i].port_id,
 					 flow->flows[i], err);
 	}
 	bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int i;
 	int ret = 0;
 
-	for (i = 0; i < internals->slave_count; i++) {
+	for (i = 0; i < internals->member_count; i++) {
 		int lret;
 
 		if (unlikely(flow->flows[i] == NULL))
 			continue;
-		lret = rte_flow_destroy(internals->slaves[i].port_id,
+		lret = rte_flow_destroy(internals->members[i].port_id,
 					flow->flows[i], err);
 		if (unlikely(lret != 0)) {
-			RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+			RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
 				     " %d", i, lret);
 			ret = lret;
 		}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	int ret = 0;
 	int lret;
 
-	/* Destroy all bond flows from its slaves instead of flushing them to
+	/* Destroy all bond flows from its members instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
 	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 			ret = lret;
 	}
 	if (unlikely(ret != 0))
-		RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+		RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
 	return ret;
 }
 
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *err)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_flow_query_count slave_count;
+	struct rte_flow_query_count member_count;
 	int i;
 	int ret;
 
 	count->bytes = 0;
 	count->hits = 0;
-	rte_memcpy(&slave_count, count, sizeof(slave_count));
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_query(internals->slaves[i].port_id,
+	rte_memcpy(&member_count, count, sizeof(member_count));
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_query(internals->members[i].port_id,
 				     flow->flows[i], action,
-				     &slave_count, err);
+				     &member_count, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Failed to query flow on"
-				     " slave %d: %d", i, ret);
+				     " member %d: %d", i, ret);
 			return ret;
 		}
-		count->bytes += slave_count.bytes;
-		count->hits += slave_count.hits;
-		slave_count.bytes = 0;
-		slave_count.hits = 0;
+		count->bytes += member_count.bytes;
+		count->hits += member_count.hits;
+		member_count.bytes = 0;
+		member_count.hits = 0;
 	}
 	return 0;
 }
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_isolate(internals->members[i].port_id, set, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for member %d with error %d", i, ret);
 			internals->flow_isolated_valid = 0;
 			return ret;
 		}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b..0e17febcf6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct bond_dev_private *internals;
 
 	uint16_t num_rx_total = 0;
-	uint16_t slave_count;
-	uint16_t active_slave;
+	uint16_t member_count;
+	uint16_t active_member;
 	int i;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 	internals = bd_rx_q->dev_private;
-	slave_count = internals->active_slave_count;
-	active_slave = bd_rx_q->active_slave;
+	member_count = internals->active_member_count;
+	active_member = bd_rx_q->active_member;
 
-	for (i = 0; i < slave_count && nb_pkts; i++) {
-		uint16_t num_rx_slave;
+	for (i = 0; i < member_count && nb_pkts; i++) {
+		uint16_t num_rx_member;
 
-		/* Offset of pointer to *bufs increases as packets are received
-		 * from other slaves */
-		num_rx_slave =
-			rte_eth_rx_burst(internals->active_slaves[active_slave],
+		/*
+		 * Offset of pointer to *bufs increases as packets are received
+		 * from other members.
+		 */
+		num_rx_member =
+			rte_eth_rx_burst(internals->active_members[active_member],
 					 bd_rx_q->queue_id,
 					 bufs + num_rx_total, nb_pkts);
-		num_rx_total += num_rx_slave;
-		nb_pkts -= num_rx_slave;
-		if (++active_slave >= slave_count)
-			active_slave = 0;
+		num_rx_total += num_rx_member;
+		nb_pkts -= num_rx_member;
+		if (++active_member >= member_count)
+			active_member = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_member >= member_count)
+		bd_rx_q->active_member = 0;
 	return num_rx_total;
 }
 
@@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port) {
-	struct rte_eth_dev_info slave_info;
+		uint16_t member_port) {
+	struct rte_eth_dev_info member_info;
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
@@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
 		}
 	};
 
-	int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+	int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
 			flow_item_8023ad, actions, &error);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
-				__func__, error.message, slave_port,
+		RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
+				__func__, error.message, member_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port, &slave_info);
+	ret = rte_eth_dev_info_get(member_port, &member_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port, strerror(-ret));
+			__func__, member_port, strerror(-ret));
 
 		return ret;
 	}
 
-	if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
-			slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+	if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+			member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
 		RTE_BOND_LOG(ERR,
-			"%s: Slave %d capabilities doesn't allow allocating additional queues",
-			__func__, slave_port);
+			"%s: Member %d capabilities don't allow allocating additional queues",
+			__func__, member_port);
 		return -1;
 	}
 
@@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 	uint16_t idx;
 	int ret;
 
-	/* Verify if all slaves in bonding supports flow director and */
-	if (internals->slave_count > 0) {
+	/* Verify if all members in bonding support flow director and */
+	if (internals->member_count > 0) {
 		ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR,
@@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 		internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
 		internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
+		for (idx = 0; idx < internals->member_count; idx++) {
 			if (bond_ethdev_8023ad_flow_verify(bond_dev,
-					internals->slaves[idx].port_id) != 0)
+					internals->members[idx].port_id) != 0)
 				return -1;
 		}
 	}
@@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 }
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
 
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
 		}
 	};
 
-	internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+	internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
 			&flow_attr_8023ad, flow_item_8023ad, actions, &error);
-	if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+	if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
 		RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
-				"(slave_port=%d queue_id=%d)",
-				error.message, slave_port,
+				"(member_port=%d queue_id=%d)",
+				error.message, member_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
@@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	uint16_t slave_count, idx;
+	uint16_t members[RTE_MAX_ETHPORTS];
+	uint16_t member_count, idx;
 
-	uint8_t collecting;  /* current slave collecting status */
+	uint8_t collecting;  /* current member collecting status */
 	const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
 	const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
 	uint8_t subtype;
@@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	uint16_t j;
 	uint16_t k;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * slave_count);
+	member_count = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * member_count);
 
-	idx = bd_rx_q->active_slave;
-	if (idx >= slave_count) {
-		bd_rx_q->active_slave = 0;
+	idx = bd_rx_q->active_member;
+	if (idx >= member_count) {
+		bd_rx_q->active_member = 0;
 		idx = 0;
 	}
-	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+	for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
 					 COLLECTING);
 
-		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+		/* Read packets from this member */
+		num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 
 			/* Remove packet from array if:
 			 * - it is slow packet but no dedicated rxq is present,
-			 * - slave is not in collecting state,
+			 * - member is not in collecting state,
 			 * - bonding interface is not in promiscuous mode and
 			 *   packet address isn't in mac_addrs array:
 			 *   - packet is unicast,
@@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 				  !allmulti)))) {
 				if (hdr->ether_type == ether_type_slow_be) {
 					bond_mode_8023ad_handle_slow_pkt(
-					    internals, slaves[idx], bufs[j]);
+					    internals, members[idx], bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
 
@@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 			} else
 				j++;
 		}
-		if (unlikely(++idx == slave_count))
+		if (unlikely(++idx == member_count))
 			idx = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_member >= member_count)
+		bd_rx_q->active_member = 0;
 
 	return num_rx_total;
 }
@@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 uint32_t burstnumberRX;
-uint32_t burstnumberTX;
+uint32_t burst_number_TX;
 
 #ifdef RTE_LIBRTE_BOND_DEBUG_ALB
 
@@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
-	uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+	uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
 
-	uint16_t num_of_slaves;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_members;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	uint16_t num_tx_total = 0, num_tx_slave;
+	uint16_t num_tx_total = 0, num_tx_member;
 
-	static int slave_idx = 0;
-	int i, cslave_idx = 0, tx_fail_total = 0;
+	static int member_idx;
+	int i, cmember_idx = 0, tx_fail_total = 0;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_members = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * num_of_members);
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return num_tx_total;
 
-	/* Populate slaves mbuf with which packets are to be sent on it  */
+	/* Populate each member's mbuf array with the packets to be sent on it */
 	for (i = 0; i < nb_pkts; i++) {
-		cslave_idx = (slave_idx + i) % num_of_slaves;
-		slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+		cmember_idx = (member_idx + i) % num_of_members;
+		member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
 	}
 
-	/* increment current slave index so the next call to tx burst starts on the
-	 * next slave */
-	slave_idx = ++cslave_idx;
+	/*
+	 * increment current member index so the next call to tx burst starts on the
+	 * next member.
+	 */
+	member_idx = ++cmember_idx;
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < num_of_slaves; i++) {
-		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_prepare(slaves[i],
-					bd_tx_q->queue_id, slave_bufs[i],
-					slave_nb_pkts[i]);
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					slave_bufs[i], num_tx_slave);
+	/* Send packet burst on each member device */
+	for (i = 0; i < num_of_members; i++) {
+		if (member_nb_pkts[i] > 0) {
+			num_tx_member = rte_eth_tx_prepare(members[i],
+					bd_tx_q->queue_id, member_bufs[i],
+					member_nb_pkts[i]);
+			num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
+					member_bufs[i], num_tx_member);
 
 			/* if tx burst fails move packets to end of bufs */
-			if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
-				int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+			if (unlikely(num_tx_member < member_nb_pkts[i])) {
+				int tx_fail_member = member_nb_pkts[i] - num_tx_member;
 
-				tx_fail_total += tx_fail_slave;
+				tx_fail_total += tx_fail_member;
 
 				memcpy(&bufs[nb_pkts - tx_fail_total],
-				       &slave_bufs[i][num_tx_slave],
-				       tx_fail_slave * sizeof(bufs[0]));
+				       &member_bufs[i][num_tx_member],
+				       tx_fail_member * sizeof(bufs[0]));
 			}
-			num_tx_total += num_tx_slave;
+			num_tx_total += num_tx_member;
 		}
 	}
 
@@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	if (internals->active_slave_count < 1)
+	if (internals->active_member_count < 1)
 		return 0;
 
 	nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint32_t hash;
@@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 
 		hash = ether_hash(eth_hdr);
 
-		slaves[i] = (hash ^= hash >> 8) % slave_count;
+		members[i] = (hash ^= hash >> 8) % member_count;
 	}
 }
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	uint16_t i;
 	struct rte_ether_hdr *eth_hdr;
@@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		members[i] = hash % member_count;
 	}
 }
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint16_t proto;
@@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		members[i] = hash % member_count;
 	}
 }
 
-struct bwg_slave {
+struct bwg_member {
 	uint64_t bwg_left_int;
 	uint64_t bwg_left_remainder;
-	uint16_t slave;
+	uint16_t member;
 };
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_member(struct bond_dev_private *internals) {
 	int i;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		tlb_last_obytets[internals->active_slaves[i]] = 0;
-	}
+	for (i = 0; i < internals->active_member_count; i++)
+		tlb_last_obytets[internals->active_members[i]] = 0;
 }
 
 static int
 bandwidth_cmp(const void *a, const void *b)
 {
-	const struct bwg_slave *bwg_a = a;
-	const struct bwg_slave *bwg_b = b;
+	const struct bwg_member *bwg_a = a;
+	const struct bwg_member *bwg_b = b;
 	int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
 	int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
 			(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
 
 static void
 bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
-		struct bwg_slave *bwg_slave)
+		struct bwg_member *bwg_member)
 {
 	struct rte_eth_link link_status;
 	int ret;
 
 	ret = rte_eth_link_get_nowait(port_id, &link_status);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+		RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
 			     port_id, rte_strerror(-ret));
 		return;
 	}
@@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
 	if (link_bwg == 0)
 		return;
 	link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
-	bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
-	bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+	bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
+	bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
 }
 
 static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_member_cb(void *arg)
 {
 	struct bond_dev_private *internals = arg;
-	struct rte_eth_stats slave_stats;
-	struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	struct rte_eth_stats member_stats;
+	struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 	uint64_t tx_bytes;
 
 	uint8_t update_stats = 0;
-	uint16_t slave_id;
+	uint16_t member_id;
 	uint16_t i;
 
-	internals->slave_update_idx++;
+	internals->member_update_idx++;
 
 
-	if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+	if (internals->member_update_idx >= REORDER_PERIOD_MS)
 		update_stats = 1;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		rte_eth_stats_get(slave_id, &slave_stats);
-		tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
-		bandwidth_left(slave_id, tx_bytes,
-				internals->slave_update_idx, &bwg_array[i]);
-		bwg_array[i].slave = slave_id;
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		rte_eth_stats_get(member_id, &member_stats);
+		tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
+		bandwidth_left(member_id, tx_bytes,
+				internals->member_update_idx, &bwg_array[i]);
+		bwg_array[i].member = member_id;
 
 		if (update_stats) {
-			tlb_last_obytets[slave_id] = slave_stats.obytes;
+			tlb_last_obytets[member_id] = member_stats.obytes;
 		}
 	}
 
 	if (update_stats == 1)
-		internals->slave_update_idx = 0;
+		internals->member_update_idx = 0;
 
-	slave_count = i;
-	qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
-	for (i = 0; i < slave_count; i++)
-		internals->tlb_slaves_order[i] = bwg_array[i].slave;
+	member_count = i;
+	qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
+	for (i = 0; i < member_count; i++)
+		internals->tlb_members_order[i] = bwg_array[i].member;
 
-	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
 			(struct bond_dev_private *)internals);
 }
 
@@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint16_t num_tx_total = 0, num_tx_prep;
 	uint16_t i, j;
 
-	uint16_t num_of_slaves = internals->active_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_members = internals->active_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	struct rte_ether_hdr *ether_hdr;
-	struct rte_ether_addr primary_slave_addr;
-	struct rte_ether_addr active_slave_addr;
+	struct rte_ether_addr primary_member_addr;
+	struct rte_ether_addr active_member_addr;
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return num_tx_total;
 
-	memcpy(slaves, internals->tlb_slaves_order,
-				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+	memcpy(members, internals->tlb_members_order,
+				sizeof(internals->tlb_members_order[0]) * num_of_members);
 
 
-	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
 
 	if (nb_pkts > 3) {
 		for (i = 0; i < 3; i++)
 			rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
 	}
 
-	for (i = 0; i < num_of_slaves; i++) {
-		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+	for (i = 0; i < num_of_members; i++) {
+		rte_eth_macaddr_get(members[i], &active_member_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
 				rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			ether_hdr = rte_pktmbuf_mtod(bufs[j],
 						struct rte_ether_hdr *);
 			if (rte_is_same_ether_addr(&ether_hdr->src_addr,
-							&primary_slave_addr))
-				rte_ether_addr_copy(&active_slave_addr,
+							&primary_member_addr))
+				rte_ether_addr_copy(&active_member_addr,
 						&ether_hdr->src_addr);
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
-					mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+					mode6_debug("TX IPv4:", ether_hdr, members[i],
+						&burst_number_TX);
 #endif
 		}
 
-		num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+		num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, nb_pkts - num_tx_total);
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, num_tx_prep);
 
 		if (num_tx_total == nb_pkts)
@@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 void
 bond_tlb_disable(struct bond_dev_private *internals)
 {
-	rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+	rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
 }
 
 void
 bond_tlb_enable(struct bond_dev_private *internals)
 {
-	bond_ethdev_update_tlb_slave_cb(internals);
+	bond_ethdev_update_tlb_member_cb(internals);
 }
 
 static uint16_t
@@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct client_data *client_info;
 
 	/*
-	 * We create transmit buffers for every slave and one additional to send
+	 * We create transmit buffers for every member and one additional to send
 	 * through tlb. In worst case every packet will be send on one port.
 	 */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
-	uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+	uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
 
 	/*
 	 * We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	uint16_t num_send, num_not_send = 0;
 	uint16_t num_tx_total = 0;
-	uint16_t slave_idx;
+	uint16_t member_idx;
 
 	int i, j;
 
@@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		offset = get_vlan_offset(eth_h, &ether_type);
 
 		if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
-			slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+			member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
 
 			/* Change src mac in eth header */
-			rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+			rte_eth_macaddr_get(member_idx, &eth_h->src_addr);
 
-			/* Add packet to slave tx buffer */
-			slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
-			slave_bufs_pkts[slave_idx]++;
+			/* Add packet to member tx buffer */
+			member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
+			member_bufs_pkts[member_idx]++;
 		} else {
 			/* If packet is not ARP, send it with TLB policy */
-			slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+			member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
 					bufs[i];
-			slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+			member_bufs_pkts[RTE_MAX_ETHPORTS]++;
 		}
 	}
 
@@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			client_info = &internals->mode6.client_table[i];
 
 			if (client_info->in_use) {
-				/* Allocate new packet to send ARP update on current slave */
+				/* Allocate new packet to send ARP update on current member */
 				upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
 				if (upd_pkt == NULL) {
 					RTE_BOND_LOG(ERR,
@@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				upd_pkt->data_len = pkt_size;
 				upd_pkt->pkt_len = pkt_size;
 
-				slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+				member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
 						internals);
 
 				/* Add packet to update tx buffer */
-				update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
-				update_bufs_pkts[slave_idx]++;
+				update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
+				update_bufs_pkts[member_idx]++;
 			}
 		}
 		internals->mode6.ntt = 0;
 	}
 
-	/* Send ARP packets on proper slaves */
+	/* Send ARP packets on proper members */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		if (slave_bufs_pkts[i] > 0) {
+		if (member_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
-					slave_bufs[i], slave_bufs_pkts[i]);
+					member_bufs[i], member_bufs_pkts[i]);
 			num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
-					slave_bufs[i], num_send);
-			for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+					member_bufs[i], num_send);
+			for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
 				bufs[nb_pkts - 1 - num_not_send - j] =
-						slave_bufs[i][nb_pkts - 1 - j];
+						member_bufs[i][nb_pkts - 1 - j];
 			}
 
 			num_tx_total += num_send;
-			num_not_send += slave_bufs_pkts[i] - num_send;
+			num_not_send += member_bufs_pkts[i] - num_send;
 
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 	/* Print TX stats including update packets */
-			for (j = 0; j < slave_bufs_pkts[i]; j++) {
-				eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+			for (j = 0; j < member_bufs_pkts[i]; j++) {
+				eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
 							struct rte_ether_hdr *);
-				mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
+				mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
 			}
 #endif
 		}
 	}
 
-	/* Send update packets on proper slaves */
+	/* Send update packets on proper members */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
 		if (update_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			for (j = 0; j < update_bufs_pkts[i]; j++) {
 				eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
 							struct rte_ether_hdr *);
-				mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
+				mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
 			}
 #endif
 		}
 	}
 
 	/* Send non-ARP packets using tlb policy */
-	if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+	if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
 		num_send = bond_ethdev_tx_burst_tlb(queue,
-				slave_bufs[RTE_MAX_ETHPORTS],
-				slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+				member_bufs[RTE_MAX_ETHPORTS],
+				member_bufs_pkts[RTE_MAX_ETHPORTS]);
 
-		for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+		for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
 			bufs[nb_pkts - 1 - num_not_send - j] =
-					slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+					member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
 		}
 
 		num_tx_total += num_send;
@@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 static inline uint16_t
 tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
-		 uint16_t *slave_port_ids, uint16_t slave_count)
+		 uint16_t *member_port_ids, uint16_t member_count)
 {
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	/* Array to sort mbufs for transmission on each slave into */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
-	/* Number of mbufs for transmission on each slave */
-	uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
-	/* Mapping array generated by hash function to map mbufs to slaves */
-	uint16_t bufs_slave_port_idxs[nb_bufs];
+	/* Array to sort mbufs for transmission on each member into */
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+	/* Number of mbufs for transmission on each member */
+	uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+	/* Mapping array generated by hash function to map mbufs to members */
+	uint16_t bufs_member_port_idxs[nb_bufs];
 
-	uint16_t slave_tx_count;
+	uint16_t member_tx_count;
 	uint16_t total_tx_count = 0, total_tx_fail_count = 0;
 
 	uint16_t i;
 
 	/*
-	 * Populate slaves mbuf with the packets which are to be sent on it
-	 * selecting output slave using hash based on xmit policy
+	 * Populate each member's mbuf array with the packets to be sent on it,
+	 * selecting the output member using a hash based on the xmit policy
 	 */
-	internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
-			bufs_slave_port_idxs);
+	internals->burst_xmit_hash(bufs, nb_bufs, member_count,
+			bufs_member_port_idxs);
 
 	for (i = 0; i < nb_bufs; i++) {
-		/* Populate slave mbuf arrays with mbufs for that slave. */
-		uint16_t slave_idx = bufs_slave_port_idxs[i];
+		/* Populate member mbuf arrays with mbufs for that member. */
+		uint16_t member_idx = bufs_member_port_idxs[i];
 
-		slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+		member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
 	}
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < slave_count; i++) {
-		if (slave_nb_bufs[i] == 0)
+	/* Send packet burst on each member device */
+	for (i = 0; i < member_count; i++) {
+		if (member_nb_bufs[i] == 0)
 			continue;
 
-		slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_nb_bufs[i]);
-		slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_tx_count);
+		member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
+				bd_tx_q->queue_id, member_bufs[i],
+				member_nb_bufs[i]);
+		member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+				bd_tx_q->queue_id, member_bufs[i],
+				member_tx_count);
 
-		total_tx_count += slave_tx_count;
+		total_tx_count += member_tx_count;
 
 		/* If tx burst fails move packets to end of bufs */
-		if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
-			int slave_tx_fail_count = slave_nb_bufs[i] -
-					slave_tx_count;
-			total_tx_fail_count += slave_tx_fail_count;
+		if (unlikely(member_tx_count < member_nb_bufs[i])) {
+			int member_tx_fail_count = member_nb_bufs[i] -
+					member_tx_count;
+			total_tx_fail_count += member_tx_fail_count;
 			memcpy(&bufs[nb_bufs - total_tx_fail_count],
-			       &slave_bufs[i][slave_tx_count],
-			       slave_tx_fail_count * sizeof(bufs[0]));
+			       &member_bufs[i][member_tx_count],
+			       member_tx_fail_count * sizeof(bufs[0]));
 		}
 	}
 
@@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting
 	 */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	member_count = internals->active_member_count;
+	if (unlikely(member_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
-	return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
-				slave_count);
+	memcpy(member_port_ids, internals->active_members,
+			sizeof(member_port_ids[0]) * member_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
+				member_count);
 }
 
 static inline uint16_t
@@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 
-	uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t dist_slave_count;
+	uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t dist_member_count;
 
-	uint16_t slave_tx_count;
+	uint16_t member_tx_count;
 
 	uint16_t i;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	member_count = internals->active_member_count;
+	if (unlikely(member_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
+	memcpy(member_port_ids, internals->active_members,
+			sizeof(member_port_ids[0]) * member_count);
 
 	if (dedicated_txq)
 		goto skip_tx_ring;
 
 	/* Check for LACP control packets and send if available */
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	for (i = 0; i < member_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
 		struct rte_mbuf *ctrl_pkt = NULL;
 
 		if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 
 		if (rte_ring_dequeue(port->tx_ring,
 				     (void **)&ctrl_pkt) != -ENOENT) {
-			slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+			member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
 					bd_tx_q->queue_id, &ctrl_pkt, 1);
-			slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-					bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+			member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+					bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
 			/*
 			 * re-enqueue LAG control plane packets to buffering
 			 * ring if transmission fails so the packet isn't lost.
 			 */
-			if (slave_tx_count != 1)
+			if (member_tx_count != 1)
 				rte_ring_enqueue(port->tx_ring,	ctrl_pkt);
 		}
 	}
@@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	dist_slave_count = 0;
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	dist_member_count = 0;
+	for (i = 0; i < member_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
 
 		if (ACTOR_STATE(port, DISTRIBUTING))
-			dist_slave_port_ids[dist_slave_count++] =
-					slave_port_ids[i];
+			dist_member_port_ids[dist_member_count++] =
+					member_port_ids[i];
 	}
 
-	if (unlikely(dist_slave_count < 1))
+	if (unlikely(dist_member_count < 1))
 		return 0;
 
-	return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
-				dist_slave_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
+				dist_member_count);
 }
 
 static uint16_t
@@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	uint8_t tx_failed_flag = 0;
-	uint16_t num_of_slaves;
+	uint16_t num_of_members;
 
 	uint16_t max_nb_of_tx_pkts = 0;
 
-	int slave_tx_total[RTE_MAX_ETHPORTS];
-	int i, most_successful_tx_slave = -1;
+	int member_tx_total[RTE_MAX_ETHPORTS];
+	int i, most_successful_tx_member = -1;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_members = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * num_of_members);
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return 0;
 
 	/* It is rare to bond different PMDs together, so just call tx-prepare once */
-	nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+	nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
 
 	/* Increment reference count on mbufs */
 	for (i = 0; i < nb_pkts; i++)
-		rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+		rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
 
-	/* Transmit burst on each active slave */
-	for (i = 0; i < num_of_slaves; i++) {
-		slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+	/* Transmit burst on each active member */
+	for (i = 0; i < num_of_members; i++) {
+		member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
 					bufs, nb_pkts);
 
-		if (unlikely(slave_tx_total[i] < nb_pkts))
+		if (unlikely(member_tx_total[i] < nb_pkts))
 			tx_failed_flag = 1;
 
-		/* record the value and slave index for the slave which transmits the
+		/* record the value and member index for the member which transmits the
 		 * maximum number of packets */
-		if (slave_tx_total[i] > max_nb_of_tx_pkts) {
-			max_nb_of_tx_pkts = slave_tx_total[i];
-			most_successful_tx_slave = i;
+		if (member_tx_total[i] > max_nb_of_tx_pkts) {
+			max_nb_of_tx_pkts = member_tx_total[i];
+			most_successful_tx_member = i;
 		}
 	}
 
-	/* if slaves fail to transmit packets from burst, the calling application
+	/* if members fail to transmit packets from burst, the calling application
 	 * is not expected to know about multiple references to packets so we must
-	 * handle failures of all packets except those of the most successful slave
+	 * handle failures of all packets except those of the most successful member
 	 */
 	if (unlikely(tx_failed_flag))
-		for (i = 0; i < num_of_slaves; i++)
-			if (i != most_successful_tx_slave)
-				while (slave_tx_total[i] < nb_pkts)
-					rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+		for (i = 0; i < num_of_members; i++)
+			if (i != most_successful_tx_member)
+				while (member_tx_total[i] < nb_pkts)
+					rte_pktmbuf_free(bufs[member_tx_total[i]++]);
 
 	return max_nb_of_tx_pkts;
 }
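
For context (illustrative, not part of the patch): the broadcast fan-out above relies on mbuf reference counting, bumping the refcnt by num_of_members - 1 so that each member's TX path can free the same packet independently. A minimal sketch of that pattern, with hypothetical names and n_targets >= 1 assumed:

	static void
	broadcast_one(struct rte_mbuf *m, const uint16_t *targets, uint16_t n_targets)
	{
		uint16_t i;

		/* one extra reference per additional target */
		rte_pktmbuf_refcnt_update(m, n_targets - 1);

		for (i = 0; i < n_targets; i++) {
			/* on failure, drop the reference held for this target */
			if (rte_eth_tx_burst(targets[i], 0, &m, 1) == 0)
				rte_pktmbuf_free(m);
		}
	}
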
 
 static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
 		/**
 		 * If in mode 4 then save the link properties of the first
-		 * slave, all subsequent slaves must match these properties
+		 * member, all subsequent members must match these properties
 		 */
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
 
-		bond_link->link_autoneg = slave_link->link_autoneg;
-		bond_link->link_duplex = slave_link->link_duplex;
-		bond_link->link_speed = slave_link->link_speed;
+		bond_link->link_autoneg = member_link->link_autoneg;
+		bond_link->link_duplex = member_link->link_duplex;
+		bond_link->link_speed = member_link->link_speed;
 	} else {
 		/**
 		 * In any other mode the link properties are set to default
@@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 
 static int
 link_properties_valid(struct rte_eth_dev *ethdev,
-		struct rte_eth_link *slave_link)
+		struct rte_eth_link *member_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
 
-		if (bond_link->link_duplex != slave_link->link_duplex ||
-			bond_link->link_autoneg != slave_link->link_autoneg ||
-			bond_link->link_speed != slave_link->link_speed)
+		if (bond_link->link_duplex != member_link->link_duplex ||
+			bond_link->link_autoneg != member_link->link_autoneg ||
+			bond_link->link_speed != member_link->link_speed)
 			return -1;
 	}
 
@@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
 static const struct rte_ether_addr null_mac_addr;
 
 /*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the member
  */
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id)
 {
 	int i, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+		ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i > 0; i--)
-				rte_eth_dev_mac_addr_remove(slave_port_id,
+				rte_eth_dev_mac_addr_remove(member_port_id,
 					&bonded_eth_dev->data->mac_addrs[i]);
 			return ret;
 		}
@@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 /*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the member
  */
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id)
 {
 	int i, rc, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+		ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
 		/* save only the first error */
 		if (ret < 0 && rc == 0)
 			rc = ret;
@@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
 {
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 	bool set;
 	int i;
 
-	/* Update slave devices MAC addresses */
-	if (internals->slave_count < 1)
+	/* Update member devices MAC addresses */
+	if (internals->member_count < 1)
 		return -1;
 
 	switch (internals->mode) {
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
-		for (i = 0; i < internals->slave_count; i++) {
+		for (i = 0; i < internals->member_count; i++) {
 			if (rte_eth_dev_default_mac_addr_set(
-					internals->slaves[i].port_id,
+					internals->members[i].port_id,
 					bonded_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-						internals->slaves[i].port_id);
+						internals->members[i].port_id);
 				return -1;
 			}
 		}
@@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 	case BONDING_MODE_ALB:
 	default:
 		set = true;
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id ==
+		for (i = 0; i < internals->member_count; i++) {
+			if (internals->members[i].port_id ==
 					internals->current_primary_port) {
 				if (rte_eth_dev_default_mac_addr_set(
 						internals->current_primary_port,
@@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 				}
 			} else {
 				if (rte_eth_dev_default_mac_addr_set(
-						internals->slaves[i].port_id,
-						&internals->slaves[i].persisted_mac_addr)) {
+						internals->members[i].port_id,
+						&internals->members[i].persisted_mac_addr)) {
 					RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-							internals->slaves[i].port_id);
+							internals->members[i].port_id);
 				}
 			}
 		}
@@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
 
 
 static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	int errval = 0;
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
-	struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+	struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
 
 	if (port->slow_pool == NULL) {
 		char mem_name[256];
-		int slave_id = slave_eth_dev->data->port_id;
+		int member_id = member_eth_dev->data->port_id;
 
-		snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
-				slave_id);
+		snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
+				member_id);
 		port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
 			250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-			slave_eth_dev->data->numa_node);
+			member_eth_dev->data->numa_node);
 
 		/* Any memory allocation failure in initialization is critical because
 		 * resources can't be freed, so reinitialization is impossible. */
 		if (port->slow_pool == NULL) {
-			rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-				slave_id, mem_name, rte_strerror(rte_errno));
+			rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+				member_id, mem_name, rte_strerror(rte_errno));
 		}
 	}
 
 	if (internals->mode4.dedicated_queues.enabled == 1) {
 		/* Configure slow Rx queue */
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.rx_qid, 128,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_eth_dev->data->port_id),
 				NULL, port->slow_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id,
+					member_eth_dev->data->port_id,
 					internals->mode4.dedicated_queues.rx_qid,
 					errval);
 			return errval;
 		}
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid, 512,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_eth_dev->data->port_id),
 				NULL);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id,
+				member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				errval);
 			return errval;
@@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
@@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 
-	/* Stop slave */
-	errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+	/* Stop member */
+	errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
 	if (errval != 0)
 		RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
-			     slave_eth_dev->data->port_id, errval);
+			     member_eth_dev->data->port_id, errval);
 
-	/* Enable interrupts on slave device if supported */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+	/* Enable interrupts on member device if supported */
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+		member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
-	/* If RSS is enabled for bonding, try to enable it for slaves  */
+	/* If RSS is enabled for bonding, try to enable it for members  */
 	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
 					internals->rss_key;
 
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
 				bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		member_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	} else {
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+		member_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	}
 
-	slave_eth_dev->data->dev_conf.rxmode.mtu =
+	member_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
-	slave_eth_dev->data->dev_conf.link_speeds =
+	member_eth_dev->data->dev_conf.link_speeds =
 			bonded_eth_dev->data->dev_conf.link_speeds;
 
-	slave_eth_dev->data->dev_conf.txmode.offloads =
+	member_eth_dev->data->dev_conf.txmode.offloads =
 			bonded_eth_dev->data->dev_conf.txmode.offloads;
 
-	slave_eth_dev->data->dev_conf.rxmode.offloads =
+	member_eth_dev->data->dev_conf.rxmode.offloads =
 			bonded_eth_dev->data->dev_conf.rxmode.offloads;
 
 	nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* Configure device */
-	errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
 			nb_rx_queues, nb_tx_queues,
-			&(slave_eth_dev->data->dev_conf));
+			&member_eth_dev->data->dev_conf);
 	if (errval != 0) {
-		RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+		RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
+				member_eth_dev->data->port_id, errval);
 		return errval;
 	}
 
-	errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
 				     bonded_eth_dev->data->mtu);
 	if (errval != 0 && errval != -ENOTSUP) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_eth_dev->data->port_id, errval);
 		return errval;
 	}
 	return 0;
 }
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	int errval = 0;
 	struct bond_rx_queue *bd_rx_q;
@@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	uint16_t q_id;
 	struct rte_flow_error flow_error;
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+	uint16_t member_port_id = member_eth_dev->data->port_id;
 
 	/* Setup Rx Queues */
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
 		bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_rx_queue_setup(member_port_id, q_id,
 				bd_rx_q->nb_rx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_port_id),
 				&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id, q_id, errval);
+					member_port_id, q_id, errval);
 			return errval;
 		}
 	}
@@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_tx_queue_setup(member_port_id, q_id,
 				bd_tx_q->nb_tx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_port_id),
 				&bd_tx_q->tx_conf);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id, q_id, errval);
+				member_port_id, q_id, errval);
 			return errval;
 		}
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
-		if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+		if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
 				!= 0)
 			return errval;
 
 		errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				member_port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 			return errval;
 		}
 
-		if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
-			errval = rte_flow_destroy(slave_eth_dev->data->port_id,
-					internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+		if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+			errval = rte_flow_destroy(member_port_id,
+					internals->mode4.dedicated_queues.flow[member_port_id],
 					&flow_error);
 			RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 		}
 	}
 
 	/* Start device */
-	errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+	errval = rte_eth_dev_start(member_port_id);
 	if (errval != 0) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 		return -1;
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
 		errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				member_port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 			return errval;
 		}
 	}
@@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 
 		internals = bonded_eth_dev->data->dev_private;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+		for (i = 0; i < internals->member_count; i++) {
+			if (internals->members[i].port_id == member_port_id) {
 				errval = rte_eth_dev_rss_reta_update(
-						slave_eth_dev->data->port_id,
+						member_port_id,
 						&internals->reta_conf[0],
-						internals->slaves[i].reta_size);
+						internals->members[i].reta_size);
 				if (errval != 0) {
 					RTE_BOND_LOG(WARNING,
-						     "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+						     "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
 						     " RSS Configuration for bonding may be inconsistent.",
-						     slave_eth_dev->data->port_id, errval);
+						     member_port_id, errval);
 				}
 				break;
 			}
 		}
 	}
 
-	/* If lsc interrupt is set, check initial slave's link status */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
-		slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
-		bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+	/* If lsc interrupt is set, check initial member's link status */
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+		member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
+		bond_ethdev_lsc_event_callback(member_port_id,
 			RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
 			NULL);
 	}
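
Aside (illustrative only): the reta_conf re-application above means an application can keep programming the RSS indirection table on the bonding port and have it pushed to the members when they are (re)started. A sketch of that usage, assuming the port reports a 64-entry table (real devices advertise reta_size in rte_eth_dev_info) and nb_rx_queues > 0; needs <string.h>, <stdint.h> and <rte_ethdev.h>:

	static int
	bond_reta_spread(uint16_t bond_port_id, uint16_t nb_rx_queues)
	{
		struct rte_eth_rss_reta_entry64 reta_conf[1];
		unsigned int i;

		memset(reta_conf, 0, sizeof(reta_conf));
		reta_conf[0].mask = UINT64_MAX;
		for (i = 0; i < RTE_ETH_RETA_GROUP_SIZE; i++)
			reta_conf[0].reta[i] = i % nb_rx_queues;

		/* assumes the bonding port reports reta_size == 64 */
		return rte_eth_dev_rss_reta_update(bond_port_id, reta_conf,
						   RTE_ETH_RETA_GROUP_SIZE);
	}
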
@@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 }
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+member_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev)
 {
 	uint16_t i;
 
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id ==
-				slave_eth_dev->data->port_id)
+	for (i = 0; i < internals->member_count; i++)
+		if (internals->members[i].port_id ==
+				member_eth_dev->data->port_id)
 			break;
 
-	if (i < (internals->slave_count - 1)) {
+	if (i < (internals->member_count - 1)) {
 		struct rte_flow *flow;
 
-		memmove(&internals->slaves[i], &internals->slaves[i + 1],
-				sizeof(internals->slaves[0]) *
-				(internals->slave_count - i - 1));
+		memmove(&internals->members[i], &internals->members[i + 1],
+				sizeof(internals->members[0]) *
+				(internals->member_count - i - 1));
 		TAILQ_FOREACH(flow, &internals->flow_list, next) {
 			memmove(&flow->flows[i], &flow->flows[i + 1],
 				sizeof(flow->flows[0]) *
-				(internals->slave_count - i - 1));
-			flow->flows[internals->slave_count - 1] = NULL;
+				(internals->member_count - i - 1));
+			flow->flows[internals->member_count - 1] = NULL;
 		}
 	}
 
-	internals->slave_count--;
+	internals->member_count--;
 
-	/* force reconfiguration of slave interfaces */
-	rte_eth_dev_internal_reset(slave_eth_dev);
+	/* force reconfiguration of member interfaces */
+	rte_eth_dev_internal_reset(member_eth_dev);
 }
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_member_link_status_change_monitor(void *cb_arg);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+member_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev)
 {
-	struct bond_slave_details *slave_details =
-			&internals->slaves[internals->slave_count];
+	struct bond_member_details *member_details =
+			&internals->members[internals->member_count];
 
-	slave_details->port_id = slave_eth_dev->data->port_id;
-	slave_details->last_link_status = 0;
+	member_details->port_id = member_eth_dev->data->port_id;
+	member_details->last_link_status = 0;
 
-	/* Mark slave devices that don't support interrupts so we can
+	/* Mark member devices that don't support interrupts so we can
 	 * compensate when we start the bond
 	 */
-	if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
-		slave_details->link_status_poll_enabled = 1;
-	}
+	if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
+		member_details->link_status_poll_enabled = 1;
 
-	slave_details->link_status_wait_to_complete = 0;
+	member_details->link_status_wait_to_complete = 0;
 	/* clean tlb_last_obytes when adding port for bonding device */
-	memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+	memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
 			sizeof(struct rte_ether_addr));
 }
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id)
+		uint16_t member_port_id)
 {
 	int i;
 
-	if (internals->active_slave_count < 1)
-		internals->current_primary_port = slave_port_id;
+	if (internals->active_member_count < 1)
+		internals->current_primary_port = member_port_id;
 	else
-		/* Search bonded device slave ports for new proposed primary port */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			if (internals->active_slaves[i] == slave_port_id)
-				internals->current_primary_port = slave_port_id;
+		/* Search bonded device member ports for new proposed primary port */
+		for (i = 0; i < internals->active_member_count; i++) {
+			if (internals->active_members[i] == member_port_id)
+				internals->current_primary_port = member_port_id;
 		}
 }
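
For application writers (illustrative, not part of the diff): with this series the control-path API follows the member naming, e.g. rte_eth_bond_member_remove() used in bond_ethdev_cfg_cleanup() below; the add counterpart is assumed here to be named rte_eth_bond_member_add() accordingly. A minimal setup sketch, with member_port a hypothetical port id and error handling shortened; needs <stdlib.h>, <rte_debug.h>, <rte_ethdev.h> and <rte_eth_bond.h>:

	int bond_id = rte_eth_bond_create("net_bonding0",
					  BONDING_MODE_ACTIVE_BACKUP, 0);
	if (bond_id < 0)
		rte_exit(EXIT_FAILURE, "cannot create bonded device\n");
	if (rte_eth_bond_member_add(bond_id, member_port) != 0)
		rte_exit(EXIT_FAILURE, "cannot add member %u\n", member_port);
	rte_eth_bond_primary_set(bond_id, member_port);
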
 
@@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	struct bond_dev_private *internals;
 	int i;
 
-	/* slave eth dev will be started by bonded device */
+	/* member eth dev will be started by bonded device */
 	if (check_for_bonded_ethdev(eth_dev)) {
-		RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+		RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
 				eth_dev->data->port_id);
 		return -1;
 	}
@@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	if (internals->slave_count == 0) {
-		RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+	if (internals->member_count == 0) {
+		RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
 		goto out_err;
 	}
 
 	if (internals->user_defined_mac == 0) {
 		struct rte_ether_addr *new_mac_addr = NULL;
 
-		for (i = 0; i < internals->slave_count; i++)
-			if (internals->slaves[i].port_id == internals->primary_port)
-				new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+		for (i = 0; i < internals->member_count; i++)
+			if (internals->members[i].port_id == internals->primary_port)
+				new_mac_addr = &internals->members[i].persisted_mac_addr;
 
 		if (new_mac_addr == NULL)
 			goto out_err;
@@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	}
 
 
-	/* Reconfigure each slave device if starting bonded device */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(eth_dev, slave_ethdev) != 0) {
+	/* Reconfigure each member device if starting bonded device */
+	for (i = 0; i < internals->member_count; i++) {
+		struct rte_eth_dev *member_ethdev =
+				&(rte_eth_devices[internals->members[i].port_id]);
+		if (member_configure(eth_dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to reconfigure slave device (%d)",
+				"bonded port (%d) failed to reconfigure member device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			goto out_err;
 		}
-		if (slave_start(eth_dev, slave_ethdev) != 0) {
+		if (member_start(eth_dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to start slave device (%d)",
+				"bonded port (%d) failed to start member device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			goto out_err;
 		}
-		/* We will need to poll for link status if any slave doesn't
+		/* We will need to poll for link status if any member doesn't
 		 * support interrupts
 		 */
-		if (internals->slaves[i].link_status_poll_enabled)
+		if (internals->members[i].link_status_poll_enabled)
 			internals->link_status_polling_enabled = 1;
 	}
 
@@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	if (internals->link_status_polling_enabled) {
 		rte_eal_alarm_set(
 			internals->link_status_polling_interval_ms * 1000,
-			bond_ethdev_slave_link_status_change_monitor,
+			bond_ethdev_member_link_status_change_monitor,
 			(void *)&rte_eth_devices[internals->port_id]);
 	}
 
-	/* Update all slave devices MACs*/
-	if (mac_address_slaves_update(eth_dev) != 0)
+	/* Update all member devices' MACs */
+	if (mac_address_members_update(eth_dev) != 0)
 		goto out_err;
 
 	if (internals->user_defined_primary_port)
@@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 		bond_mode_8023ad_stop(eth_dev);
 
 		/* Discard all messages to/from mode 4 state machines */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+		for (i = 0; i < internals->active_member_count; i++) {
+			port = &bond_mode_8023ad_ports[internals->active_members[i]];
 
 			RTE_ASSERT(port->rx_ring != NULL);
 			while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 	if (internals->mode == BONDING_MODE_TLB ||
 			internals->mode == BONDING_MODE_ALB) {
 		bond_tlb_disable(internals);
-		for (i = 0; i < internals->active_slave_count; i++)
-			tlb_last_obytets[internals->active_slaves[i]] = 0;
+		for (i = 0; i < internals->active_member_count; i++)
+			tlb_last_obytets[internals->active_members[i]] = 0;
 	}
 
 	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t slave_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++) {
+		uint16_t member_id = internals->members[i].port_id;
 
-		internals->slaves[i].last_link_status = 0;
-		ret = rte_eth_dev_stop(slave_id);
+		internals->members[i].last_link_status = 0;
+		ret = rte_eth_dev_stop(member_id);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_id);
+				     member_id);
 			return ret;
 		}
 
-		/* active slaves need to be deactivated. */
-		if (find_slave_by_id(internals->active_slaves,
-				internals->active_slave_count, slave_id) !=
-					internals->active_slave_count)
-			deactivate_slave(eth_dev, slave_id);
+		/* active members need to be deactivated. */
+		if (find_member_by_id(internals->active_members,
+				internals->active_member_count, member_id) !=
+					internals->active_member_count)
+			deactivate_member(eth_dev, member_id);
 	}
 
 	return 0;
@@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 	/* Flush flows in all back-end devices before removing them */
 	bond_flow_ops.flush(dev, &ferror);
 
-	while (internals->slave_count != skipped) {
-		uint16_t port_id = internals->slaves[skipped].port_id;
+	while (internals->member_count != skipped) {
+		uint16_t port_id = internals->members[skipped].port_id;
 		int ret;
 
 		ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 			continue;
 		}
 
-		if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+		if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to remove port %d from bonded device %s",
 				     port_id, dev->device->name);
@@ -2246,7 +2250,7 @@ static int
 bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct bond_slave_details slave;
+	struct bond_member_details member;
 	int ret;
 
 	uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			RTE_ETHER_MAX_JUMBO_FRAME_LEN;
 
 	/* Max number of tx/rx queues that the bonded device can support is the
-	 * minimum values of the bonded slaves, as all slaves must be capable
+	 * minimum values of the bonded members, as all members must be capable
 	 * of supporting the same number of tx/rx queues.
 	 */
-	if (internals->slave_count > 0) {
-		struct rte_eth_dev_info slave_info;
+	if (internals->member_count > 0) {
+		struct rte_eth_dev_info member_info;
 		uint16_t idx;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
-			slave = internals->slaves[idx];
-			ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+		for (idx = 0; idx < internals->member_count; idx++) {
+			member = internals->members[idx];
+			ret = rte_eth_dev_info_get(member.port_id, &member_info);
 			if (ret != 0) {
 				RTE_BOND_LOG(ERR,
 					"%s: Error during getting device (port %u) info: %s\n",
 					__func__,
-					slave.port_id,
+					member.port_id,
 					strerror(-ret));
 
 				return ret;
 			}
 
-			if (slave_info.max_rx_queues < max_nb_rx_queues)
-				max_nb_rx_queues = slave_info.max_rx_queues;
+			if (member_info.max_rx_queues < max_nb_rx_queues)
+				max_nb_rx_queues = member_info.max_rx_queues;
 
-			if (slave_info.max_tx_queues < max_nb_tx_queues)
-				max_nb_tx_queues = slave_info.max_tx_queues;
+			if (member_info.max_tx_queues < max_nb_tx_queues)
+				max_nb_tx_queues = member_info.max_tx_queues;
 		}
 	}
 
@@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	uint16_t i;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
-	/* don't do this while a slave is being added */
+	/* don't do this while a member is being added */
 	rte_spinlock_lock(&internals->lock);
 
 	if (on)
@@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	else
 		rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t port_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++) {
+		uint16_t port_id = internals->members[i].port_id;
 
 		res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
 		if (res == ENOTSUP)
 			RTE_BOND_LOG(WARNING,
-				     "Setting VLAN filter on slave port %u not supported.",
+				     "Setting VLAN filter on member port %u not supported.",
 				     port_id);
 	}
 
@@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
 }
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_member_link_status_change_monitor(void *cb_arg)
 {
-	struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+	struct rte_eth_dev *bonded_ethdev, *member_ethdev;
 	struct bond_dev_private *internals;
 
-	/* Default value for polling slave found is true as we don't want to
+	/* Default value for polling member found is true as we don't want to
 	 * disable the polling thread if we cannot get the lock */
-	int i, polling_slave_found = 1;
+	int i, polling_member_found = 1;
 
 	if (cb_arg == NULL)
 		return;
@@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		!internals->link_status_polling_enabled)
 		return;
 
-	/* If device is currently being configured then don't check slaves link
+	/* If device is currently being configured then don't check members link
 	 * status, wait until next period */
 	if (rte_spinlock_trylock(&internals->lock)) {
-		if (internals->slave_count > 0)
-			polling_slave_found = 0;
+		if (internals->member_count > 0)
+			polling_member_found = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (!internals->slaves[i].link_status_poll_enabled)
+		for (i = 0; i < internals->member_count; i++) {
+			if (!internals->members[i].link_status_poll_enabled)
 				continue;
 
-			slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
-			polling_slave_found = 1;
+			member_ethdev = &rte_eth_devices[internals->members[i].port_id];
+			polling_member_found = 1;
 
-			/* Update slave link status */
-			(*slave_ethdev->dev_ops->link_update)(slave_ethdev,
-					internals->slaves[i].link_status_wait_to_complete);
+			/* Update member link status */
+			(*member_ethdev->dev_ops->link_update)(member_ethdev,
+					internals->members[i].link_status_wait_to_complete);
 
 			/* if link status has changed since last checked then call lsc
 			 * event callback */
-			if (slave_ethdev->data->dev_link.link_status !=
-					internals->slaves[i].last_link_status) {
-				bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+			if (member_ethdev->data->dev_link.link_status !=
+					internals->members[i].last_link_status) {
+				bond_ethdev_lsc_event_callback(internals->members[i].port_id,
 						RTE_ETH_EVENT_INTR_LSC,
 						&bonded_ethdev->data->port_id,
 						NULL);
@@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		rte_spinlock_unlock(&internals->lock);
 	}
 
-	if (polling_slave_found)
-		/* Set alarm to continue monitoring link status of slave ethdev's */
+	if (polling_member_found)
+		/* Set alarm to continue monitoring link status of member ethdev's */
 		rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
-				bond_ethdev_slave_link_status_change_monitor, cb_arg);
+				bond_ethdev_member_link_status_change_monitor, cb_arg);
 }
 
 static int
@@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
 
 	struct bond_dev_private *bond_ctx;
-	struct rte_eth_link slave_link;
+	struct rte_eth_link member_link;
 
 	bool one_link_update_succeeded;
 	uint32_t idx;
@@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
-			bond_ctx->active_slave_count == 0) {
+			bond_ctx->active_member_count == 0) {
 		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
@@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	case BONDING_MODE_BROADCAST:
 		/**
 		 * Setting link speed to UINT32_MAX to ensure we pick up the
-		 * value of the first active slave
+		 * value of the first active member
 		 */
 		ethdev->data->dev_link.link_speed = UINT32_MAX;
 
 		/**
-		 * link speed is minimum value of all the slaves link speed as
-		 * packet loss will occur on this slave if transmission at rates
+		 * link speed is the minimum of all the members' link speeds, as
+		 * packet loss will occur on a member if transmission at rates
 		 * greater than this is attempted
 		 */
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					  &slave_link);
+		for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+			ret = link_update(bond_ctx->active_members[idx],
+					  &member_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
 					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Member (port %u) link get failed: %s",
+					bond_ctx->active_members[idx],
 					rte_strerror(-ret));
 				return 0;
 			}
 
-			if (slave_link.link_speed <
+			if (member_link.link_speed <
 					ethdev->data->dev_link.link_speed)
 				ethdev->data->dev_link.link_speed =
-						slave_link.link_speed;
+						member_link.link_speed;
 		}
 		break;
 	case BONDING_MODE_ACTIVE_BACKUP:
-		/* Current primary slave */
-		ret = link_update(bond_ctx->current_primary_port, &slave_link);
+		/* Current primary member */
+		ret = link_update(bond_ctx->current_primary_port, &member_link);
 		if (ret < 0) {
-			RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+			RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
 				bond_ctx->current_primary_port,
 				rte_strerror(-ret));
 			return 0;
 		}
 
-		ethdev->data->dev_link.link_speed = slave_link.link_speed;
+		ethdev->data->dev_link.link_speed = member_link.link_speed;
 		break;
 	case BONDING_MODE_8023AD:
 		ethdev->data->dev_link.link_autoneg =
-				bond_ctx->mode4.slave_link.link_autoneg;
+				bond_ctx->mode4.member_link.link_autoneg;
 		ethdev->data->dev_link.link_duplex =
-				bond_ctx->mode4.slave_link.link_duplex;
+				bond_ctx->mode4.member_link.link_duplex;
 		/* fall through */
 		/* to update link speed */
 	case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	default:
 		/**
 		 * In these modes the maximum theoretical link speed is the sum
-		 * of all the slaves
+		 * of all the members
 		 */
 		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					&slave_link);
+		for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+			ret = link_update(bond_ctx->active_members[idx],
+					&member_link);
 			if (ret < 0) {
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Member (port %u) link get failed: %s",
+					bond_ctx->active_members[idx],
 					rte_strerror(-ret));
 				continue;
 			}
 
 			one_link_update_succeeded = true;
 			ethdev->data->dev_link.link_speed +=
-					slave_link.link_speed;
+					member_link.link_speed;
 		}
 
 		if (!one_link_update_succeeded) {
-			RTE_BOND_LOG(ERR, "All slaves link get failed");
+			RTE_BOND_LOG(ERR, "All members link get failed");
 			return 0;
 		}
 	}
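
For context (illustrative, not part of the patch): whichever aggregation rule applies, the minimum member speed for broadcast, the primary's speed for active-backup, or the sum for the load-sharing modes, an application simply reads the bonding port's own link. A minimal sketch, with bond_port_id assumed; needs <stdio.h> and <rte_ethdev.h>:

	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(bond_port_id, &link) == 0 &&
	    link.link_status == RTE_ETH_LINK_UP)
		printf("bond link up, %u Mbps\n", link.link_speed);
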
@@ -2602,27 +2606,27 @@ static int
 bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_eth_stats slave_stats;
+	struct rte_eth_stats member_stats;
 	int i, j;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+	for (i = 0; i < internals->member_count; i++) {
+		rte_eth_stats_get(internals->members[i].port_id, &member_stats);
 
-		stats->ipackets += slave_stats.ipackets;
-		stats->opackets += slave_stats.opackets;
-		stats->ibytes += slave_stats.ibytes;
-		stats->obytes += slave_stats.obytes;
-		stats->imissed += slave_stats.imissed;
-		stats->ierrors += slave_stats.ierrors;
-		stats->oerrors += slave_stats.oerrors;
-		stats->rx_nombuf += slave_stats.rx_nombuf;
+		stats->ipackets += member_stats.ipackets;
+		stats->opackets += member_stats.opackets;
+		stats->ibytes += member_stats.ibytes;
+		stats->obytes += member_stats.obytes;
+		stats->imissed += member_stats.imissed;
+		stats->ierrors += member_stats.ierrors;
+		stats->oerrors += member_stats.oerrors;
+		stats->rx_nombuf += member_stats.rx_nombuf;
 
 		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-			stats->q_ipackets[j] += slave_stats.q_ipackets[j];
-			stats->q_opackets[j] += slave_stats.q_opackets[j];
-			stats->q_ibytes[j] += slave_stats.q_ibytes[j];
-			stats->q_obytes[j] += slave_stats.q_obytes[j];
-			stats->q_errors[j] += slave_stats.q_errors[j];
+			stats->q_ipackets[j] += member_stats.q_ipackets[j];
+			stats->q_opackets[j] += member_stats.q_opackets[j];
+			stats->q_ibytes[j] += member_stats.q_ibytes[j];
+			stats->q_obytes[j] += member_stats.q_obytes[j];
+			stats->q_errors[j] += member_stats.q_errors[j];
 		}
 
 	}
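
Aside (illustrative): since the per-member counters are summed into the bonding port's own statistics, monitoring stays mode-agnostic. A short sketch, bond_port_id assumed; needs <inttypes.h>, <stdio.h> and <rte_ethdev.h>:

	struct rte_eth_stats stats;

	if (rte_eth_stats_get(bond_port_id, &stats) == 0)
		printf("bond rx %" PRIu64 " tx %" PRIu64 " missed %" PRIu64 "\n",
		       stats.ipackets, stats.opackets, stats.imissed);
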
@@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
 	int err;
 	int ret;
 
-	for (i = 0, err = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+	for (i = 0, err = 0; i < internals->member_count; i++) {
+		ret = rte_eth_stats_reset(internals->members[i].port_id);
 		if (ret != 0)
 			err = ret;
 	}
@@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			ret = rte_eth_promiscuous_enable(port_id);
 			if (ret != 0)
@@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
 					BOND_8023AD_FORCED_PROMISC) {
-				slave_ok++;
+				member_ok++;
 				continue;
 			}
 			ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 					"Failed to disable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As promiscuous mode is propagated to all slaves for these
+		/* As promiscuous mode is propagated to all members for these
 		 * modes, no need to update the bonding device.
 		 */
 		break;
@@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As promiscuous mode is propagated only to primary slave
+		/* As promiscuous mode is propagated only to primary member
 		 * for these modes. On an active/standby switchover, promiscuous
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary member according to bonding
 		 * device.
 		 */
 		if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			ret = rte_eth_allmulticast_enable(port_id);
 			if (ret != 0)
@@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			uint16_t port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			uint16_t port_id = internals->members[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 					"Failed to disable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * on one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As allmulticast mode is propagated to all slaves for these
+		/* As allmulticast mode is propagated to all members for these
 		 * mode, no need to update for bonding device.
 		 */
 		break;
@@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As allmulticast mode is propagated only to primary slave
+		/* As allmulticast mode is propagated only to primary member
 		 * for these mode. When active/standby switchover, allmulticast
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary member according to bonding
 		 * device.
 		 */
 		if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	int ret;
 
 	uint8_t lsc_flag = 0;
-	int valid_slave = 0;
-	uint16_t active_pos, slave_idx;
+	int valid_member = 0;
+	uint16_t active_pos, member_idx;
 	uint16_t i;
 
 	if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	if (!bonded_eth_dev->data->dev_started)
 		return rc;
 
-	/* verify that port_id is a valid slave of bonded port */
-	for (i = 0; i < internals->slave_count; i++) {
-		if (internals->slaves[i].port_id == port_id) {
-			valid_slave = 1;
-			slave_idx = i;
+	/* verify that port_id is a valid member of bonded port */
+	for (i = 0; i < internals->member_count; i++) {
+		if (internals->members[i].port_id == port_id) {
+			valid_member = 1;
+			member_idx = i;
 			break;
 		}
 	}
 
-	if (!valid_slave)
+	if (!valid_member)
 		return rc;
 
 	/* Synchronize lsc callback parallel calls either by real link event
-	 * from the slaves PMDs or by the bonding PMD itself.
+	 * from the members PMDs or by the bonding PMD itself.
 	 */
 	rte_spinlock_lock(&internals->lsc_lock);
 
 	/* Search for port in active port list */
-	active_pos = find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, port_id);
+	active_pos = find_member_by_id(internals->active_members,
+			internals->active_member_count, port_id);
 
 	ret = rte_eth_link_get_nowait(port_id, &link);
 	if (ret < 0)
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+		RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
 
 	if (ret == 0 && link.link_status) {
-		if (active_pos < internals->active_slave_count)
+		if (active_pos < internals->active_member_count)
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
 		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
-					     "for slave %d in bonding mode %d",
+					     "for member %d in bonding mode %d",
 					     port_id, internals->mode);
 		} else {
-			/* inherit slave link properties */
+			/* inherit member link properties */
 			link_properties_set(bonded_eth_dev, &link);
 		}
 
-		/* If no active slave ports then set this port to be
+		/* If no active member ports then set this port to be
 		 * the primary port.
 		 */
-		if (internals->active_slave_count < 1) {
-			/* If first active slave, then change link status */
+		if (internals->active_member_count < 1) {
+			/* If first active member, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
 								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_members_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
 
-		activate_slave(bonded_eth_dev, port_id);
+		activate_member(bonded_eth_dev, port_id);
 
 		/* If the user has defined the primary port then default to
 		 * using it.
@@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 				internals->primary_port == port_id)
 			bond_ethdev_primary_set(internals, port_id);
 	} else {
-		if (active_pos == internals->active_slave_count)
+		if (active_pos == internals->active_member_count)
 			goto link_update;
 
-		/* Remove from active slave list */
-		deactivate_slave(bonded_eth_dev, port_id);
+		/* Remove from active member list */
+		deactivate_member(bonded_eth_dev, port_id);
 
-		if (internals->active_slave_count < 1)
+		if (internals->active_member_count < 1)
 			lsc_flag = 1;
 
-		/* Update primary id, take first active slave from list or if none
+		/* Update primary id, take first active member from list or if none
 		 * available set to -1 */
 		if (port_id == internals->current_primary_port) {
-			if (internals->active_slave_count > 0)
+			if (internals->active_member_count > 0)
 				bond_ethdev_primary_set(internals,
-						internals->active_slaves[0]);
+						internals->active_members[0]);
 			else
 				internals->current_primary_port = internals->primary_port;
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_members_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
@@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 link_update:
 	/**
 	 * Update bonded device link properties after any change to active
-	 * slaves
+	 * members
 	 */
 	bond_ethdev_link_update(bonded_eth_dev, 0);
-	internals->slaves[slave_idx].last_link_status = link.link_status;
+	internals->members[member_idx].last_link_status = link.link_status;
 
 	if (lsc_flag) {
 		/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 {
 	unsigned i, j;
 	int result = 0;
-	int slave_reta_size;
+	int member_reta_size;
 	unsigned reta_count;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
@@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
 				sizeof(internals->reta_conf[0]) * reta_count);
 
-	/* Propagate RETA over slaves */
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_reta_size = internals->slaves[i].reta_size;
-		result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
-				&internals->reta_conf[0], slave_reta_size);
+	/* Propagate RETA over members */
+	for (i = 0; i < internals->member_count; i++) {
+		member_reta_size = internals->members[i].reta_size;
+		result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
+				&internals->reta_conf[0], member_reta_size);
 		if (result < 0)
 			return result;
 	}
@@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
 		bond_rss_conf.rss_key_len = internals->rss_key_len;
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
 				&bond_rss_conf);
 		if (result < 0)
 			return result;
@@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int
 bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mtu_set == NULL) {
 			rte_spinlock_unlock(&internals->lock);
 			return -ENOTSUP;
 		}
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
 		if (ret < 0) {
 			rte_spinlock_unlock(&internals->lock);
 			return ret;
@@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 			struct rte_ether_addr *mac_addr,
 			__rte_unused uint32_t index, uint32_t vmdq)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
-			 *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
+			 *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
 			ret = -ENOTSUP;
 			goto end;
 		}
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
 				mac_addr, vmdq);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i >= 0; i--)
 				rte_eth_dev_mac_addr_remove(
-					internals->slaves[i].port_id, mac_addr);
+					internals->members[i].port_id, mac_addr);
 			goto end;
 		}
 	}
@@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 static void
 bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
 			goto end;
 	}
 
 	struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
 
-	for (i = 0; i < internals->slave_count; i++)
-		rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++)
+		rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
 				mac_addr);
 
 end:
@@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
 		fprintf(f, "\n");
 	}
 
-	if (internals->slave_count > 0) {
-		fprintf(f, "\tSlaves (%u): [", internals->slave_count);
-		for (i = 0; i < internals->slave_count - 1; i++)
-			fprintf(f, "%u ", internals->slaves[i].port_id);
+	if (internals->member_count > 0) {
+		fprintf(f, "\tMembers (%u): [", internals->member_count);
+		for (i = 0; i < internals->member_count - 1; i++)
+			fprintf(f, "%u ", internals->members[i].port_id);
 
-		fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+		fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
 	} else {
-		fprintf(f, "\tSlaves: []\n");
+		fprintf(f, "\tMembers: []\n");
 	}
 
-	if (internals->active_slave_count > 0) {
-		fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
-		for (i = 0; i < internals->active_slave_count - 1; i++)
-			fprintf(f, "%u ", internals->active_slaves[i]);
+	if (internals->active_member_count > 0) {
+		fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
+		for (i = 0; i < internals->active_member_count - 1; i++)
+			fprintf(f, "%u ", internals->active_members[i]);
 
-		fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+		fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
 
 	} else {
-		fprintf(f, "\tActive Slaves: []\n");
+		fprintf(f, "\tActive Members: []\n");
 	}
 
 	if (internals->user_defined_primary_port)
 		fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
-	if (internals->slave_count > 0)
+	if (internals->member_count > 0)
 		fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
 }
 
@@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
 }
 
 static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
 {
 	char a_state[256] = { 0 };
 	char p_state[256] = { 0 };
@@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
 static void
 dump_lacp(uint16_t port_id, FILE *f)
 {
-	struct rte_eth_bond_8023ad_slave_info slave_info;
+	struct rte_eth_bond_8023ad_member_info member_info;
 	struct rte_eth_bond_8023ad_conf port_conf;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	int num_active_slaves;
+	uint16_t members[RTE_MAX_ETHPORTS];
+	int num_active_members;
 	int i, ret;
 
 	fprintf(f, "  - Lacp info:\n");
 
-	num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+	num_active_members = rte_eth_bond_active_members_get(port_id, members,
 			RTE_MAX_ETHPORTS);
-	if (num_active_slaves < 0) {
-		fprintf(f, "\tFailed to get active slave list for port %u\n",
+	if (num_active_members < 0) {
+		fprintf(f, "\tFailed to get active member list for port %u\n",
 				port_id);
 		return;
 	}
@@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
 	}
 	dump_lacp_conf(&port_conf, f);
 
-	for (i = 0; i < num_active_slaves; i++) {
-		ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
-				&slave_info);
+	for (i = 0; i < num_active_members; i++) {
+		ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
+				&member_info);
 		if (ret) {
-			fprintf(f, "\tGet slave device %u 8023ad info failed\n",
-				slaves[i]);
+			fprintf(f, "\tGet member device %u 8023ad info failed\n",
+				members[i]);
 			return;
 		}
-		fprintf(f, "\tSlave Port: %u\n", slaves[i]);
-		dump_lacp_slave(&slave_info, f);
+		fprintf(f, "\tMember Port: %u\n", members[i]);
+		dump_lacp_member(&member_info, f);
 	}
 }
 
@@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->link_down_delay_ms = 0;
 	internals->link_up_delay_ms = 0;
 
-	internals->slave_count = 0;
-	internals->active_slave_count = 0;
+	internals->member_count = 0;
+	internals->active_member_count = 0;
 	internals->rx_offload_capa = 0;
 	internals->tx_offload_capa = 0;
 	internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->rx_desc_lim.nb_align = 1;
 	internals->tx_desc_lim.nb_align = 1;
 
-	memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
-	memset(internals->slaves, 0, sizeof(internals->slaves));
+	memset(internals->active_members, 0, sizeof(internals->active_members));
+	memset(internals->members, 0, sizeof(internals->members));
 
 	TAILQ_INIT(&internals->flow_list);
 	internals->flow_isolated_valid = 0;
@@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
 	/* Parse link bonding mode */
 	if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
-				&bond_ethdev_parse_slave_mode_kvarg,
+				&bond_ethdev_parse_member_mode_kvarg,
 				&bonding_mode) != 0) {
 			RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
 					name);
@@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				PMD_BOND_AGG_MODE_KVARG,
-				&bond_ethdev_parse_slave_agg_mode_kvarg,
+				&bond_ethdev_parse_member_agg_mode_kvarg,
 				&agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 					"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
 	RTE_ASSERT(eth_dev->device == &dev->device);
 
 	internals = eth_dev->data->dev_private;
-	if (internals->slave_count != 0)
+	if (internals->member_count != 0)
 		return -EBUSY;
 
 	if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
 	return ret;
 }
 
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the member portids after all the other pdev and vdev
  * have been allocated */
 static int
 bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		if ((link_speeds &
 		    (internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
-			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
 			return -EINVAL;
 		}
 		/*
@@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				       PMD_BOND_AGG_MODE_KVARG,
-				       &bond_ethdev_parse_slave_agg_mode_kvarg,
+				       &bond_ethdev_parse_member_agg_mode_kvarg,
 				       &agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	/* Parse/add slave ports to bonded device */
-	if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
-		struct bond_ethdev_slave_ports slave_ports;
+	/* Parse/add member ports to bonded device */
+	if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
+		struct bond_ethdev_member_ports member_ports;
 		unsigned i;
 
-		memset(&slave_ports, 0, sizeof(slave_ports));
+		memset(&member_ports, 0, sizeof(member_ports));
 
-		if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
-				       &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+		if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
+				       &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to parse slave ports for bonded device %s",
+				     "Failed to parse member ports for bonded device %s",
 				     name);
 			return -1;
 		}
 
-		for (i = 0; i < slave_ports.slave_count; i++) {
-			if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+		for (i = 0; i < member_ports.member_count; i++) {
+			if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
 				RTE_BOND_LOG(ERR,
-					     "Failed to add port %d as slave to bonded device %s",
-					     slave_ports.slaves[i], name);
+					     "Failed to add port %d as member to bonded device %s",
+					     member_ports.members[i], name);
 			}
 		}
 
 	} else {
-		RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+		RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
 		return -1;
 	}
 
-	/* Parse/set primary slave port id*/
-	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+	/* Parse/set primary member port id*/
+	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
 	if (arg_count == 1) {
-		uint16_t primary_slave_port_id;
+		uint16_t primary_member_port_id;
 
 		if (rte_kvargs_process(kvlist,
-				       PMD_BOND_PRIMARY_SLAVE_KVARG,
-				       &bond_ethdev_parse_primary_slave_port_id_kvarg,
-				       &primary_slave_port_id) < 0) {
+				       PMD_BOND_PRIMARY_MEMBER_KVARG,
+				       &bond_ethdev_parse_primary_member_port_id_kvarg,
+				       &primary_member_port_id) < 0) {
 			RTE_BOND_LOG(INFO,
-				     "Invalid primary slave port id specified for bonded device %s",
+				     "Invalid primary member port id specified for bonded device %s",
 				     name);
 			return -1;
 		}
 
 		/* Set balance mode transmit policy*/
-		if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+		if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
 		    != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to set primary slave port %d on bonded device %s",
-				     primary_slave_port_id, name);
+				     "Failed to set primary member port %d on bonded device %s",
+				     primary_member_port_id, name);
 			return -1;
 		}
 	} else if (arg_count > 1) {
 		RTE_BOND_LOG(INFO,
-			     "Primary slave can be specified only once for bonded device %s",
+			     "Primary member can be specified only once for bonded device %s",
 			     name);
 		return -1;
 	}
@@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	/* configure slaves so we can pass mtu setting */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(dev, slave_ethdev) != 0) {
+	/* configure members so we can pass mtu setting */
+	for (i = 0; i < internals->member_count; i++) {
+		struct rte_eth_dev *member_ethdev =
+				&(rte_eth_devices[internals->members[i].port_id]);
+		if (member_configure(dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to configure slave device (%d)",
+				"bonded port (%d) failed to configure member device (%d)",
 				dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			return -1;
 		}
 	}
@@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
 RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
 
 RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
-	"slave=<ifc> "
+	"member=<ifc> "
 	"primary=<ifc> "
 	"mode=[0-6] "
 	"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e..56bc143a89 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -12,8 +12,6 @@ DPDK_23 {
 	rte_eth_bond_8023ad_ext_distrib_get;
 	rte_eth_bond_8023ad_ext_slowtx;
 	rte_eth_bond_8023ad_setup;
-	rte_eth_bond_8023ad_slave_info;
-	rte_eth_bond_active_slaves_get;
 	rte_eth_bond_create;
 	rte_eth_bond_free;
 	rte_eth_bond_link_monitoring_set;
@@ -23,11 +21,18 @@ DPDK_23 {
 	rte_eth_bond_mode_set;
 	rte_eth_bond_primary_get;
 	rte_eth_bond_primary_set;
-	rte_eth_bond_slave_add;
-	rte_eth_bond_slave_remove;
-	rte_eth_bond_slaves_get;
 	rte_eth_bond_xmit_policy_get;
 	rte_eth_bond_xmit_policy_set;
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	# added in 23.07
+	global:
+	rte_eth_bond_8023ad_member_info;
+	rte_eth_bond_active_members_get;
+	rte_eth_bond_member_add;
+	rte_eth_bond_member_remove;
+	rte_eth_bond_members_get;
+};
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39f..90f422ec11 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
 		":%02"PRIx8":%02"PRIx8":%02"PRIx8,	\
 		RTE_ETHER_ADDR_BYTES(&addr))
 
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t members[RTE_MAX_ETHPORTS];
+uint16_t members_count;
 
 static uint16_t BOND_PORT = 0xffff;
 
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
 };
 
 static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 {
 	int retval;
 	uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 		rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
 				"failed (res=%d)\n", BOND_PORT, retval);
 
-	for (i = 0; i < slaves_count; i++) {
-		if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
-			rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
-					slaves[i], BOND_PORT);
+	for (i = 0; i < members_count; i++) {
+		if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
+			rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
+					members[i], BOND_PORT);
 
 	}
 
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 	if (retval < 0)
 		rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
 
-	printf("Waiting for slaves to become active...");
+	printf("Waiting for members to become active...");
 	while (wait_counter) {
-		uint16_t act_slaves[16] = {0};
-		if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
-				slaves_count) {
+		uint16_t act_members[16] = {0};
+		if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
+				members_count) {
 			printf("\n");
 			break;
 		}
 		sleep(1);
 		printf("...");
 		if (--wait_counter == 0)
-			rte_exit(-1, "\nFailed to activate slaves\n");
+			rte_exit(-1, "\nFailed to activate members\n");
 	}
 
 	retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 			"send IP	- sends one ARPrequest through bonding for IP.\n"
 			"start		- starts listening ARPs.\n"
 			"stop		- stops lcore_main.\n"
-			"show		- shows some bond info: ex. active slaves etc.\n"
+			"show		- shows some bond info: ex. active members etc.\n"
 			"help		- prints help.\n"
 			"quit		- terminate all threads and quit.\n"
 		       );
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 			    struct cmdline *cl,
 			    __rte_unused void *data)
 {
-	uint16_t slaves[16] = {0};
+	uint16_t members[16] = {0};
 	uint8_t len = 16;
 	struct rte_ether_addr addr;
 	uint16_t i;
 	int ret;
 
-	for (i = 0; i < slaves_count; i++) {
+	for (i = 0; i < members_count; i++) {
 		ret = rte_eth_macaddr_get(i, &addr);
 		if (ret != 0) {
 			cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 
 	rte_spinlock_lock(&global_flag_stru_p->lock);
 	cmdline_printf(cl,
-			"Active_slaves:%d "
+			"Active_members:%d "
 			"packets received:Tot:%d Arp:%d IPv4:%d\n",
-			rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+			rte_eth_bond_active_members_get(BOND_PORT, members, len),
 			global_flag_stru_p->port_packets[0],
 			global_flag_stru_p->port_packets[1],
 			global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
 	/* initialize all ports */
-	slaves_count = nb_ports;
+	members_count = nb_ports;
 	RTE_ETH_FOREACH_DEV(i) {
-		slave_port_init(i, mbuf_pool);
-		slaves[i] = i;
+		member_port_init(i, mbuf_pool);
+		members[i] = i;
 	}
 
 	bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..85439e3a41 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,13 @@ struct rte_eth_dev_owner {
 #define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE  RTE_BIT32(0)
 /** Device supports link state interrupt */
 #define RTE_ETH_DEV_INTR_LSC              RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE          RTE_BIT32(2)
+/** Device is a bonded member */
+#define RTE_ETH_DEV_BONDED_MEMBER          RTE_BIT32(2)
+#define RTE_ETH_DEV_BONDED_SLAVE                         \
+	do {                                             \
+		RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) \
+		RTE_ETH_DEV_BONDED_MEMBER                \
+	} while (0)
 /** Device supports device removal interrupt */
 #define RTE_ETH_DEV_INTR_RMV              RTE_BIT32(3)
 /** Device is port representor */
-- 
2.39.1


^ permalink raw reply	[relevance 1%]

* [PATCH v3] net/bonding: replace master/slave to main/member
  2023-05-18  6:32  1% ` [PATCH v2] " Chaoyong He
@ 2023-05-18  7:01  1%   ` Chaoyong He
  2023-05-18  8:44  1%     ` [PATCH v4] " Chaoyong He
  0 siblings, 1 reply; 200+ results
From: Chaoyong He @ 2023-05-18  7:01 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw

From: Long Wu <long.wu@corigine.com>

This patch replaces the usage of the words 'master/slave' with the more
appropriate 'main/member' in the bonding PMD as well as in its docs
and examples. The test app and testpmd were also modified to use the
new wording.

The bonding PMD's public API was renamed according to the change
in wording:
rte_eth_bond_8023ad_slave_info is now called
rte_eth_bond_8023ad_member_info,
rte_eth_bond_active_slaves_get is now called
rte_eth_bond_active_members_get,
rte_eth_bond_slave_add is now called
rte_eth_bond_member_add,
rte_eth_bond_slave_remove is now called
rte_eth_bond_member_remove,
rte_eth_bond_slaves_get is now called
rte_eth_bond_members_get.

Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
RTE_ETH_DEV_BONDED_MEMBER.

Mark the old public APIs as deprecated and remove
them from the ABI.
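
For reference, a minimal usage sketch of the renamed API. Only the
function and macro names come from this patch; the port ids, the helper
names and the surrounding logic are illustrative assumptions, not part
of the change itself:

    /* Illustrative sketch: attach two member ports (ids 1 and 2 assumed)
     * to an already created bonding port and query the active members.
     */
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    static int
    attach_members(uint16_t bond_port)
    {
    	uint16_t members[RTE_MAX_ETHPORTS];
    	int n;

    	/* formerly rte_eth_bond_slave_add() */
    	if (rte_eth_bond_member_add(bond_port, 1) != 0 ||
    	    rte_eth_bond_member_add(bond_port, 2) != 0)
    		return -1;

    	if (rte_eth_dev_start(bond_port) != 0)
    		return -1;

    	/* formerly rte_eth_bond_active_slaves_get() */
    	n = rte_eth_bond_active_members_get(bond_port, members,
    			RTE_MAX_ETHPORTS);

    	return n < 0 ? -1 : 0;
    }

    /* formerly checked RTE_ETH_DEV_BONDED_SLAVE */
    static int
    is_bonded_member(uint16_t port_id)
    {
    	struct rte_eth_dev_info dev_info;

    	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
    		return 0;

    	return (*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) != 0;
    }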

Signed-off-by: Long Wu <long.wu@corigine.com>
Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: James Hershaw <james.hershaw@corigine.com>
---
v2:
* Modify related docs.
* Add 'RTE_DEPRECATED' to related APIs.
v3:
* Fix the check warning about 'CamelCase'.
---
 app/test-pmd/testpmd.c                        |  112 +-
 app/test-pmd/testpmd.h                        |    8 +-
 app/test/test_link_bonding.c                  | 2792 +++++++++--------
 app/test/test_link_bonding_mode4.c            |  588 ++--
 app/test/test_link_bonding_rssconf.c          |  166 +-
 doc/guides/howto/lm_bond_virtio_sriov.rst     |   24 +-
 doc/guides/nics/bnxt.rst                      |    4 +-
 doc/guides/prog_guide/img/bond-mode-1.svg     |    2 +-
 .../link_bonding_poll_mode_drv_lib.rst        |  222 +-
 drivers/net/bonding/bonding_testpmd.c         |  178 +-
 drivers/net/bonding/eth_bond_8023ad_private.h |   40 +-
 drivers/net/bonding/eth_bond_private.h        |  108 +-
 drivers/net/bonding/rte_eth_bond.h            |  126 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  372 +--
 drivers/net/bonding/rte_eth_bond_8023ad.h     |   75 +-
 drivers/net/bonding/rte_eth_bond_alb.c        |   44 +-
 drivers/net/bonding/rte_eth_bond_alb.h        |   20 +-
 drivers/net/bonding/rte_eth_bond_api.c        |  474 +--
 drivers/net/bonding/rte_eth_bond_args.c       |   32 +-
 drivers/net/bonding/rte_eth_bond_flow.c       |   54 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        | 1384 ++++----
 drivers/net/bonding/version.map               |   15 +-
 examples/bond/main.c                          |   40 +-
 lib/ethdev/rte_ethdev.h                       |    9 +-
 24 files changed, 3505 insertions(+), 3384 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f92523..d8fd87105a 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 }
 
 static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
 {
 #ifdef RTE_NET_BOND
 
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
+	portid_t member_pids[RTE_MAX_ETHPORTS];
 	struct rte_port *port;
-	int num_slaves;
-	portid_t slave_pid;
+	int num_members;
+	portid_t member_pid;
 	int i;
 
-	num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+	num_members = rte_eth_bond_members_get(bond_pid, member_pids,
 						RTE_MAX_ETHPORTS);
-	if (num_slaves < 0) {
-		fprintf(stderr, "Failed to get slave list for port = %u\n",
+	if (num_members < 0) {
+		fprintf(stderr, "Failed to get member list for port = %u\n",
 			bond_pid);
-		return num_slaves;
+		return num_members;
 	}
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		port = &ports[slave_pid];
+	for (i = 0; i < num_members; i++) {
+		member_pid = member_pids[i];
+		port = &ports[member_pid];
 		port->port_status =
 			is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
 	}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Starting a bonded port also starts all slaves under the bonded
+		 * Starting a bonded port also starts all members under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these members.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, false);
+			return change_bonding_member_port_status(port_id, false);
 	}
 
 	return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Stopping a bonded port also stops all slaves under the bonded
+		 * Stopping a bonded port also stops all members under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these members.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, true);
+			return change_bonding_member_port_status(port_id, true);
 	}
 
 	return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
 		port = &ports[pi];
 		/* Check if there is a port which is not started */
 		if ((port->port_status != RTE_PORT_STARTED) &&
-			(port->slave_flag == 0))
+			(port->member_flag == 0))
 			return 0;
 	}
 
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
 	struct rte_port *port = &ports[port_id];
 
 	if ((port->port_status != RTE_PORT_STOPPED) &&
-	    (port->slave_flag == 0))
+	    (port->member_flag == 0))
 		return 0;
 	return 1;
 }
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
 
 /*
  * Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no member is added. And its capability
+ * will be updated when add a new member device. So adding a member device need
  * to update the port configurations of bonding device.
  */
 static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
 		if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
 			continue;
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
 }
 
 static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
 {
 	struct rte_port *port;
-	portid_t slave_pid;
+	portid_t member_pid;
 	uint16_t i;
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		if (port_is_started(slave_pid) == 1) {
-			if (rte_eth_dev_stop(slave_pid) != 0)
+	for (i = 0; i < num_members; i++) {
+		member_pid = member_pids[i];
+		if (port_is_started(member_pid) == 1) {
+			if (rte_eth_dev_stop(member_pid) != 0)
 				fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
-					slave_pid);
+					member_pid);
 
-			port = &ports[slave_pid];
+			port = &ports[member_pid];
 			port->port_status = RTE_PORT_STOPPED;
 		}
 
-		clear_port_slave_flag(slave_pid);
+		clear_port_member_flag(member_pid);
 
-		/* Close slave device when testpmd quit or is killed. */
+		/* Close member device when testpmd quit or is killed. */
 		if (cl_quit == 1 || f_quit == 1)
-			rte_eth_dev_close(slave_pid);
+			rte_eth_dev_close(member_pid);
 	}
 }
 
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
 {
 	portid_t pi;
 	struct rte_port *port;
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
-	int num_slaves = 0;
+	portid_t member_pids[RTE_MAX_ETHPORTS];
+	int num_members = 0;
 
 	if (port_id_is_invalid(pid, ENABLED_WARN))
 		return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
 			flush_port_owned_resources(pi);
 #ifdef RTE_NET_BOND
 			if (port->bond_flag == 1)
-				num_slaves = rte_eth_bond_slaves_get(pi,
-						slave_pids, RTE_MAX_ETHPORTS);
+				num_members = rte_eth_bond_members_get(pi,
+						member_pids, RTE_MAX_ETHPORTS);
 #endif
 			rte_eth_dev_close(pi);
 			/*
-			 * If this port is bonded device, all slaves under the
+			 * If this port is bonded device, all members under the
 			 * device need to be removed or closed.
 			 */
-			if (port->bond_flag == 1 && num_slaves > 0)
-				clear_bonding_slave_device(slave_pids,
-							num_slaves);
+			if (port->bond_flag == 1 && num_members > 0)
+				clear_bonding_member_device(member_pids,
+							num_members);
 		}
 
 		free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
 	}
 }
 
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_member_flag(portid_t member_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 1;
+	port = &ports[member_pid];
+	port->member_flag = 1;
 }
 
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_member_flag(portid_t member_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 0;
+	port = &ports[member_pid];
+	port->member_flag = 0;
 }
 
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_member(portid_t member_pid)
 {
 	struct rte_port *port;
 	struct rte_eth_dev_info dev_info;
 	int ret;
 
-	port = &ports[slave_pid];
-	ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+	port = &ports[member_pid];
+	ret = eth_dev_info_get_print_err(member_pid, &dev_info);
 	if (ret != 0) {
 		TESTPMD_LOG(ERR,
 			"Failed to get device info for port id %d,"
-			"cannot determine if the port is a bonded slave",
-			slave_pid);
+			"cannot determine if the port is a bonded member",
+			member_pid);
 		return 0;
 	}
-	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) || (port->member_flag == 1))
 		return 1;
 	return 0;
 }
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3..7bc2f70323 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
-	uint8_t                 slave_flag : 1, /**< bonding slave port */
+	uint8_t                 member_flag : 1, /**< bonding member port */
 				bond_flag : 1, /**< port is bond device */
 				fwd_mac_swap : 1, /**< swap packet MAC before forward */
 				update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
 void dev_set_link_up(portid_t pid);
 void dev_set_link_down(portid_t pid);
 void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_member_flag(portid_t member_pid);
+void clear_port_member_flag(portid_t member_pid);
+uint8_t port_is_bonding_member(portid_t member_pid);
 
 int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
 		     enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2..82daf037f1 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
 #define INVALID_BONDING_MODE	(-1)
 
 
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
 uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
 
 struct link_bonding_unittest_params {
 	int16_t bonded_port_id;
-	int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
-	uint16_t bonded_slave_count;
+	int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+	uint16_t bonded_member_count;
 	uint8_t bonding_mode;
 
 	uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
 
 	struct rte_mempool *mbuf_pool;
 
-	struct rte_ether_addr *default_slave_mac;
+	struct rte_ether_addr *default_member_mac;
 	struct rte_ether_addr *default_bonded_mac;
 
 	/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
 
 static struct link_bonding_unittest_params default_params  = {
 	.bonded_port_id = -1,
-	.slave_port_ids = { -1 },
-	.bonded_slave_count = 0,
+	.member_port_ids = { -1 },
+	.bonded_member_count = 0,
 	.bonding_mode = BONDING_MODE_ROUND_ROBIN,
 
 	.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params  = {
 
 	.mbuf_pool = NULL,
 
-	.default_slave_mac = (struct rte_ether_addr *)slave_mac,
+	.default_member_mac = (struct rte_ether_addr *)member_mac,
 	.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
 
 	.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
 	return 0;
 }
 
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int members_initialized;
+static int mac_members_initialized;
 
 static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
 test_setup(void)
 {
 	int i, nb_mbuf_per_pool;
-	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
 
 	/* Allocate ethernet packet header with space for VLAN header */
 	if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
 	}
 
 	/* Create / Initialize virtual eth devs */
-	if (!slaves_initialized) {
+	if (!members_initialized) {
 		for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
@@ -243,16 +243,16 @@ test_setup(void)
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
 
-			test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+			test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
 					mac_addr, rte_socket_id(), 1);
-			TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+			TEST_ASSERT(test_params->member_port_ids[i] >= 0,
 					"Failed to create virtual virtual ethdev %s", pmd_name);
 
 			TEST_ASSERT_SUCCESS(configure_ethdev(
-					test_params->slave_port_ids[i], 1, 0),
+					test_params->member_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s", pmd_name);
 		}
-		slaves_initialized = 1;
+		members_initialized = 1;
 	}
 
 	return 0;
@@ -261,9 +261,9 @@ test_setup(void)
 static int
 test_create_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	/* Don't try to recreate bonded device if re-running test suite*/
 	if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
 			test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
 			test_params->bonded_port_id, test_params->bonding_mode);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of members %d is great than expected %d.",
+			current_member_count, 0);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of active members %d is great than expected %d.",
+			current_member_count, 0);
 
 	return 0;
 }
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
 }
 
 static int
-test_add_slave_to_bonded_device(void)
+test_add_member_to_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave (%d) to bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count]),
+			"Failed to add member (%d) to bonded port (%d).",
+			test_params->member_port_ids[test_params->bonded_member_count],
 			test_params->bonded_port_id);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
-			"Number of slaves (%d) is greater than expected (%d).",
-			current_slave_count, test_params->bonded_slave_count + 1);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
+			"Number of members (%d) is greater than expected (%d).",
+			current_member_count, test_params->bonded_member_count + 1);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-					"Number of active slaves (%d) is not as expected (%d).\n",
-					current_slave_count, 0);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+					"Number of active members (%d) is not as expected (%d).\n",
+					current_member_count, 0);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_member_count++;
 
 	return 0;
 }
 
 static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_member_to_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
+			test_params->member_port_ids[test_params->bonded_member_count]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
+			test_params->member_port_ids[test_params->bonded_member_count]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
 
 
 static int
-test_remove_slave_from_bonded_device(void)
+test_remove_member_from_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 	struct rte_ether_addr read_mac_addr, *mac_addr;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]),
-			"Failed to remove slave %d from bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count-1]),
+			"Failed to remove member %d from bonded port (%d).",
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			test_params->bonded_port_id);
 
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
-			"Number of slaves (%d) is great than expected (%d).\n",
-			current_slave_count, test_params->bonded_slave_count - 1);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
+			"Number of members (%d) is great than expected (%d).\n",
+			current_member_count, test_params->bonded_member_count - 1);
 
 
-	mac_addr = (struct rte_ether_addr *)slave_mac;
+	mac_addr = (struct rte_ether_addr *)member_mac;
 	mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
-			test_params->bonded_slave_count-1;
+			test_params->bonded_member_count-1;
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			&read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->member_port_ids[test_params->bonded_member_count-1]);
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->member_port_ids[test_params->bonded_member_count-1]);
 
 	virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
 			0);
 
-	test_params->bonded_slave_count--;
+	test_params->bonded_member_count--;
 
 	return 0;
 }
 
 static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_member_from_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+	TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
 			test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+			test_params->member_port_ids[test_params->bonded_member_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
-			test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
+			test_params->member_port_ids[0],
+			test_params->member_port_ids[test_params->bonded_member_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
 static int bonded_id = 2;
 
 static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_member_to_bonded_device(void)
 {
-	int port_id, current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int port_id, current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 	char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-	test_add_slave_to_bonded_device();
+	test_add_member_to_bonded_device();
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 1,
-			"Number of slaves (%d) is not that expected (%d).",
-			current_slave_count, 1);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 1,
+			"Number of members (%d) is not that expected (%d).",
+			current_member_count, 1);
 
 	snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
 
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
 			rte_socket_id());
 	TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
 
-	TEST_ASSERT(rte_eth_bond_slave_add(port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+	TEST_ASSERT(rte_eth_bond_member_add(port_id,
+			test_params->member_port_ids[test_params->bonded_member_count - 1])
 			< 0,
-			"Added slave (%d) to bonded port (%d) unexpectedly.",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			"Added member (%d) to bonded port (%d) unexpectedly.",
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			port_id);
 
-	return test_remove_slave_from_bonded_device();
+	return test_remove_member_from_bonded_device();
 }
 
 
 static int
-test_get_slaves_from_bonded_device(void)
+test_get_members_from_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+			"Failed to add member to bonded device");
 
 	/* Invalid port id */
-	current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+	current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	/* Invalid slaves pointer */
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+	/* Invalid members pointer */
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
 			NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_member_count < 0,
+			"Invalid member array unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
+	current_member_count = rte_eth_bond_active_members_get(
 			test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_member_count < 0,
+			"Invalid member array unexpectedly succeeded");
 
 	/* non bonded device*/
-	current_slave_count = rte_eth_bond_slaves_get(
-			test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_members_get(
+			test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->slave_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->member_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-			"Failed to remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+			"Failed to remove members from bonded device");
 
 	return 0;
 }
 
 
 static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_members_to_from_bonded_device(void)
 {
 	int i;
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device");
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device");
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"Failed to remove slaves from bonded device");
+		TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+				"Failed to remove members from bonded device");
 
 	return 0;
 }
 
 static void
-enable_bonded_slaves(void)
+enable_bonded_members(void)
 {
 	int i;
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
 				1);
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->member_port_ids[i], 1);
 	}
 }
 
@@ -556,34 +556,36 @@ test_start_bonded_device(void)
 {
 	struct rte_eth_link link_status;
 
-	int current_slave_count, current_bonding_mode, primary_port;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count, current_bonding_mode, primary_port;
+	uint16_t members[RTE_MAX_ETHPORTS];
 	int retval;
 
-	/* Add slave to bonded device*/
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	/* Add member to bonded device*/
+	TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+			"Failed to add member to bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
-	/* Change link status of virtual pmd so it will be added to the active
-	 * slave list of the bonded device*/
+	/*
+	 * Change link status of virtual pmd so it will be added to the active
+	 * member list of the bonded device.
+	 */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+			test_params->member_port_ids[test_params->bonded_member_count-1], 1);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of active members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
 	current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
 	TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +593,9 @@ test_start_bonded_device(void)
 			current_bonding_mode, test_params->bonding_mode);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port (%d) is not expected value (%d).",
-			primary_port, test_params->slave_port_ids[0]);
+			primary_port, test_params->member_port_ids[0]);
 
 	retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
 	TEST_ASSERT(retval >= 0,
@@ -609,8 +611,8 @@ test_start_bonded_device(void)
 static int
 test_stop_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	struct rte_eth_link link_status;
 	int retval;
@@ -627,29 +629,29 @@ test_stop_bonded_device(void)
 			"Bonded port (%d) status (%d) is not expected value (%d).",
 			test_params->bonded_port_id, link_status.link_status, 0);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, 0);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of active members (%d) is not expected value (%d).",
+			current_member_count, 0);
 
 	return 0;
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	/* Clean up and remove slaves from bonded device */
+	/* Clean up and remove members from bonded device */
 	free_virtualpmd_tx_queue();
-	while (test_params->bonded_slave_count > 0)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"test_remove_slave_from_bonded_device failed");
+	while (test_params->bonded_member_count > 0)
+		TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+				"test_remove_member_from_bonded_device failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -681,10 +683,10 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
 				bonding_modes[i]),
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->member_port_ids[0]);
 
 		TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 				bonding_modes[i]),
@@ -704,26 +706,26 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+		bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
 		TEST_ASSERT(bonding_mode < 0,
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->member_port_ids[0]);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_set_primary_slave(void)
+test_set_primary_member(void)
 {
 	int i, j, retval;
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr *expected_mac_addr;
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.");
+	/* Add 4 members to bonded device */
+	for (i = test_params->bonded_member_count; i < 4; i++)
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 			BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +734,34 @@ test_set_primary_slave(void)
 
 	/* Invalid port ID */
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
-			test_params->slave_port_ids[i]),
+			test_params->member_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
-			test_params->slave_port_ids[i]),
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
+			test_params->member_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
-	/* Set slave as primary
-	 * Verify slave it is now primary slave
-	 * Verify that MAC address of bonded device is that of primary slave
-	 * Verify that MAC address of all bonded slaves are that of primary slave
+	/* Set member as primary
+	 * Verify that it is now the primary member
+	 * Verify that MAC address of bonded device is that of primary member
+	 * Verify that the MAC address of all bonded members is that of the primary member
 	 */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-				test_params->slave_port_ids[i]),
+				test_params->member_port_ids[i]),
 				"Failed to set bonded port (%d) primary port to (%d)",
-				test_params->bonded_port_id, test_params->slave_port_ids[i]);
+				test_params->bonded_port_id, test_params->member_port_ids[i]);
 
 		retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
 		TEST_ASSERT(retval >= 0,
 				"Failed to read primary port from bonded port (%d)\n",
 					test_params->bonded_port_id);
 
-		TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+		TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
 				"Bonded port (%d) primary port (%d) not expected value (%d)\n",
 				test_params->bonded_port_id, retval,
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 
 		/* stop/start bonded eth dev to apply new MAC */
 		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +772,14 @@ test_set_primary_slave(void)
 				"Failed to start bonded port %d",
 				test_params->bonded_port_id);
 
-		expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+		expected_mac_addr = (struct rte_ether_addr *)&member_mac;
 		expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Check primary slave MAC */
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		/* Check primary member MAC */
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
@@ -789,16 +792,17 @@ test_set_primary_slave(void)
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
 
-		/* Check other slaves MACs */
+		/* Check other members MACs */
 		for (j = 0; j < 4; j++) {
 			if (j != i) {
-				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
+						test_params->member_port_ids[j],
 						&read_mac_addr),
 						"Failed to get mac address (port %d)",
-						test_params->slave_port_ids[j]);
+						test_params->member_port_ids[j]);
 				TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 						sizeof(read_mac_addr)),
-						"slave port mac address not set to that of primary "
+						"member port mac address not set to that of primary "
 						"port");
 			}
 		}
@@ -809,14 +813,14 @@ test_set_primary_slave(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
 			"read primary port from expectedly");
 
-	/* Test with slave port */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+	/* Test with member port */
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
 			"read primary port from expectedly\n");
 
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to stop and remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+			"Failed to stop and remove members from bonded device");
 
-	/* No slaves  */
+	/* No members  */
 	TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id)  < 0,
 			"read primary port from expectedly\n");
 
@@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
 
 	/* Non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
-			test_params->slave_port_ids[0],	mac_addr),
+			test_params->member_port_ids[0], mac_addr),
 			"Expected call to failed as invalid port specified.");
 
 	/* NULL MAC address */
@@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
 			"Failed to set MAC address on bonded port (%d)",
 			test_params->bonded_port_id);
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.\n");
+	/* Add 4 members to bonded device */
+	for (i = test_params->bonded_member_count; i < 4; i++) {
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device.\n");
 	}
 
 	/* Check bonded MAC */
@@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port");
 
-	/* Check other slaves MACs */
+	/* Check other members MACs */
 	for (i = 0; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port mac address not set to that of primary port");
+				"member port mac address not set to that of primary port");
 	}
 
 	/* test resetting mac address on bonded device */
@@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
 			test_params->bonded_port_id);
 
 	TEST_ASSERT_FAIL(
-			rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+			rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
 			"Reset MAC address on bonded port (%d) unexpectedly",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* test resetting mac address on bonded device with no slaves */
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to remove slaves and stop bonded device");
+	/* test resetting mac address on bonded device with no members */
+	TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+			"Failed to remove members and stop bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
 			"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
 	return 0;
 }
 
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
 
 static int
 test_set_bonded_port_initialization_mac_assignment(void)
 {
-	int i, slave_count;
+	int i, member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	static int bonded_port_id = -1;
-	static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+	static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
 
-	struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+	struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
 
 	/* Initialize default values for MAC addresses */
-	memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
-	memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+	memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
+	memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
 
 	/*
-	 * 1. a - Create / configure  bonded / slave ethdevs
+	 * 1. a - Create / configure bonded / member ethdevs
 	 */
 	if (bonded_port_id == -1) {
 		bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
 					"Failed to configure bonded ethdev");
 	}
 
-	if (!mac_slaves_initialized) {
-		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	if (!mac_members_initialized) {
+		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-			slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+			member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 				i + 100;
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
-				"eth_slave_%d", i);
+				"eth_member_%d", i);
 
-			slave_port_ids[i] = virtual_ethdev_create(pmd_name,
-					&slave_mac_addr, rte_socket_id(), 1);
+			member_port_ids[i] = virtual_ethdev_create(pmd_name,
+					&member_mac_addr, rte_socket_id(), 1);
 
-			TEST_ASSERT(slave_port_ids[i] >= 0,
-					"Failed to create slave ethdev %s",
+			TEST_ASSERT(member_port_ids[i] >= 0,
+					"Failed to create member ethdev %s",
 					pmd_name);
 
-			TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+			TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s",
 					pmd_name);
 		}
-		mac_slaves_initialized = 1;
+		mac_members_initialized = 1;
 	}
 
 
 	/*
-	 * 2. Add slave ethdevs to bonded device
+	 * 2. Add member ethdevs to bonded device
 	 */
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to add slave (%d) to bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
+				member_port_ids[i]),
+				"Failed to add member (%d) to bonded port (%d).",
+				member_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	member_count = rte_eth_bond_members_get(bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
-			"Number of slaves (%d) is not as expected (%d)",
-			slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
+			"Number of members (%d) is not as expected (%d)",
+			member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
 
 
 	/*
@@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
 
 
 	/* 4. a - Start bonded ethdev
-	 *    b - Enable slave devices
-	 *    c - Verify bonded/slaves ethdev MAC addresses
+	 *    b - Enable member devices
+	 *    c - Verify bonded/members ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
 			"Failed to start bonded pmd eth device %d.",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				slave_port_ids[i], 1);
+				member_port_ids[i], 1);
 	}
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
+			member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 
 	/* 7. a - Change primary port
 	 *    b - Stop / Start bonded port
-	 *    d - Verify slave ethdev MAC addresses
+	 *    c - Verify member ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
-			slave_port_ids[2]),
+			member_port_ids[2]),
 			"failed to set primary port on bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
+			member_port_ids[2]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 	/* 6. a - Stop bonded ethdev
-	 *    b - remove slave ethdevs
-	 *    c - Verify slave ethdevs MACs are restored
+	 *    b - remove member ethdevs
+	 *    c - Verify member ethdevs MACs are restored
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
 			"Failed to stop bonded port %u",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to remove slave %d from bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
+				member_port_ids[i]),
+				"Failed to remove member %d from bonded port (%d).",
+				member_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	member_count = rte_eth_bond_members_get(bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of slaves (%d) is great than expected (%d).",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(member_count, 0,
+			"Number of members (%d) is greater than expected (%d).",
+			member_count, 0);
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 	return 0;
 }
 
 
 static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
-		uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
+		uint16_t number_of_members, uint8_t enable_member)
 {
 	/* Configure bonded device */
 	TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
 			bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
-			"with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
-			number_of_slaves);
-
-	/* Add slaves to bonded device */
-	while (number_of_slaves > test_params->bonded_slave_count)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave (%d to  bonding port (%d).",
-				test_params->bonded_slave_count - 1,
+			"with (%d) members.", test_params->bonded_port_id, bonding_mode,
+			number_of_members);
+
+	/* Add members to bonded device */
+	while (number_of_members > test_params->bonded_member_count)
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member (%d) to bonding port (%d).",
+				test_params->bonded_member_count - 1,
 				test_params->bonded_port_id);
 
 	/* Set link bonding mode  */
@@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
-	if (enable_slave)
-		enable_bonded_slaves();
+	if (enable_member)
+		enable_bonded_members();
 
 	return 0;
 }
 
 static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_member_after_bonded_device_started(void)
 {
 	int i;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
-			"Failed to add slaves to bonded device");
+			"Failed to add members to bonded device");
 
-	/* Enabled slave devices */
-	for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+	/* Enable member devices */
+	for (i = 0; i < test_params->bonded_member_count + 1; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->member_port_ids[i], 1);
 	}
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave to bonded port.\n");
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count]),
+			"Failed to add member to bonded port.\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count]);
+			test_params->member_port_ids[test_params->bonded_member_count]);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_member_count++;
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT	4
+#define TEST_STATUS_INTERRUPT_MEMBER_COUNT	4
 #define TEST_LSC_WAIT_TIMEOUT_US	500000
 
 int test_lsc_interrupt_count;
@@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
 static int
 test_status_interrupt(void)
 {
-	int slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	/* initialized bonding device with T slaves */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* initialized bonding device with T members */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 1,
-			TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+			TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	test_lsc_interrupt_count = 0;
@@ -1253,27 +1258,27 @@ test_status_interrupt(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d)",
+			member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
 
-	/* Bring all 4 slaves link status to down and test that we have received a
+	/* Bring all 4 members' link status down and test that we have received
 	 * lsc interrupts */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->member_port_ids[2], 0);
 
 	TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
 			"Received a link status change interrupt unexpectedly");
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1281,18 +1286,18 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(member_count, 0,
+			"Number of active members (%d) is not as expected (%d)",
+			member_count, 0);
 
-	/* bring one slave port up so link status will change */
+	/* bring one member port up so link status will change */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->member_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1301,12 +1306,12 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	/* Verify that calling the same slave lsc interrupt doesn't cause another
+	/* Verify that calling the same member lsc interrupt doesn't cause another
 	 * lsc interrupt from bonded device */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->member_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
 			"received unexpected interrupt");
@@ -1320,8 +1325,8 @@ test_status_interrupt(void)
 				RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 				&test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size <= MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)burst_size / test_params->bonded_slave_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				(uint64_t)burst_size / test_params->bonded_member_count,
+				"Member Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-				burst_size / test_params->bonded_slave_count);
+				burst_size / test_params->bonded_member_count);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
 			pkt_burst, burst_size), 0,
 			"tx burst return unexpected value");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
 		rte_pktmbuf_free(mbufs[i]);
 }
 
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT		(2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE		(64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT		(22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(1)
+#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT		(2)
+#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE		(64)
+#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT		(22)
+#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX	(1)
 
 static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_member_tx_fail(void)
 {
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 
 	int i, first_fail_idx, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0,
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
 	/* Copy references to packets which we expect not to be transmitted */
-	first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			(TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
-			TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+	first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			(TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
+			TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
+			TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
 
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
-				(i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+				(i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
 	}
 
-	/* Set virtual slave to only fail transmission of
-	 * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+	/*
+	 * Set virtual member to only fail transmission of
+	 * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+			(uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		int slave_expected_tx_count;
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		int member_expected_tx_count;
 
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 
-		slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
-				test_params->bonded_slave_count;
+		member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
+				test_params->bonded_member_count;
 
-		if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
-			slave_expected_tx_count = slave_expected_tx_count -
-					TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+		if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
+			member_expected_tx_count = member_expected_tx_count -
+					TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
 
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)slave_expected_tx_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[i],
-				(unsigned int)port_stats.opackets, slave_expected_tx_count);
+				(uint64_t)member_expected_tx_count,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[i],
+				(unsigned int)port_stats.opackets, member_expected_tx_count);
 	}
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
-	free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+	free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_member(void)
 {
 	struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 	int i, j, burst_size = 25;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
 			"burst generation failed");
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 
 
-		/* Verify bonded slave devices rx count */
-		/* Verify slave ports tx stats */
-		for (j = 0; j < test_params->bonded_slave_count; j++) {
-			rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+		/* Verify bonded member devices rx count */
+		/* Verify member ports rx stats */
+		for (j = 0; j < test_params->bonded_member_count; j++) {
+			rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 
 			if (i == j) {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, burst_size);
 			} else {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 
-			/* Reset bonded slaves stats */
-			rte_eth_stats_reset(test_params->slave_port_ids[j]);
+			/* Reset bonded members stats */
+			rte_eth_stats_reset(test_params->member_port_ids[j]);
 		}
 		/* reset bonded device stats */
 		rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
 	}
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
 
 static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_members(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+	int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
 	int i, nb_rx;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
 				burst_size[i], "burst generation failed");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0],
 			(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[2],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[2],
 				(unsigned int)port_stats.ipackets, burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3],
 			(unsigned int)port_stats.ipackets, 0);
 
 	/* free mbufs */
@@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+			&expected_mac_addr_2),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 				BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-				"Failed to initialize bonded device with slaves");
+				"Failed to initialize bonded device with members");
 
-	/* Verify that all MACs are the same as first slave added to bonded dev */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	/* Verify that all MACs are the same as first member added to bonded dev */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of primary port",
+				test_params->member_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->member_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary"
+				"member port (%d) mac address has changed to that of primary"
 				" port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagate to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to the bonded device and members.
+	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params->bonded_port_id);
@@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(
 			memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary"
-				" port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary"
+				" port", test_params->member_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
-				sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
-				" that of new primary port\n", test_params->slave_port_ids[i]);
+				sizeof(read_mac_addr)), "member port (%d) mac address not set to"
+				" that of new primary port\n", test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 	int i, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
 	TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 1,
-				"slave port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not enabled",
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
 				"Port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
 
 static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_member_link_status_change_behaviour(void)
 {
 	struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
-	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 
 	struct rte_eth_stats port_stats;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	/* NULL all pointers in array to simplify cleanup */
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+	/* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
 	 * in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify that the current and active member counts are as expected */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
 
-	/* Set 2 slaves eth_devs link status to down */
+	/* Set 2 members eth_devs link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count,
-			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).\n",
-			slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count,
+			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).\n",
+			member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
 
 	burst_size = 20;
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not sent on members with link status down:
 	 *
 	 * 1. Generate test burst of traffic
 	 * 2. Transmit burst on bonded eth_dev
 	 * 3. Verify stats for bonded eth_dev (opackets = burst_size)
-	 * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
 	TEST_ASSERT_EQUAL(
 			generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+			test_params->member_port_ids[0], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+			test_params->member_port_ids[1], (int)port_stats.opackets, 0);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+			test_params->member_port_ids[2], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+			test_params->member_port_ids[3], (int)port_stats.opackets, 0);
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not sent on members with link status down:
 	 *
 	 * 1. Generate test bursts of traffic
 	 * 2. Add bursts on to virtual eth_devs
 	 * 3. Rx burst on bonded eth_dev, expected (burst_ size *
-	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
 	 * 4. Verify stats for bonded eth_dev
-	 * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 6. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
-	for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size);
 	}
 
@@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
 
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
 
 
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
 
 static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_member_link_status_change(void)
 {
 	struct rte_ether_addr *mac_addr =
-		(struct rte_ether_addr *)polling_slave_mac;
-	char slave_name[RTE_ETH_NAME_MAX_LEN];
+		(struct rte_ether_addr *)polling_member_mac;
+	char member_name[RTE_ETH_NAME_MAX_LEN];
 
 	int i;
 
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
-		/* Generate slave name / MAC address */
-		snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
+		/* Generate member name / MAC address */
+		snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
 		mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Create slave devices with no ISR Support */
-		if (polling_test_slaves[i] == -1) {
-			polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+		/* Create member devices with no ISR Support */
+		if (polling_test_members[i] == -1) {
+			polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
 					rte_socket_id(), 0);
-			TEST_ASSERT(polling_test_slaves[i] >= 0,
-					"Failed to create virtual virtual ethdev %s\n", slave_name);
+			TEST_ASSERT(polling_test_members[i] >= 0,
+					"Failed to create virtual ethdev %s\n", member_name);
 
-			/* Configure slave */
-			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
-					"Failed to configure virtual ethdev %s(%d)", slave_name,
-					polling_test_slaves[i]);
+			/* Configure member */
+			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
+					"Failed to configure virtual ethdev %s(%d)", member_name,
+					polling_test_members[i]);
 		}
 
-		/* Add slave to bonded device */
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-				polling_test_slaves[i]),
-				"Failed to add slave %s(%d) to bonded device %d",
-				slave_name, polling_test_slaves[i],
+		/* Add member to bonded device */
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+				polling_test_members[i]),
+				"Failed to add member %s(%d) to bonded device %d",
+				member_name, polling_test_members[i],
 				test_params->bonded_port_id);
 	}
 
@@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	/* link status change callback for first slave link up */
+	/* link status change callback for first member link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+	virtual_ethdev_set_link_status(polling_test_members[0], 1);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
 
 
-	/* no link status change callback for second slave link up */
+	/* no link status change callback for second member link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+	virtual_ethdev_set_link_status(polling_test_members[1], 1);
 
 	TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
 
-	/* link status change callback for both slave links down */
+	/* link status change callback for both member links down */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+	virtual_ethdev_set_link_status(polling_test_members[0], 0);
+	virtual_ethdev_set_link_status(polling_test_members[1], 0);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
 
@@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			&test_params->bonded_port_id);
 
 
-	/* Clean up and remove slaves from bonded device */
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+	/* Clean up and remove members from bonded device */
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
 
 		TEST_ASSERT_SUCCESS(
-				rte_eth_bond_slave_remove(test_params->bonded_port_id,
-						polling_test_slaves[i]),
-				"Failed to remove slave %d from bonded port (%d)",
-				polling_test_slaves[i], test_params->bonded_port_id);
+				rte_eth_bond_member_remove(test_params->bonded_port_id,
+						polling_test_members[i]),
+				"Failed to remove member %d from bonded port (%d)",
+				polling_test_members[i], test_params->bonded_port_id);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 
@@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	initialize_eth_header(test_params->pkt_eth_hdr,
 			(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
-		if (test_params->slave_port_ids[i] == primary_port) {
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
+		if (test_params->member_port_ids[i] == primary_port) {
 			TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Member Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets,
-					burst_size / test_params->bonded_slave_count);
+					burst_size / test_params->bonded_member_count);
 		} else {
 			TEST_ASSERT_EQUAL(port_stats.opackets, 0,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Member Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets, 0);
 		}
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 			pkts_burst, burst_size), 0, "Sending empty burst failed");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
 
 static int
 test_activebackup_rx_burst(void)
@@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
 
 	int i, j, burst_size = 17;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
 				&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
 				"rte_eth_rx_burst failed");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->member_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded member devices rx count */
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)", test_params->slave_port_ids[i],
-							(unsigned int)port_stats.ipackets, burst_size);
+							"Member Port (%d) ipackets value (%u) not as "
+							"expected (%d)",
+							test_params->member_port_ids[i],
+							(unsigned int)port_stats.ipackets,
+							burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)\n", test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as "
+							"expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected "
-						"(%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected "
+						"(%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->member_port_ids[i]);
+		if (primary_port == test_params->member_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, 1,
-					"slave port (%d) promiscuous mode not enabled",
-					test_params->slave_port_ids[i]);
+					"member port (%d) promiscuous mode not enabled",
+					test_params->member_port_ids[i]);
 		} else {
 			TEST_ASSERT_EQUAL(promiscuous_en, 0,
-					"slave port (%d) promiscuous mode enabled",
-					test_params->slave_port_ids[i]);
+					"member port (%d) promiscuous mode enabled",
+					test_params->member_port_ids[i]);
 		}
 
 	}
@@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not disabled\n",
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that bonded MAC is that of first member and that the other member
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->member_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_member_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, member_count, primary_port;
 
 	burst_size = 21;
 
@@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 			"generate_test_burst failed");
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify that the current and active member counts are as expected */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 members down and verify active member count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
+	/* Bring primary port down, verify that active member count is 3 and primary
 	 *  has changed */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
 			3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
 			"Primary port not as expected");
 
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary member */
 
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(
 			test_params->bonded_port_id, 0, &pkt_burst[0][0],
 			burst_size), burst_size, "rte_eth_tx_burst failed");
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"generate_test_burst failed");
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-			test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+			test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected",
 			test_params->bonded_port_id);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 /** Balance Mode Tests */
@@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 static int
 test_balance_xmit_policy_configuration(void)
 {
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Invalid port id */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
 
 	/* Set xmit policy on non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
-			test_params->slave_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
+			test_params->member_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
 			"Expected call to failed as invalid port specified.");
 
 
@@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
 			"Expected call to failed as invalid port specified.");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
 
 static int
 test_balance_l2_tx_burst(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
-	int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
+	int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
 
 	uint16_t pktlen;
 	int i;
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
 			"failed to generate packet burst");
 
 	/* Send burst 1 on bonded port */
-	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 				&pkts_burst[i][0], burst_size[i]),
 				burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
 			burst_size[0] + burst_size[1]);
 
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)\n",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			burst_size[1]);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
 			test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
 			0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			burst_size_1), 0, "Expected zero packet");
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, 0, pkts_burst_1,
 			burst_size_1), 0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
 	return balance_l34_tx_burst(0, 0, 0, 0, 1);
 }
 
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT			(2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1			(40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2			(20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT		(25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(0)
+#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT			(2)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1			(40)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2			(20)
+#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT		(25)
+#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX	(0)
 
 static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_member_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
-	struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+	struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
+	struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
 
-	struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+	struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, first_tx_fail_idx, tx_count_1, tx_count_2;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0,
-			TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
 			"Failed to generate test packet burst 1");
 
-	first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+	first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
 
 	/* copy mbuf references for expected transmission failures */
-	for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+	for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
 		expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
 
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
 			"Failed to generate test packet burst 2");
 
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/*
+	 * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+	 * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 
 	/* Transmit burst 1 */
 	tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
 
-	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Transmit burst 2 */
 	tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
-	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
 
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+			(uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			(TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			(TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
-	/* Verify slave ports tx stats */
+	/* Verify member ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[0],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[1],
+				(uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[1],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
 
 static int
 test_balance_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+	int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
 				0, 0), burst_size[i],
 				"failed to generate packet burst");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[0],
 				(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],	(unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->member_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->member_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that bonded MAC is that of first member and that the other member
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]),
+			test_params->member_port_ids[1]),
 			"Failed to set bonded port (%d) primary port to (%d)\n",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected\n",
-				test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected\n",
+				test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
 
 static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_member_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+	/* Initialize bonded device with 4 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			"Failed to set balance xmit policy.");
 
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count are as expected */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
 
-	/* Set 2 slaves link status to down */
+	/* Set 2 members link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
-	/* Send to sets of packet burst and verify that they are balanced across
-	 *  slaves */
+	/*
+	 * Send two sets of packet bursts and verify that they are balanced across
+	 *  members.
+	 */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->member_port_ids[0], (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[2], (int)port_stats.opackets,
+			test_params->member_port_ids[2], (int)port_stats.opackets,
 			burst_size);
 
-	/* verify that all packets get send on primary slave when no other slaves
+	/* verify that all packets get sent on primary member when no other members
 	 * are available */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->member_port_ids[2], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 1);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 1);
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->member_port_ids[0], (int)port_stats.opackets,
 			burst_size + burst_size);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 1);
+			test_params->member_port_ids[2], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
-	for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"Failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on members with link status down */
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
 			MAX_PKT_BURST);
@@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.ipackets,
 			burst_size * 3);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 2, 1),
 			"Failed to initialise bonded device");
 
@@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)burst_size * test_params->bonded_slave_count,
+			(uint64_t)burst_size * test_params->bonded_member_count,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				"Member Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id,
 				(unsigned int)port_stats.opackets, burst_size);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try to transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
 			test_params->bonded_port_id, 0, pkts_burst, burst_size),  0,
 			"transmitted an unexpected number of packets");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT		(3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE			(40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT	(15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT	(10)
+#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT		(3)
+#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE			(40)
+#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT	(15)
+#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT	(10)
 
 static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_member_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
-	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+	struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
+	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0,
-			TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
-		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+	for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
 	}
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/*
+	 * Set each virtual member to fail transmission of the last
+	 * TEST_BCAST_MEMBER_TX_FAIL_MAX/MIN_PACKETS_COUNT packets of the burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[0],
+			test_params->member_port_ids[0],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[1],
+			test_params->member_port_ids[1],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[2],
+			test_params->member_port_ids[2],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[0],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->member_port_ids[0],
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[1],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			test_params->member_port_ids[1],
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[2],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->member_port_ids[2],
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 	/* Transmit burst */
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
 	}
 
-	/* Verify slave ports tx stats */
+	/* Verify member ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 
 	/* Verify that all mbufs who transmission failed have a ref value of one */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst[tx_count],
-		TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+		TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
 
 static int
 test_broadcast_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+	int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
 				burst_size[i], "failed to generate packet burst");
 	}
 
-	/* Add rx data to slave 0 */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs allocate for rx testing */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->member_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->member_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that all MACs are the same as first slave added to bonded
+	/* Verify that all MACs are the same as first member added to bonded
 	 * device */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of primary port",
+				test_params->member_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->member_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->member_port_ids[2]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary "
+				"member port (%d) mac address has changed to that of primary "
 				"port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary  port",
-			test_params->slave_port_ids[i]);
+			test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary "
+				"port", test_params->member_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->member_port_ids[i]);
 
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary "
+				"port", test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
 static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_member_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
 				1), "Failed to initialise bonded device");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify Current Members Count /Active Member Count is */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	/* Set 2 slaves link status to down */
+	/* Set 2 members link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++)
-		rte_eth_stats_reset(test_params->slave_port_ids[i]);
+	for (i = 0; i < test_params->bonded_member_count; i++)
+		rte_eth_stats_reset(test_params->member_port_ids[i]);
 
-	/* Verify that pkts are not sent on slaves with link status down */
+	/* Verify that pkts are not sent on members with link status down */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"rte_eth_tx_burst failed\n");
 
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
-	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
 			"(%d) port_stats.opackets (%d) not as expected (%d)\n",
 			test_params->bonded_port_id, (int)port_stats.opackets,
-			burst_size * slave_count);
+			burst_size * member_count);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[1]);
+				test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[2]);
+				test_params->member_port_ids[2]);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 
-	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on members with link status down */
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
 			test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
 			burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4146,21 +4186,21 @@ testsuite_teardown(void)
 	free(test_params->pkt_eth_hdr);
 	test_params->pkt_eth_hdr = NULL;
 
-	/* Clean up and remove slaves from bonded device */
-	remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	remove_members_and_stop_bonded_device();
 }
 
 static void
 free_virtualpmd_tx_queue(void)
 {
-	int i, slave_port, to_free_cnt;
+	int i, member_port, to_free_cnt;
 	struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
 
 	/* Free tx queue of virtual pmd */
-	for (slave_port = 0; slave_port < test_params->bonded_slave_count;
-			slave_port++) {
+	for (member_port = 0; member_port < test_params->bonded_member_count;
+			member_port++) {
 		to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_port],
+				test_params->member_port_ids[member_port],
 				pkts_to_free, MAX_PKT_BURST);
 		for (i = 0; i < to_free_cnt; i++)
 			rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
 	uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
 	uint16_t pktlen;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
 			(BONDING_MODE_TLB, 1, 3, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.\n");
@@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		} else {
 			initialize_eth_header(test_params->pkt_eth_hdr,
-					(struct rte_ether_addr *)test_params->default_slave_mac,
+					(struct rte_ether_addr *)test_params->default_member_mac,
 					(struct rte_ether_addr *)dst_mac_0,
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		}
@@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
 			burst_size);
 
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
 		sum_ports_opackets += port_stats[i].opackets;
 	}
 
 	TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
-			"Total packets sent by slaves is not equal to packets sent by bond interface");
+			"Total packets sent by members is not equal to packets sent by bond interface");
 
-	/* checking if distribution of packets is balanced over slaves */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* checking if distribution of packets is balanced over members */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT(port_stats[i].obytes > 0 &&
 				port_stats[i].obytes < all_bond_obytes,
-						"Packets are not balanced over slaves");
+						"Packets are not balanced over members");
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try to transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
 			burst_size);
 	TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
 
-	/* Clean ugit checkout masterp and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
 
 static int
 test_tlb_rx_burst(void)
@@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
 
 	uint16_t i, j, nb_rx, burst_size = 17;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+			TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
 			"Failed to initialize bonded device");
 
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
 
 		TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->member_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded member devices rx count */
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-						"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-						test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+						test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0, 4, 1),
 			"Failed to initialize bonded device");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 			"Port (%d) promiscuous mode not enabled\n",
 			test_params->bonded_port_id);
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->member_port_ids[i]);
+		if (primary_port == test_params->member_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 					"Port (%d) promiscuous mode not enabled\n",
 					test_params->bonded_port_id);
@@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not disabled\n",
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0, 2, 1),
 			"Failed to initialize bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
-	 * MAC hasn't been changed */
+	/*
+	 * Verify that the bonded MAC is that of the first member and that the
+	 * other member's MAC hasn't been changed.
+	 */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
 			test_params->bonded_port_id);
@@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->member_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * Stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 
 	/* Set explicit MAC address */
@@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_member_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, member_count, primary_port;
 
 	burst_size = 21;
 
@@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
 
 
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify that current and active member counts are as expected */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).\n",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, (int)4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).\n",
+			member_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 members down and verify active member count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
-	 *  has changed */
+	/*
+	 * Bring primary port down, verify that active member count is 3 and primary
+	 *  has changed.
+	 */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
 			"Primary port not as expected");
 	rte_delay_us(500000);
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary member */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
 		rte_delay_us(11000);
 	}
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
 		if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
 				burst_size)
 			return -1;
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-				test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+				test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ALB_SLAVE_COUNT	2
+#define TEST_ALB_MEMBER_COUNT	2
 
 static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
 static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
 	struct rte_ether_hdr *eth_pkt;
 	struct rte_arp_hdr *arp_pkt;
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int member_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *member_mac1, *member_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
-			slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count;
+			member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
 			RTE_ARP_OP_REPLY);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
 
-	slave_mac1 =
-			rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 =
-			rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	member_mac1 =
+			rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+	member_mac2 =
+			rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
 
 	/*
 	 * Checking if packets are properly distributed on bonding ports. Packets
 	 * 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (member_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(member_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(member_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+	int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *member_mac1, *member_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	/*
@@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
 
-	slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+	member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
 
 	/*
-	 * Checking if update ARP packets were properly send on slave ports.
+	 * Checking if update ARP packets were properly sent on member ports.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+				test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
 		nb_pkts_sum += nb_pkts;
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (member_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(member_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(member_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int member_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
 	arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
 	/*
 	 * Checking if VLAN headers in generated ARP Update packet are correct.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
 	retval = 0;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	burst_size = 32;
 
@@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite  = {
 	.unit_test_cases = {
 		TEST_CASE(test_create_bonded_device),
 		TEST_CASE(test_create_bonded_device_with_invalid_params),
-		TEST_CASE(test_add_slave_to_bonded_device),
-		TEST_CASE(test_add_slave_to_invalid_bonded_device),
-		TEST_CASE(test_remove_slave_from_bonded_device),
-		TEST_CASE(test_remove_slave_from_invalid_bonded_device),
-		TEST_CASE(test_get_slaves_from_bonded_device),
-		TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
-		TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+		TEST_CASE(test_add_member_to_bonded_device),
+		TEST_CASE(test_add_member_to_invalid_bonded_device),
+		TEST_CASE(test_remove_member_from_bonded_device),
+		TEST_CASE(test_remove_member_from_invalid_bonded_device),
+		TEST_CASE(test_get_members_from_bonded_device),
+		TEST_CASE(test_add_already_bonded_member_to_bonded_device),
+		TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
 		TEST_CASE(test_start_bonded_device),
 		TEST_CASE(test_stop_bonded_device),
 		TEST_CASE(test_set_bonding_mode),
-		TEST_CASE(test_set_primary_slave),
+		TEST_CASE(test_set_primary_member),
 		TEST_CASE(test_set_explicit_bonded_mac),
 		TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
 		TEST_CASE(test_status_interrupt),
-		TEST_CASE(test_adding_slave_after_bonded_device_started),
+		TEST_CASE(test_adding_member_after_bonded_device_started),
 		TEST_CASE(test_roundrobin_tx_burst),
-		TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
-		TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
-		TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+		TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
+		TEST_CASE(test_roundrobin_rx_burst_on_single_member),
+		TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
 		TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
 		TEST_CASE(test_roundrobin_verify_mac_assignment),
-		TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
-		TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+		TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
+		TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
 		TEST_CASE(test_activebackup_tx_burst),
 		TEST_CASE(test_activebackup_rx_burst),
 		TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
 		TEST_CASE(test_activebackup_verify_mac_assignment),
-		TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+		TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
 		TEST_CASE(test_balance_xmit_policy_configuration),
 		TEST_CASE(test_balance_l2_tx_burst),
 		TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
-		TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+		TEST_CASE(test_balance_tx_burst_member_tx_fail),
 		TEST_CASE(test_balance_rx_burst),
 		TEST_CASE(test_balance_verify_promiscuous_enable_disable),
 		TEST_CASE(test_balance_verify_mac_assignment),
-		TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
 		TEST_CASE(test_tlb_tx_burst),
 		TEST_CASE(test_tlb_rx_burst),
 		TEST_CASE(test_tlb_verify_mac_assignment),
 		TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
-		TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+		TEST_CASE(test_tlb_verify_member_link_status_change_failover),
 		TEST_CASE(test_alb_change_mac_in_reply_sent),
 		TEST_CASE(test_alb_reply_from_client),
 		TEST_CASE(test_alb_receive_vlan_reply),
 		TEST_CASE(test_alb_ipv4_tx),
 		TEST_CASE(test_broadcast_tx_burst),
-		TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+		TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
 		TEST_CASE(test_broadcast_rx_burst),
 		TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
 		TEST_CASE(test_broadcast_verify_mac_assignment),
-		TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b..2de907e7f3 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
 
 #define RX_RING_SIZE 1024
 #define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
 
 #define BONDED_DEV_NAME         ("net_bonding_m4_bond_dev")
 
-#define SLAVE_DEV_NAME_FMT      ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT      ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT      ("net_virt_%d_tx")
+#define MEMBER_DEV_NAME_FMT      ("net_virt_%d")
+#define MEMBER_RX_QUEUE_FMT      ("net_virt_%d_rx")
+#define MEMBER_TX_QUEUE_FMT      ("net_virt_%d_tx")
 
 #define INVALID_SOCKET_ID       (-1)
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr member_mac_default = {
 	{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
 };
 
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
 	{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
 };
 
-struct slave_conf {
+struct member_conf {
 	struct rte_ring *rx_queue;
 	struct rte_ring *tx_queue;
 	uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
 
 struct link_bonding_unittest_params {
 	uint8_t bonded_port_id;
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct member_conf member_ports[MEMBER_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
-#define TEST_DEFAULT_SLAVE_COUNT     RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT           TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT          TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT       TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_MEMBER_COUNT     RTE_DIM(test_params.member_ports)
+#define TEST_RX_MEMBER_COUT           TEST_DEFAULT_MEMBER_COUNT
+#define TEST_TX_MEMBER_COUNT          TEST_DEFAULT_MEMBER_COUNT
+#define TEST_MARKER_MEMBER_COUT       TEST_DEFAULT_MEMBER_COUNT
+#define TEST_EXPIRED_MEMBER_COUNT     TEST_DEFAULT_MEMBER_COUNT
+#define TEST_PROMISC_MEMBER_COUNT     TEST_DEFAULT_MEMBER_COUNT
 
 static struct link_bonding_unittest_params test_params  = {
 	.bonded_port_id = INVALID_PORT_ID,
-	.slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+	.member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
 
 	.mbuf_pool = NULL,
 };
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.member_ports, \
+		RTE_DIM(test_params.member_ports))
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test and satisfy given condition.
  *
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  * _condition condition that need to be checked
  */
 #define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
 	if (!!(_condition))
 
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a member of a bonded
  * device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  * */
-#define FOR_EACH_SLAVE(_i, _slave) \
-	FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_MEMBER(_i, _member) \
+	FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
 
 /*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from members TX queue.
+ * member port
  * buffer for packets
  * size size of buffer
  * return number of packets or negative error number
  */
 static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+	return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
 			size, NULL);
 }
 
 /*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into members RX queue.
+ * member port
  * buffer for packets
  * size number of packets to be injected
  * return number of queued packets or negative error number
  */
 static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+	return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
 			size, NULL);
 }
 
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
 }
 
 static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_member(struct member_conf *member, uint8_t start)
 {
 	struct rte_ether_addr addr, addr_check;
 	int retval;
 
 	/* Some sanity check */
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
-	RTE_VERIFY(slave->bonded == 0);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(test_params.member_ports <= member &&
+		member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
+	RTE_VERIFY(member->bonded == 0);
+	RTE_VERIFY(member->port_id != INVALID_PORT_ID);
 
-	rte_ether_addr_copy(&slave_mac_default, &addr);
-	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+	rte_ether_addr_copy(&member_mac_default, &addr);
+	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
 
-	rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+	rte_eth_dev_mac_addr_remove(member->port_id, &addr);
 
-	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
-		"Failed to set slave MAC address");
+	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
+		"Failed to set member MAC address");
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
-		slave->port_id),
-			"Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
-			(uint8_t)(slave - test_params.slave_ports), slave->port_id,
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
+		member->port_id),
+			"Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
+			(uint8_t)(member - test_params.member_ports), member->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 1;
+	member->bonded = 1;
 	if (start) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
-			"Failed to start slave %u", slave->port_id);
+		TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
+			"Failed to start member %u", member->port_id);
 	}
 
-	retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
-	TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+	retval = rte_eth_macaddr_get(member->port_id, &addr_check);
+	TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
 			    strerror(-retval));
 	TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
-			"Slave MAC address is not as expected");
+			"Member MAC address is not as expected");
 
-	RTE_VERIFY(slave->lacp_parnter_state == 0);
+	RTE_VERIFY(member->lacp_parnter_state == 0);
 	return 0;
 }
 
 static int
-remove_slave(struct slave_conf *slave)
+remove_member(struct member_conf *member)
 {
-	ptrdiff_t slave_idx = slave - test_params.slave_ports;
+	ptrdiff_t member_idx = member - test_params.member_ports;
 
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+	RTE_VERIFY(test_params.member_ports <= member &&
+		member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
 
-	RTE_VERIFY(slave->bonded == 1);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(member->bonded == 1);
+	RTE_VERIFY(member->port_id != INVALID_PORT_ID);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+		"Member %u tx queue not empty while removing from bonding.",
+		member->port_id);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+		"Member %u tx queue not empty while removing from bonding.",
+		member->port_id);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
-			slave->port_id), 0,
-			"Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
-			(uint8_t)slave_idx, slave->port_id,
+	TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
+			member->port_id), 0,
+			"Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
+			(uint8_t)member_idx, member->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 0;
-	slave->lacp_parnter_state = 0;
+	member->bonded = 0;
+	member->lacp_parnter_state = 0;
 	return 0;
 }
 
 static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
 	slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
 	RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
 
-	lacpdu_rx_count[slave_id]++;
+	lacpdu_rx_count[member_id]++;
 	rte_pktmbuf_free(lacp_pkt);
 }
 
 static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
 {
 	uint8_t i;
 	int ret;
 
 	RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
 
-	for (i = 0; i < slave_count; i++) {
-		TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+	for (i = 0; i < member_count; i++) {
+		TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
 			"Failed to add port %u to bonded device.\n",
-			test_params.slave_ports[i].port_id);
+			test_params.member_ports[i].port_id);
 	}
 
 	/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	int retval;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	uint16_t i;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params.bonded_port_id);
 
-	FOR_EACH_SLAVE(i, slave)
-		remove_slave(slave);
+	FOR_EACH_MEMBER(i, member)
+		remove_member(member);
 
-	retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
-		RTE_DIM(slaves));
+	retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
+		RTE_DIM(members));
 
 	TEST_ASSERT_EQUAL(retval, 0,
-		"Expected bonded device %u have 0 slaves but returned %d.",
+		"Expected bonded device %u have 0 members but returned %d.",
 			test_params.bonded_port_id, retval);
 
-	FOR_EACH_PORT(i, slave) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+	FOR_EACH_PORT(i, member) {
+		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
 				"Failed to stop bonded port %u",
-				slave->port_id);
+				member->port_id);
 
-		TEST_ASSERT(slave->bonded == 0,
-			"Port id=%u is still marked as enslaved.", slave->port_id);
+		TEST_ASSERT(member->bonded == 0,
+			"Port id=%u is still marked as enmemberd.", member->port_id);
 	}
 
 	return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
 {
 	int retval, nb_mbuf_per_pool;
 	char name[RTE_ETH_NAME_MAX_LEN];
-	struct slave_conf *port;
+	struct member_conf *port;
 	const uint8_t socket_id = rte_socket_id();
 	uint16_t i;
 
@@ -400,10 +400,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(i, port) {
-		port = &test_params.slave_ports[i];
+		port = &test_params.member_ports[i];
 
 		if (port->rx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
 		}
 
 		if (port->tx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
 		}
 
 		if (port->port_id == INVALID_PORT_ID) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
 			TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
 			retval = rte_eth_from_rings(name, &port->rx_queue, 1,
 					&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
  * frame but not LACP
  */
 static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 	/* Change source address to partner address */
 	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
 	slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		member->port_id;
 
 	lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
 	/* Save last received state */
-	slave->lacp_parnter_state = lacp->actor.state;
+	member->lacp_parnter_state = lacp->actor.state;
 	/* Change it into LACP replay by matching parameters. */
 	memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
 		sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 }
 
 /*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from given member, searches for LACP packets and replies to them.
  *
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives burst of packets from member. Looks for LACP packet. Drops
  * all other packets. Prepares response LACP and sends it back.
  *
  * return number of LACP received and replied, -1 on error.
  */
 static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct member_conf *member)
 {
 	int retval;
 	struct rte_mbuf *rx_buf[MAX_PKT_BURST];
 	struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
 	uint16_t lacp_tx_buf_cnt = 0, i;
 
-	retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
-	TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
-			slave->port_id);
+	retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
+	TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
+			member->port_id);
 
 	for (i = 0; i < (uint16_t)retval; i++) {
-		if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+		if (make_lacp_reply(member, rx_buf[i]) == 0) {
 			/* reply with actor's LACP */
 			lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
 		} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
 	if (lacp_tx_buf_cnt == 0)
 		return 0;
 
-	retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+	retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
 	if (retval <= lacp_tx_buf_cnt) {
 		/* retval might be negative */
 		for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
 	}
 
 	TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
-		"Failed to equeue lacp packets into slave %u tx queue.",
-		slave->port_id);
+		"Failed to equeue lacp packets into member %u tx queue.",
+		member->port_id);
 
 	return lacp_tx_buf_cnt;
 }
 
 /*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if given member tx queue contains packets that make mode 4
+ * handshake complete. It will drain the member queue.
  * return 0 if handshake not completed, 1 if handshake was complete,
  */
 static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct member_conf *member)
 {
 	const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
 			STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
 
-	return slave->lacp_parnter_state == expected_state;
+	return member->lacp_parnter_state == expected_state;
 }
 
 static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
 static int
 bond_handshake(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	struct rte_mbuf *buf[MAX_PKT_BURST];
 	uint16_t nb_pkts;
-	uint8_t all_slaves_done, i, j;
-	uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+	uint8_t all_members_done, i, j;
+	uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
 	const unsigned delay = bond_get_update_timeout_ms();
 
 	/* Exchange LACP frames */
-	all_slaves_done = 0;
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	all_members_done = 0;
+	for (i = 0; i < 30 && all_members_done == 0; ++i) {
 		rte_delay_ms(delay);
 
-		all_slaves_done = 1;
-		FOR_EACH_SLAVE(j, slave) {
-			/* If response already send, skip slave */
+		all_members_done = 1;
+		FOR_EACH_MEMBER(j, member) {
+			/* If response already sent, skip member */
 			if (status[j] != 0)
 				continue;
 
-			if (bond_handshake_reply(slave) < 0) {
-				all_slaves_done = 0;
+			if (bond_handshake_reply(member) < 0) {
+				all_members_done = 0;
 				break;
 			}
 
-			status[j] = bond_handshake_done(slave);
+			status[j] = bond_handshake_done(member);
 			if (status[j] == 0)
-				all_slaves_done = 0;
+				all_members_done = 0;
 		}
 
 		nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
 		TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 	}
 	/* If response didn't send - report failure */
-	TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+	TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
 
 	/* If flags doesn't match - report failure */
-	return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+	return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
 }
 
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_MEMBER_COUT RTE_DIM(test_params.member_ports)
 static int
 test_mode4_lacp(void)
 {
 	int retval;
 
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	/* Test LACP handshake function */
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
 {
 	int retval;
 	/* Test and verify for Stable mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_STABLE,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 
 	/* test and verify for Bandwidth mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	/* test and verify selection for count mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_COUNT,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
 }
 
 static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct member_conf *member,
 			struct rte_ether_addr *src_mac,
 			struct rte_ether_addr *dst_mac, uint16_t count)
 {
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
 	if (retval != (int)count)
 		return retval;
 
-	retval = slave_put_pkts(slave, pkts, count);
+	retval = member_put_pkts(member, pkts, count);
 	if (retval > 0 && retval != count)
 		free_pkts(&pkts[retval], count - retval);
 
 	TEST_ASSERT_EQUAL(retval, count,
-		"Failed to enqueue packets into slave %u RX queue", slave->port_id);
+		"Failed to enqueue packets into member %u RX queue", member->port_id);
 
 	return TEST_SUCCESS;
 }
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
 static int
 test_mode4_rx(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	uint16_t i, j;
 
 	uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
 	struct rte_ether_addr dst_mac;
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -838,7 +838,7 @@ test_mode4_rx(void)
 	dst_mac.addr_bytes[0] += 2;
 
 	/* First try with promiscuous mode enabled.
-	 * Add 2 packets to each slave. First with bonding MAC address, second with
+	 * Add 2 packets to each member. First with bonding MAC address, second with
 	 * different. Check if we received all of them. */
 	retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
 	TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
 			test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_MEMBER(i, member) {
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		/* Expect 2 packets per slave */
+		/* Expect 2 packets per member */
 		expected_pkts_cnt += 2;
 	}
 
@@ -894,16 +894,16 @@ test_mode4_rx(void)
 		test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_MEMBER(i, member) {
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		/* Expect only one packet per slave */
+		/* Expect only one packet per member */
 		expected_pkts_cnt += 1;
 	}
 
@@ -927,19 +927,19 @@ test_mode4_rx(void)
 	TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
 		"Expected %u packets but received only %d", expected_pkts_cnt, retval);
 
-	/* Link down test: simulate link down for first slave. */
+	/* Link down test: simulate link down for first member. */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t member_down_id = INVALID_PORT_ID;
 
-	/* Find first slave and make link down on it*/
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	/* Find the first member and set its link down */
+	FOR_EACH_MEMBER(i, member) {
+		rte_eth_dev_set_link_down(member->port_id);
+		member_down_id = member->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(member_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding */
 	for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
 
 	TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
 
-	/* Put packet to each slave */
-	FOR_EACH_SLAVE(i, slave) {
+	/* Put packet to each member */
+	FOR_EACH_MEMBER(i, member) {
 		void *pkt = NULL;
 
-		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
-		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
 		retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
 		if (retval > 0)
 			free_pkts(pkts, retval);
 
-		while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+		while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
 			rte_pktmbuf_free(pkt);
 
-		if (slave_down_id == slave->port_id)
+		if (member_down_id == member->port_id)
 			TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
 		else
 			TEST_ASSERT_NOT_EQUAL(retval, 0,
-				"Expected to receive some packets on slave %u.",
-				slave->port_id);
-		rte_eth_dev_start(slave->port_id);
+				"Expected to receive some packets on member %u.",
+				member->port_id);
+		rte_eth_dev_start(member->port_id);
 
 		for (j = 0; j < 5; j++) {
-			TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+			TEST_ASSERT(bond_handshake_reply(member) >= 0,
 				"Handshake after link up");
 
-			if (bond_handshake_done(slave) == 1)
+			if (bond_handshake_done(member) == 1)
 				break;
 		}
 
-		TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+		TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
 	}
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 	return TEST_SUCCESS;
 }
 
 static int
 test_mode4_tx_burst(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	uint16_t i, j;
 
 	uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
 		{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets were transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every member should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(member, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 		TEST_ASSERT_EQUAL(slow_cnt, 0,
-			"slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+			"member %u unexpectedly transmitted %d SLOW packets", member->port_id,
 			slow_cnt);
 
 		TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-			"slave %u did not transmitted any packets", slave->port_id);
+			"member %u did not transmit any packets", member->port_id);
 
 		pkts_cnt += normal_cnt;
 	}
@@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
 	TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
-	/* Link down test:
-	 * simulate link down for first slave. */
+	/*
+	 * Link down test:
+	 * simulate link down for first member.
+	 */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t member_down_id = INVALID_PORT_ID;
 
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	FOR_EACH_MEMBER(i, member) {
+		rte_eth_dev_set_link_down(member->port_id);
+		member_down_id = member->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(member_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding. */
 	for (i = 0; i < 3; i++) {
@@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets was transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every member should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(member, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 
-		if (slave_down_id == slave->port_id) {
+		if (member_down_id == member->port_id) {
 			TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
-				"slave %u enexpectedly transmitted %u packets",
-				normal_cnt + slow_cnt, slave->port_id);
+				"member %u unexpectedly transmitted %u packets",
+				member->port_id, normal_cnt + slow_cnt);
 		} else {
 			TEST_ASSERT_EQUAL(slow_cnt, 0,
-				"slave %u unexpectedly transmitted %d SLOW packets",
-				slave->port_id, slow_cnt);
+				"member %u unexpectedly transmitted %d SLOW packets",
+				member->port_id, slow_cnt);
 
 			TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-				"slave %u did not transmitted any packets", slave->port_id);
+				"member %u did not transmit any packets", member->port_id);
 		}
 
 		pkts_cnt += normal_cnt;
@@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
 	TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct member_conf *member)
 {
 	struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
 			struct marker_header *);
@@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 	rte_ether_addr_copy(&parnter_mac_default,
 			&marker_hdr->eth_hdr.src_addr);
 	marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		member->port_id;
 
 	marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
@@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 			offsetof(struct marker, reserved_90) -
 			offsetof(struct marker, requester_port);
 	RTE_VERIFY(marker_hdr->marker.info_length == 16);
-	marker_hdr->marker.requester_port = slave->port_id + 1;
+	marker_hdr->marker.requester_port = member->port_id + 1;
 	marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
 	marker_hdr->marker.terminator_length = 0;
 }
@@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 static int
 test_mode4_marker(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	struct rte_mbuf *marker_pkt;
 	struct marker_header *marker_hdr;
@@ -1196,7 +1198,7 @@ test_mode4_marker(void)
 	uint8_t i, j;
 	const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+	retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -1205,17 +1207,17 @@ test_mode4_marker(void)
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
 	delay = bond_get_update_timeout_ms();
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
-		init_marker(marker_pkt, slave);
+		init_marker(marker_pkt, member);
 
-		retval = slave_put_pkts(slave, &marker_pkt, 1);
+		retval = member_put_pkts(member, &marker_pkt, 1);
 		if (retval != 1)
 			rte_pktmbuf_free(marker_pkt);
 
 		TEST_ASSERT_EQUAL(retval, 1,
-			"Failed to send marker packet to slave %u", slave->port_id);
+			"Failed to send marker packet to member %u", member->port_id);
 
 		for (j = 0; j < 20; ++j) {
 			rte_delay_ms(delay);
@@ -1233,13 +1235,13 @@ test_mode4_marker(void)
 
 			/* Check if LACP packet was send by state machines
 			   First and only packet must be a maker response */
-			retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+			retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
 			if (retval == 0)
 				continue;
 			if (retval > 1)
 				free_pkts(pkts, retval);
 
-			TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+			TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
 			nb_pkts = retval;
 
 			marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1265,7 @@ test_mode4_marker(void)
 		TEST_ASSERT(j < 20, "Marker response not found");
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval,	"Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1272,7 +1274,7 @@ test_mode4_marker(void)
 static int
 test_mode4_expired(void)
 {
-	struct slave_conf *slave, *exp_slave = NULL;
+	struct member_conf *member, *exp_member = NULL;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	int retval;
 	uint32_t old_delay;
@@ -1282,7 +1284,7 @@ test_mode4_expired(void)
 
 	struct rte_eth_bond_8023ad_conf conf;
 
-	retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
 						      0);
 	/* Set custom timeouts to make test last shorter. */
 	rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1300,8 @@ test_mode4_expired(void)
 
 	/* Wait for new settings to be applied. */
 	for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
-		FOR_EACH_SLAVE(j, slave)
-			bond_handshake_reply(slave);
+		FOR_EACH_MEMBER(j, member)
+			bond_handshake_reply(member);
 
 		rte_delay_ms(conf.update_timeout_ms);
 	}
@@ -1307,13 +1309,13 @@ test_mode4_expired(void)
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	/* Find first slave */
-	FOR_EACH_SLAVE(i, slave) {
-		exp_slave = slave;
+	/* Find first member */
+	FOR_EACH_MEMBER(i, member) {
+		exp_member = member;
 		break;
 	}
 
-	RTE_VERIFY(exp_slave != NULL);
+	RTE_VERIFY(exp_member != NULL);
 
 	/* When one of partners do not send or respond to LACP frame in
 	 * conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1327,16 @@ test_mode4_expired(void)
 		TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
 			retval);
 
-		FOR_EACH_SLAVE(i, slave) {
-			retval = bond_handshake_reply(slave);
+		FOR_EACH_MEMBER(i, member) {
+			retval = bond_handshake_reply(member);
 			TEST_ASSERT(retval >= 0, "Handshake failed");
 
-			/* Remove replay for slave that suppose to be expired. */
-			if (slave == exp_slave) {
-				while (rte_ring_count(slave->rx_queue) > 0) {
+			/* Remove reply for the member that is supposed to expire. */
+			if (member == exp_member) {
+				while (rte_ring_count(member->rx_queue) > 0) {
 					void *pkt = NULL;
 
-					rte_ring_dequeue(slave->rx_queue, &pkt);
+					rte_ring_dequeue(member->rx_queue, &pkt);
 					rte_pktmbuf_free(pkt);
 				}
 			}
@@ -1348,17 +1350,17 @@ test_mode4_expired(void)
 			retval);
 	}
 
-	/* After test only expected slave should be in EXPIRED state */
-	FOR_EACH_SLAVE(i, slave) {
-		if (slave == exp_slave)
-			TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
-				"Slave %u should be in expired.", slave->port_id);
+	/* After test only expected member should be in EXPIRED state */
+	FOR_EACH_MEMBER(i, member) {
+		if (member == exp_member)
+			TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
+				"Member %u should be in EXPIRED state.", member->port_id);
 		else
-			TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
-				"Slave %u should be operational.", slave->port_id);
+			TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
+				"Member %u should be operational.", member->port_id);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
 	 *   . try to transmit lacpdu (should fail)
 	 *   . try to set collecting and distributing flags (should fail)
 	 * reconfigure w/external sm
-	 *   . transmit one lacpdu on each slave using new api
-	 *   . make sure each slave receives one lacpdu using the callback api
-	 *   . transmit one data pdu on each slave (should fail)
+	 *   . transmit one lacpdu on each member using new api
+	 *   . make sure each member receives one lacpdu using the callback api
+	 *   . transmit one data pdu on each member (should fail)
 	 *   . enable distribution and collection, send one data pdu each again
 	 */
 
 	int retval;
-	struct slave_conf *slave = NULL;
+	struct member_conf *member = NULL;
 	uint8_t i;
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < MEMBER_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]),
-				 "Slave should not allow manual LACP xmit");
+						member->port_id, lacp_tx_buf[i]),
+				 "Member should not allow manual LACP xmit");
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
 						test_params.bonded_port_id,
-						slave->port_id, 1),
-				 "Slave should not allow external state controls");
+						member->port_id, 1),
+				 "Member should not allow external state controls");
 	}
 
 	free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1430,13 +1432,13 @@ static int
 test_mode4_ext_lacp(void)
 {
 	int retval;
-	struct slave_conf *slave = NULL;
-	uint8_t all_slaves_done = 0, i;
+	struct member_conf *member = NULL;
+	uint8_t all_members_done = 0, i;
 	uint16_t nb_pkts;
 	const unsigned int delay = bond_get_update_timeout_ms();
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
-	struct rte_mbuf *buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
+	struct rte_mbuf *buf[MEMBER_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < MEMBER_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
 	for (i = 0; i < 30; ++i)
 		rte_delay_ms(delay);
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		retval = rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]);
+						member->port_id, lacp_tx_buf[i]);
 		TEST_ASSERT_SUCCESS(retval,
-				    "Slave should allow manual LACP xmit");
+				    "Member should allow manual LACP xmit");
 	}
 
 	nb_pkts = bond_tx(NULL, 0);
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
 
-	FOR_EACH_SLAVE(i, slave) {
-		nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
-		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+	FOR_EACH_MEMBER(i, member) {
+		nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
+		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
 				  nb_pkts, i);
-		slave_put_pkts(slave, buf, nb_pkts);
+		member_put_pkts(member, buf, nb_pkts);
 	}
 
 	nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 
 	/* wait for the periodic callback to run */
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	for (i = 0; i < 30 && all_members_done == 0; ++i) {
 		uint8_t s, total = 0;
 
 		rte_delay_ms(delay);
-		FOR_EACH_SLAVE(s, slave) {
-			total += lacpdu_rx_count[slave->port_id];
+		FOR_EACH_MEMBER(s, member) {
+			total += lacpdu_rx_count[member->port_id];
 		}
 
-		if (total >= SLAVE_COUNT)
-			all_slaves_done = 1;
+		if (total >= MEMBER_COUNT)
+			all_members_done = 1;
 	}
 
-	FOR_EACH_SLAVE(i, slave) {
-		TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
-				  "Slave port %u should have received 1 lacpdu (count=%u)",
-				  slave->port_id,
-				  lacpdu_rx_count[slave->port_id]);
+	FOR_EACH_MEMBER(i, member) {
+		TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
+				  "Member port %u should have received 1 lacpdu (count=%u)",
+				  member->port_id,
+				  lacpdu_rx_count[member->port_id]);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
 static int
 check_environment(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i, env_state;
-	uint16_t slaves[RTE_DIM(test_params.slave_ports)];
-	int slaves_count;
+	uint16_t members[RTE_DIM(test_params.member_ports)];
+	int members_count;
 
 	env_state = 0;
 	FOR_EACH_PORT(i, port) {
@@ -1540,20 +1542,20 @@ check_environment(void)
 			break;
 	}
 
-	slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
-			slaves, RTE_DIM(slaves));
+	members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
+			members, RTE_DIM(members));
 
-	if (slaves_count != 0)
+	if (members_count != 0)
 		env_state |= 0x10;
 
 	TEST_ASSERT_EQUAL(env_state, 0,
 		"Environment not clean (port %u):%s%s%s%s%s",
 		port->port_id,
-		env_state & 0x01 ? " slave rx queue not clean" : "",
-		env_state & 0x02 ? " slave tx queue not clean" : "",
-		env_state & 0x04 ? " port marked as enslaved" : "",
-		env_state & 0x80 ? " slave state is not reset" : "",
-		env_state & 0x10 ? " slave count not equal 0" : ".");
+		env_state & 0x01 ? " member rx queue not clean" : "",
+		env_state & 0x02 ? " member tx queue not clean" : "",
+		env_state & 0x04 ? " port still marked as a member" : "",
+		env_state & 0x80 ? " member state is not reset" : "",
+		env_state & 0x10 ? " member count not equal 0" : ".");
 
 
 	return TEST_SUCCESS;
@@ -1562,7 +1564,7 @@ check_environment(void)
 static int
 test_mode4_executor(int (*test_func)(void))
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	int test_result;
 	uint8_t i;
 	void *pkt;
@@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 
 		FOR_EACH_PORT(i, port) {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0..1f888b4771 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
 
 #define RXTX_RING_SIZE			1024
 #define RXTX_QUEUE_COUNT		4
 
 #define BONDED_DEV_NAME         ("net_bonding_rss")
 
-#define SLAVE_DEV_NAME_FMT      ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
+#define MEMBER_DEV_NAME_FMT      ("net_null%d")
+#define MEMBER_RXTX_QUEUE_FMT      ("rssconf_member%d_q%d")
 
 #define NUM_MBUFS 8191
 #define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-struct slave_conf {
+struct member_conf {
 	uint16_t port_id;
 	struct rte_eth_dev_info dev_info;
 
@@ -54,7 +54,7 @@ struct slave_conf {
 	uint8_t rss_key[40];
 	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
-	uint8_t is_slave;
+	uint8_t is_member;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
 };
 
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
 	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct member_conf member_ports[MEMBER_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
 static struct link_bonding_rssconf_unittest_params test_params  = {
 	.bond_port_id = INVALID_PORT_ID,
-	.slave_ports = {
-		[0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+	.member_ports = {
+		[0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
 	},
 	.mbuf_pool = NULL,
 };
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _port pointer to &test_params->member_ports[_i]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.member_ports, \
+		RTE_DIM(test_params.member_ports))
 
 static int
 configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
 }
 
 /**
- * Remove all slaves from bonding
+ * Remove all members from bonding
  */
 static int
-remove_slaves(void)
+remove_members(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct member_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+		port = &test_params.member_ports[n];
+		if (port->is_member) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
 					test_params.bond_port_id, port->port_id),
-					"Cannot remove slave %d from bonding", port->port_id);
-			port->is_slave = 0;
+					"Cannot remove member %d from bonding", port->port_id);
+			port->is_member = 0;
 		}
 	}
 
@@ -173,30 +173,30 @@ remove_slaves(void)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+	TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
 			"Failed to stop port %u", test_params.bond_port_id);
 	return TEST_SUCCESS;
 }
 
 /**
- * Add all slaves to bonding
+ * Add all members to bonding
  */
 static int
-bond_slaves(void)
+bond_members(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct member_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (!port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-					port->port_id), "Cannot attach slave %d to the bonding",
+		port = &test_params.member_ports[n];
+		if (!port->is_member) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+					port->port_id), "Cannot attach member %d to the bonding",
 					port->port_id);
-			port->is_slave = 1;
+			port->is_member = 1;
 		}
 	}
 
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
 }
 
 /**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if a member's RETA is synchronized with the bonding port. Returns 1 if member
  * port is synced with bonding port.
  */
 static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct member_conf *port)
 {
 	unsigned i;
 
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
 }
 
 /**
- * Fetch slaves RETA
+ * Fetch a member's RETA
  */
 static int
-slave_reta_fetch(struct slave_conf *port) {
+member_reta_fetch(struct member_conf *port) {
 	unsigned j;
 
 	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
 }
 
 /**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add a member to check if the member configuration is synced with
+ * the bonding port's values after adding a new member.
  */
 static int
-slave_remove_and_add(void)
+member_remove_and_add(void)
 {
-	struct slave_conf *port = &(test_params.slave_ports[0]);
+	struct member_conf *port = &(test_params.member_ports[0]);
 
-	/* 1. Remove first slave from bonding */
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
-			port->port_id), "Cannot remove slave #d from bonding");
+	/* 1. Remove first member from bonding */
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
+			port->port_id), "Cannot remove member #d from bonding");
 
-	/* 2. Change removed (ex-)slave and bonding configuration to different
+	/* 2. Change removed (ex-)member and bonding configuration to different
 	 *    values
 	 */
 	reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
 	bond_reta_fetch();
 
 	reta_set(port->port_id, 2, port->dev_info.reta_size);
-	slave_reta_fetch(port);
+	member_reta_fetch(port);
 
 	TEST_ASSERT(reta_check_synced(port) == 0,
-			"Removed slave didn't should be synchronized with bonding port");
+			"Removed member should not be synchronized with bonding port");
 
-	/* 3. Add (ex-)slave and check if configuration changed*/
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-			port->port_id), "Cannot add slave");
+	/* 3. Add (ex-)member and check if configuration changed */
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+			port->port_id), "Cannot add member");
 
 	bond_reta_fetch();
-	slave_reta_fetch(port);
+	member_reta_fetch(port);
 
 	return reta_check_synced(port);
 }
 
 /**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over members.
  */
 static int
 test_propagate(void)
 {
 	unsigned i;
 	uint8_t n;
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t bond_rss_key[40];
 	struct rte_eth_rss_conf bond_rss_conf;
 
@@ -349,18 +349,18 @@ test_propagate(void)
 
 			retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
 					&bond_rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
 
 			FOR_EACH_PORT(n, port) {
-				port = &test_params.slave_ports[n];
+				port = &test_params.member_ports[n];
 
 				retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						&port->rss_conf);
 				TEST_ASSERT_SUCCESS(retval,
-						"Cannot take slaves RSS configuration");
+						"Cannot take members RSS configuration");
 
 				TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
-						"Hash function not propagated for slave %d",
+						"Hash function not propagated for member %d",
 						port->port_id);
 			}
 
@@ -376,11 +376,11 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			memset(port->rss_conf.rss_key, 0, 40);
 			retval = rte_eth_dev_rss_hash_update(port->port_id,
 					&port->rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
 		}
 
 		memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
 		TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 
 			retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 					&(port->rss_conf));
 
 			TEST_ASSERT_SUCCESS(retval,
-					"Cannot take slaves RSS configuration");
+					"Cannot take members RSS configuration");
 
 			/* compare keys */
 			retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
 					sizeof(bond_rss_key));
-			TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+			TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
 					port->port_id);
 		}
 	}
@@ -416,10 +416,10 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					port->dev_info.reta_size);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
 		}
 
 		TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
 		bond_reta_fetch();
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 
-			slave_reta_fetch(port);
+			member_reta_fetch(port);
 			TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
 		}
 	}
@@ -459,29 +459,29 @@ test_rss(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
 
-	TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+	TEST_ASSERT(member_remove_and_add() == 1, "member remove and add failed.");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
 
 
 /**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over the bonded device and its members.
  */
 static int
 test_rss_config_lazy(void)
 {
 	struct rte_eth_rss_conf bond_rss_conf = {0};
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t rss_key[40];
 	uint64_t rss_hf;
 	int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
 		TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
 	}
 
-	/* Set all keys to zero for all slaves */
+	/* Set all keys to zero for all members */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.member_ports[n];
 		retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						       &port->rss_conf);
-		TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+		TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
 		memset(port->rss_key, 0, sizeof(port->rss_key));
 		port->rss_conf.rss_key = port->rss_key;
 		port->rss_conf.rss_key_len = sizeof(port->rss_key);
 		retval = rte_eth_dev_rss_hash_update(port->port_id,
 						     &port->rss_conf);
-		TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+		TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
 	}
 
 	/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
 	/*  Test RETA propagation */
 	for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					  port->dev_info.reta_size);
-			TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+			TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
 		}
 
 		retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
@@ -579,13 +579,13 @@ test_setup(void)
 	int retval;
 	int port_id;
 	char name[256];
-	struct slave_conf *port;
+	struct member_conf *port;
 	struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
 
 	if (test_params.mbuf_pool == NULL) {
 
 		test_params.mbuf_pool = rte_pktmbuf_pool_create(
-			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+			"RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
 			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
 
 		TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.member_ports[n];
 
 		port_id = rte_eth_dev_count_avail();
-		snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+		snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
 
 		retval = rte_vdev_init(name, "size=64,copy=0");
 		TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 	}
 
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214e..c06d1bc43c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
 ----------
 
 A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMDs are added as members to the bonded device.
+The VF is set as the primary member of the bonded device.
 
 A bridge must be set up on the Host connecting the tap device, which is the
 backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
 
    testpmd> create bonded device 1 0
    Created new bonded device net_bond_testpmd_0 on (port 2).
-   testpmd> add bonding slave 0 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding member 0 2
+   testpmd> add bonding member 1 2
    testpmd> show bonding config 2
 
 The syntax of the ``testpmd`` command is:
 
-set bonding primary (slave id) (port id)
+set bonding primary (member id) (port id)
 
 Set primary to P1 before starting bonding port.
 
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
 
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
 
 Use P2 only for forwarding.
 
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
    testpmd> start
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
 
 .. code-block:: console
 
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
 
    testpmd> clear port stats all
    testpmd> set bonding primary 0 2
-   testpmd> remove bonding slave 1 2
+   testpmd> remove bonding member 1 2
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
 
 .. code-block:: console
 
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
 
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
 
 .. code-block:: console
 
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
    testpmd> show port stats all.
    testpmd> show config fwd
    testpmd> show bonding config 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding member 1 2
    testpmd> set bonding primary 1 2
    testpmd> show bonding config 2
    testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
 
 .. code-block:: console
 
-   testpmd> remove bonding slave 0 2
+   testpmd> remove bonding member 0 2
    testpmd> show bonding config 2
    testpmd> port stop 0
    testpmd> port close 0
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a..43b2622022 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
 
 .. code-block:: console
 
-    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
-    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket_num=1 -- -i --port-topology=chained
+    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -- -i --port-topology=chained
 
 Vector Processing
 -----------------
diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
index 7c81b856b7..5a9271facf 100644
--- a/doc/guides/prog_guide/img/bond-mode-1.svg
+++ b/doc/guides/prog_guide/img/bond-mode-1.svg
@@ -53,7 +53,7 @@
      v:langID="1033"
      v:metric="true"
      v:viewMarkup="false"><v:userDefs><v:ud
-         v:nameU="msvSubprocessMaster"
+         v:nameU="msvSubprocessMain"
          v:prompt=""
          v:val="VT4(Rectangle)" /><v:ud
          v:nameU="msvNoAutoConnect"
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e35..519a364105 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
 The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
 ``rte_eth_dev`` ports of the same speed and duplex to provide similar
 capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (member) NICs into a single logical interface between a server
 and a switch. The new bonded PMD will then process these interfaces based on
 the mode of operation specified to provide support for features such as
 redundant links, fault tolerance and/or load balancing.
 
 The librte_net_bond library exports a C API which provides an API for the
 creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its member devices.
 
 .. note::
 
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides load balancing and fault tolerance by transmission of
-    packets in sequential order from the first available slave device through
+    packets in sequential order from the first available member device through
     the last. Packets are bulk dequeued from devices then serviced in a
     round-robin manner. This mode does not guarantee in order reception of
     packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Active Backup (Mode 1)
 
 
-    In this mode only one slave in the bond is active at any time, a different
-    slave becomes active if, and only if, the primary active slave fails,
-    thereby providing fault tolerance to slave failure. The single logical
+    In this mode only one member in the bond is active at any time, a different
+    member becomes active if, and only if, the primary active member fails,
+    thereby providing fault tolerance to member failure. The single logical
     bonded interface's MAC address is externally visible on only one NIC (port)
     to avoid confusing the network switch.
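
For illustration only (not part of this patch): a minimal sketch of preferring
one member as primary in active backup mode, using the existing bonding C API;
the port ids and the helper name are placeholders.

#include <rte_eth_bond.h>

/* Illustrative sketch: prefer one member in active backup (mode 1).
 * Failover to another member still happens automatically if the
 * primary goes down. */
static int
prefer_member(uint16_t bond_port, uint16_t preferred_member)
{
	return rte_eth_bond_primary_set(bond_port, preferred_member);
}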
 
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
     This mode provides transmit load balancing (based on the selected
     transmission policy) and fault tolerance. The default policy (layer2) uses
     a simple calculation based on the packet flow source and destination MAC
-    addresses as well as the number of active slaves available to the bonded
-    device to classify the packet to a specific slave to transmit on. Alternate
+    addresses as well as the number of active members available to the bonded
+    device to classify the packet to a specific member to transmit on. Alternate
     transmission policies supported are layer 2+3, this takes the IP source and
-    destination addresses into the calculation of the transmit slave port and
+    destination addresses into the calculation of the transmit member port and
     the final supported policy is layer 3+4, this uses IP source and
     destination addresses as well as the TCP/UDP source and destination port.
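
For illustration only (not part of this patch): a sketch of selecting the
layer 3+4 transmit policy described above; bond_port and the helper name are
placeholders.

#include <rte_eth_bond.h>

/* Illustrative sketch: switch the balance XOR transmit hash from the
 * default layer 2 policy to the layer 3+4 policy. */
static int
use_layer34_policy(uint16_t bond_port)
{
	return rte_eth_bond_xmit_policy_set(bond_port,
					    BALANCE_XMIT_POLICY_LAYER34);
}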
 
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Broadcast (Mode 3)
 
 
-    This mode provides fault tolerance by transmission of packets on all slave
+    This mode provides fault tolerance by transmission of packets on all member
     ports.
 
 *   **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
        intervals period of less than 100ms.
 
     #. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
-       where N is the number of slaves. This is a space required for LACP
+       where N is the number of members. This is a space required for LACP
        frames. Additionally LACP packets are included in the statistics, but
        they are not returned to the application.
 
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides an adaptive transmit load balancing. It dynamically
-    changes the transmitting slave, according to the computed load. Statistics
+    changes the transmitting member, according to the computed load. Statistics
     are collected in 100ms intervals and scheduled every 10ms.
 
 
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
 startup time during EAL initialization using the ``--vdev`` option as well as
 programmatically via the C API ``rte_eth_bond_create`` function.
 
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamic addition and removal of member devices using
+the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
 
-After a slave device is added to a bonded device slave is stopped using
+After a member device is added to a bonded device, the member is stopped using
 ``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
 the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
 device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+member and configured as well.
 Any flow which was configured to the bond device also is configured to the added
-slave.
+member.
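
For illustration only (not part of this patch): a minimal sketch of the
programmatic path with the renamed member API; the vdev name, member_ports[]
and the mode choice are placeholders, and error handling is reduced to early
returns.

#include <rte_lcore.h>
#include <rte_eth_bond.h>

/* Illustrative sketch: create a bonding port and attach members with
 * rte_eth_bond_member_add(). */
static int
create_bonding_port(const uint16_t member_ports[], unsigned int n_members)
{
	int bond_port;
	unsigned int i;

	/* Mode 2 (balance XOR) is chosen arbitrarily for the sketch. */
	bond_port = rte_eth_bond_create("net_bonding0", BONDING_MODE_BALANCE,
					rte_socket_id());
	if (bond_port < 0)
		return bond_port;

	/* Each member is stopped and reconfigured with the bonding
	 * device's configuration as it is added. */
	for (i = 0; i < n_members; i++)
		if (rte_eth_bond_member_add(bond_port, member_ports[i]) != 0)
			return -1;

	return bond_port;
}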
 
 Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all members are synchronized with its configuration. This mode is
+intended to make the RSS configuration of members transparent to the client
 application implementation.
 
 Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its members. This allows defining the meaning
 of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without referring to any individual member. It is required to ensure
 consistency and made it more error-proof.
 
 RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded members. The RETA size is the GCD of all members' RETA sizes, so
+it can be easily used as a pattern providing expected behavior, even if member
 RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the members and the device's default key is used.
 
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with the RSS configuration, there is flow consistency in the bonded members for the
 next rte flow operations:
 
 Validate:
-	- Validate flow for each slave, failure at least for one slave causes to
+	- Validate the flow for each member; a failure for at least one member causes
 	  bond validation failure.
 
 Create:
-	- Create the flow in all slaves.
-	- Save all the slaves created flows objects in bonding internal flow
+	- Create the flow in all members.
+	- Save all the members' created flow objects in the bonding internal flow
 	  structure.
-	- Failure in flow creation for existed slave rejects the flow.
-	- Failure in flow creation for new slaves in slave adding time rejects
-	  the slave.
+	- Failure in flow creation for an existing member rejects the flow.
+	- Failure in flow creation for a new member at member add time rejects
+	  the member.
 
 Destroy:
-	- Destroy the flow in all slaves and release the bond internal flow
+	- Destroy the flow in all members and release the bond internal flow
 	  memory.
 
 Flush:
-	- Destroy all the bonding PMD flows in all the slaves.
+	- Destroy all the bonding PMD flows in all the members.
 
 .. note::
 
-    Don't call slaves flush directly, It destroys all the slave flows which
+    Don't call the members' flush directly; it destroys all the member flows which
     may include external flows or the bond internal LACP flow.
 
 Query:
-	- Summarize flow counters from all the slaves, relevant only for
+	- Summarize flow counters from all the members, relevant only for
 	  ``RTE_FLOW_ACTION_TYPE_COUNT``.
 
 Isolate:
-	- Call to flow isolate for all slaves.
-	- Failure in flow isolation for existed slave rejects the isolate mode.
-	- Failure in flow isolation for new slaves in slave adding time rejects
-	  the slave.
+	- Call to flow isolate for all members.
+	- Failure in flow isolation for an existing member rejects the isolate mode.
+	- Failure in flow isolation for a new member at member add time rejects
+	  the member.
 
 All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to members).
 
 Link Status Change Interrupts / Polling
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
 Link bonding devices support the registration of a link status change callback,
 using the ``rte_eth_dev_callback_register`` API, this will be called when the
 status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 members, the link status will change to up when one member
+becomes active or change to down when all members become inactive. There is no
+callback notification when a single member changes state and the previous
+conditions are not met. If a user wishes to monitor individual members then they
+must register callbacks with that member directly.
 
 The link bonding library also supports devices which do not implement link
 status change interrupts, this is achieved by polling the devices link status at
 a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API; the default polling interval is 10ms. When a device is added as a member to
 a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
 whether the device supports interrupts or whether the link status should be
 monitored by polling it.
@@ -233,30 +233,30 @@ Requirements / Limitations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as members to the same bonded device. The bonded device
+inherits these attributes from the first active member added to the bonded
+device and then all further members added to the bonded device must support
 these parameters.
 
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one member before the bonding device
 itself can be started.
 
 To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all members be RSS-capable and support at least one
 common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all member devices support the same key size.
 
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how members process packets, once a device is added
 to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the member.
 
 Like all other PMD, all functions exported by a PMD are lock-free functions
 that are assumed not to be invoked in parallel on different logical cores to
 work on the same target object.
 
 It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on member devices after they have been added to a bonded device since
+packets read directly from the member device will no longer be available to the
 bonded device to read.
 
 Configuration
@@ -265,25 +265,25 @@ Configuration
 Link bonding devices are created using the ``rte_eth_bond_create`` API
 which requires a unique device name, the bonding mode,
 and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its member devices,
+its primary member, a user defined MAC address and transmission policy to use if
 the device is in balance XOR mode.
 
-Slave Devices
+Member Devices
-^^^^^^^^^^^^^
+^^^^^^^^^^^^^^
 
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
+of the same speed and duplex. Ethernet devices can be added as a member to a
+maximum of one bonded device. Member devices are reconfigured with the
 configuration of the bonded device on being added to a bonded device.
 
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the member device
+to its original value on removal of the member from it.
 
-Primary Slave
+Primary Member
-^^^^^^^^^^^^^
+^^^^^^^^^^^^^^
 
-The primary slave is used to define the default port to use when a bonded
+The primary member is used to define the default port to use when a bonded
 device is in active backup mode. A different port will only be used if, and
 only if, the current primary port goes down. If the user does not specify a
 primary port it will default to being the first port added to the bonded device.
@@ -292,14 +292,14 @@ MAC Address
 ^^^^^^^^^^^
 
 The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all of the member devices depending on the
 operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC; all other members will retain their
+original MAC address. In modes 0, 2, 3 and 4 all member devices are configured with
 the bonded devices MAC address.
 
 If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary member's MAC address.
 
 Balance XOR Transmit Policies
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
 *   **Layer 2:**   Ethernet MAC address based balancing is the default
     transmission policy for Balance XOR bonding mode. It uses a simple XOR
     calculation on the source MAC address and destination MAC address of the
-    packet and then calculate the modulus of this value to calculate the slave
+    packet and then calculates the modulus of this value to select the member
     device to transmit the packet on.
 
 *   **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
     combination of source/destination MAC addresses and the source/destination
-    IP addresses of the data packet to decide which slave port the packet will
+    IP addresses of the data packet to decide which member port the packet will
     be transmitted on.
 
 *   **Layer 3 + 4:**  IP Address & UDP Port based  balancing uses a combination
     of source/destination IP Address and the source/destination UDP ports of
-    the packet of the data packet to decide which slave port the packet will be
+    the data packet to decide which member port the packet will be
     transmitted on.
 
 All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
 which will be used must be setup using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup``.
 
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Member devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
+APIs, but at least one member device must be added to the link bonding device
 before it can be started using ``rte_eth_dev_start``.
 
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its members; if all
+member device links are down or if all members are removed from the link
 bonding device then the link status of the bonding device will go down.
 
 It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
     where X can be any combination of numbers and/or letters,
     and the name is no greater than 32 characters long.
 
-*   A least one slave device is provided with for each bonded device definition.
+*   At least one member device is provided for each bonded device definition.
 
 *   The operation mode of the bonded device being created is provided.
 
@@ -404,20 +404,20 @@ The different options are:
 
         mode=2
 
-*   slave: Defines the PMD device which will be added as slave to the bonded
+*   member: Defines the PMD device which will be added as a member to the bonded
     device. This option can be selected multiple times, for each device to be
-    added as a slave. Physical devices should be specified using their PCI
+    added as a member. Physical devices should be specified using their PCI
     address, in the format domain:bus:devid.function
 
 .. code-block:: console
 
-        slave=0000:0a:00.0,slave=0000:0a:00.1
+        member=0000:0a:00.0,member=0000:0a:00.1
 
-*   primary: Optional parameter which defines the primary slave port,
-    is used in active backup mode to select the primary slave for data TX/RX if
+*   primary: Optional parameter which defines the primary member port; it is
+    used in active backup mode to select the primary member for data TX/RX if
     it is available. The primary port also is used to select the MAC address to
-    use when it is not defined by the user. This defaults to the first slave
-    added to the device if it is specified. The primary device must be a slave
+    use when it is not defined by the user. This defaults to the first member
+    added to the device if none is specified. The primary device must be a member
     of the bonded device.
 
 .. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
         socket_id=0
 
 *   mac: Optional parameter to select a MAC address for link bonding device,
-    this overrides the value of the primary slave device.
+    this overrides the value of the primary member device.
 
 .. code-block:: console
 
@@ -474,29 +474,29 @@ The different options are:
 Examples of Usage
 ^^^^^^^^^^^^^^^^^
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two members specified by their PCI address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two members specified by their PCI address and an overriding MAC address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
 
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
 
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
 
 .. _bonding_testpmd_commands:
 
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
    testpmd> create bonded device 1 0
    created new bonded device (port X)
 
-add bonding slave
+add bonding member
-~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~
 
 Adds Ethernet device to a Link Bonding device::
 
-   testpmd> add bonding slave (slave id) (port id)
+   testpmd> add bonding member (member id) (port id)
 
 For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
 
-   testpmd> add bonding slave 6 10
+   testpmd> add bonding member 6 10
 
 
-remove bonding slave
+remove bonding member
-~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~
 
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet member device from a Link Bonding device::
 
-   testpmd> remove bonding slave (slave id) (port id)
+   testpmd> remove bonding member (member id) (port id)
 
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove an Ethernet member device (port 6) from a Link Bonding device (port 10)::
 
-   testpmd> remove bonding slave 6 10
+   testpmd> remove bonding member 6 10
 
 set bonding mode
 ~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
 set bonding primary
 ~~~~~~~~~~~~~~~~~~~
 
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet member device as the primary device on a Link Bonding device::
 
-   testpmd> set bonding primary (slave id) (port id)
+   testpmd> set bonding primary (member id) (port id)
 
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
 
    testpmd> set bonding primary 6 10
 
@@ -590,7 +590,7 @@ set bonding mon_period
 
 Set the link status monitoring polling period in milliseconds for a bonding device.
 
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD member devices which do not support link status interrupts.
 When the mon_period is set to a value greater than 0 then all PMD's which do not support
 link status ISR will be queried every polling interval to check if their link status has changed::
 
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
 set bonding lacp dedicated_queue
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
 when in mode 4 (link-aggregation-802.3ad)::
 
    testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
    testpmd> show bonding config (port id)
 
 For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
 in balance mode with a transmission policy of layer 2+3::
 
    testpmd> show bonding config 9
      - Dev basic:
         Bonding mode: BALANCE(2)
         Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
-        Slaves (3): [1 3 4]
-        Active Slaves (3): [1 3 4]
+        Members (3): [1 3 4]
+        Active Members (3): [1 3 4]
         Primary: [3]
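
For reference, a minimal C sketch of the configuration flow described in the
guide above, using the member API introduced by this series. The function
name, queue sizes and active backup mode are illustrative only and error
handling is abbreviated; this is not part of the patch itself.

#include <rte_ethdev.h>
#include <rte_eth_bond.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Create a bonded device, attach two member ports and start it. */
static int
setup_bonded_port(uint16_t member0, uint16_t member1, struct rte_mempool *mp)
{
	struct rte_eth_conf conf = { 0 };
	int bond_port;

	bond_port = rte_eth_bond_create("net_bonding0",
			BONDING_MODE_ACTIVE_BACKUP, (uint8_t)rte_socket_id());
	if (bond_port < 0)
		return bond_port;

	/* Configure the bonded device first; members inherit this
	 * configuration when they are added. */
	if (rte_eth_dev_configure(bond_port, 1, 1, &conf) != 0)
		return -1;
	if (rte_eth_rx_queue_setup(bond_port, 0, 128, rte_socket_id(),
			NULL, mp) != 0)
		return -1;
	if (rte_eth_tx_queue_setup(bond_port, 0, 512, rte_socket_id(),
			NULL) != 0)
		return -1;

	/* rte_eth_bond_member_add() replaces rte_eth_bond_slave_add(). */
	if (rte_eth_bond_member_add(bond_port, member0) != 0 ||
	    rte_eth_bond_member_add(bond_port, member1) != 0)
		return -1;
	rte_eth_bond_primary_set(bond_port, member0);

	return rte_eth_dev_start(bond_port);
}
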
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada0..1fe85839ed 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
 	cmdline_fixed_string_t set;
 	cmdline_fixed_string_t bonding;
 	cmdline_fixed_string_t primary;
-	portid_t slave_id;
+	portid_t member_id;
 	portid_t port_id;
 };
 
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
 	struct cmd_set_bonding_primary_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* Set the primary slave for a bonded device. */
-	if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
-		fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
-			master_port_id);
+	/* Set the primary member for a bonded device. */
+	if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
+		fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
+			main_port_id);
 		return;
 	}
 	init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
 static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
 	TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
 		primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_member =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
-		slave_id, RTE_UINT16);
+		member_id, RTE_UINT16);
 static cmdline_parse_token_num_t cmd_setbonding_primary_port =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
 		port_id, RTE_UINT16);
 
 static cmdline_parse_inst_t cmd_set_bonding_primary = {
 	.f = cmd_set_bonding_primary_parsed,
-	.help_str = "set bonding primary <slave_id> <port_id>: "
-		"Set the primary slave for port_id",
+	.help_str = "set bonding primary <member_id> <port_id>: "
+		"Set the primary member for port_id",
 	.data = NULL,
 	.tokens = {
 		(void *)&cmd_setbonding_primary_set,
 		(void *)&cmd_setbonding_primary_bonding,
 		(void *)&cmd_setbonding_primary_primary,
-		(void *)&cmd_setbonding_primary_slave,
+		(void *)&cmd_setbonding_primary_member,
 		(void *)&cmd_setbonding_primary_port,
 		NULL
 	}
 };
 
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD MEMBER *** */
+struct cmd_add_bonding_member_result {
 	cmdline_fixed_string_t add;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t member;
+	portid_t member_id;
 	portid_t port_id;
 };
 
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_member_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_add_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_add_bonding_member_result *res = parsed_result;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* add the slave for a bonded device. */
-	if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+	/* add the member for a bonded device. */
+	if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to add slave %d to master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to add member %d to main port = %d.\n",
+			member_port_id, main_port_id);
 		return;
 	}
-	ports[master_port_id].update_conf = 1;
+	ports[main_port_id].update_conf = 1;
 	init_port_config();
-	set_port_slave_flag(slave_port_id);
+	set_port_member_flag(member_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_add =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
 		add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_member =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
+		member, "member");
+static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
+		member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_member_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
-	.f = cmd_add_bonding_slave_parsed,
-	.help_str = "add bonding slave <slave_id> <port_id>: "
-		"Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_member = {
+	.f = cmd_add_bonding_member_parsed,
+	.help_str = "add bonding member <member_id> <port_id>: "
+		"Add a member device to a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_addbonding_slave_add,
-		(void *)&cmd_addbonding_slave_bonding,
-		(void *)&cmd_addbonding_slave_slave,
-		(void *)&cmd_addbonding_slave_slaveid,
-		(void *)&cmd_addbonding_slave_port,
+		(void *)&cmd_addbonding_member_add,
+		(void *)&cmd_addbonding_member_bonding,
+		(void *)&cmd_addbonding_member_member,
+		(void *)&cmd_addbonding_member_memberid,
+		(void *)&cmd_addbonding_member_port,
 		NULL
 	}
 };
 
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE MEMBER *** */
+struct cmd_remove_bonding_member_result {
 	cmdline_fixed_string_t remove;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t member;
+	portid_t member_id;
 	portid_t port_id;
 };
 
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_member_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_remove_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_remove_bonding_member_result *res = parsed_result;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* remove the slave from a bonded device. */
-	if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+	/* remove the member from a bonded device. */
+	if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to remove slave %d from master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to remove member %d from main port = %d.\n",
+			member_port_id, main_port_id);
 		return;
 	}
 	init_port_config();
-	clear_port_slave_flag(slave_port_id);
+	clear_port_member_flag(member_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_remove =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
 		remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_member =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
+		member, "member");
+static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
+		member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_member_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
-	.f = cmd_remove_bonding_slave_parsed,
-	.help_str = "remove bonding slave <slave_id> <port_id>: "
-		"Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_member = {
+	.f = cmd_remove_bonding_member_parsed,
+	.help_str = "remove bonding member <member_id> <port_id>: "
+		"Remove a member device from a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_removebonding_slave_remove,
-		(void *)&cmd_removebonding_slave_bonding,
-		(void *)&cmd_removebonding_slave_slave,
-		(void *)&cmd_removebonding_slave_slaveid,
-		(void *)&cmd_removebonding_slave_port,
+		(void *)&cmd_removebonding_member_remove,
+		(void *)&cmd_removebonding_member_bonding,
+		(void *)&cmd_removebonding_member_member,
+		(void *)&cmd_removebonding_member_memberid,
+		(void *)&cmd_removebonding_member_port,
 		NULL
 	}
 };
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
 	},
 	{
 		&cmd_set_bonding_primary,
-		"set bonding primary (slave_id) (port_id)\n"
-		"	Set the primary slave for a bonded device.\n",
+		"set bonding primary (member_id) (port_id)\n"
+		"	Set the primary member for a bonded device.\n",
 	},
 	{
-		&cmd_add_bonding_slave,
-		"add bonding slave (slave_id) (port_id)\n"
-		"	Add a slave device to a bonded device.\n",
+		&cmd_add_bonding_member,
+		"add bonding member (member_id) (port_id)\n"
+		"	Add a member device to a bonded device.\n",
 	},
 	{
-		&cmd_remove_bonding_slave,
-		"remove bonding slave (slave_id) (port_id)\n"
-		"	Remove a slave device from a bonded device.\n",
+		&cmd_remove_bonding_member,
+		"remove bonding member (member_id) (port_id)\n"
+		"	Remove a member device from a bonded device.\n",
 	},
 	{
 		&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1..77892c0601 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
 #include "rte_eth_bond_8023ad.h"
 
 #define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS  100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS        3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS        1
+/** Maximum number of packets to one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_RX_PKTS        3
+/** Maximum number of LACP packets from one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_TX_PKTS        1
 /**
  * Timeouts definitions (5.4.4 in 802.1AX documentation).
  */
@@ -113,7 +113,7 @@ struct port {
 	enum rte_bond_8023ad_selection selected;
 
 	/** Indicates if either allmulti or promisc has been enforced on the
-	 * slave so that we can receive lacp packets
+	 * member so that we can receive lacp packets
 	 */
 #define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
 #define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
 	uint8_t external_sm;
 	struct rte_ether_addr mac_addr;
 
-	struct rte_eth_link slave_link;
-	/***< slave link properties */
+	struct rte_eth_link member_link;
+	/***< member link properties */
 
 	/**
 	 * Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
 /**
  * @internal
  *
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active members on bonded interface.
  *
  * @param dev Bonded interface
  * @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
 /**
  * @internal
  *
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and members.
  *
  * @param dev Bonded interface
  * @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
  *
  * Passes given slow packet to state machines management logic.
  * @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param member_id Member port id.
  * @param slot_pkt Slow packet.
  */
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				 uint16_t slave_id, struct rte_mbuf *pkt);
+				 uint16_t member_id, struct rte_mbuf *pkt);
 
 /**
  * @internal
  *
- * Appends given slave used slave
+ * Adds given member to 802.1AX mode.
  *
  * @param dev       Bonded interface.
- * @param port_id   Slave port ID to be added
+ * @param port_id   Member port ID to be added
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
 
 /**
  * @internal
  *
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given member from 802.1AX mode.
  *
  * @param dev       Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param member_num Position of member in active_members array
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
 
 /**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its members.
  * @param bond_dev Bonded device
  */
 void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port);
+		uint16_t member_port);
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
 
 int
 bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
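
As a reference for the dedicated queue fields in struct mode8023ad_private
above, a short sketch (the helper name is hypothetical) of how an application
opts in to the dedicated LACP control plane queues; this must be done while
the bonded port, running in mode 4, is still stopped.

#include <rte_eth_bond_8023ad.h>

/* Enable the extra Rx/Tx queue pair used for LACP frames and select the
 * bandwidth based aggregator selection mode. */
static int
enable_lacp_dedicated_queues(uint16_t bonded_port_id)
{
	int ret;

	ret = rte_eth_bond_8023ad_dedicated_queues_enable(bonded_port_id);
	if (ret != 0)
		return ret;

	return rte_eth_bond_8023ad_agg_selection_set(bonded_port_id,
			AGG_BANDWIDTH);
}
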
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4..93d03b0a79 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,8 +18,8 @@
 #include "eth_bond_8023ad_private.h"
 #include "rte_eth_bond_alb.h"
 
-#define PMD_BOND_SLAVE_PORT_KVARG			("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG		("primary")
+#define PMD_BOND_MEMBER_PORT_KVARG			("member")
+#define PMD_BOND_PRIMARY_MEMBER_KVARG		("primary")
 #define PMD_BOND_MODE_KVARG					("mode")
 #define PMD_BOND_AGG_MODE_KVARG				("agg_mode")
 #define PMD_BOND_XMIT_POLICY_KVARG			("xmit_policy")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
 /** Port Queue Mapping Structure */
 struct bond_rx_queue {
 	uint16_t queue_id;
-	/**< Next active_slave to poll */
-	uint16_t active_slave;
+	/**< Next active_member to poll */
+	uint16_t active_member;
 	/**< Queue Id */
 	struct bond_dev_private *dev_private;
 	/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
 	/**< Copy of TX configuration structure for queue */
 };
 
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
-	uint16_t slaves[RTE_MAX_ETHPORTS];	/**< Slave port id array */
-	uint16_t slave_count;				/**< Number of slaves */
+/** Bonded member devices structure */
+struct bond_ethdev_member_ports {
+	uint16_t members[RTE_MAX_ETHPORTS];	/**< Member port id array */
+	uint16_t member_count;				/**< Number of members */
 };
 
-struct bond_slave_details {
+struct bond_member_details {
 	uint16_t port_id;
 
 	uint8_t link_status_poll_enabled;
 	uint8_t link_status_wait_to_complete;
 	uint8_t last_link_status;
-	/**< Port Id of slave eth_dev */
+	/**< Port Id of member eth_dev */
 	struct rte_ether_addr persisted_mac_addr;
 
 	uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
 
 struct rte_flow {
 	TAILQ_ENTRY(rte_flow) next;
-	/* Slaves flows */
+	/* Members flows */
 	struct rte_flow *flows[RTE_MAX_ETHPORTS];
 	/* Flow description for synchronization */
 	struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
 };
 
 typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 /** Link Bonding PMD device private configuration Structure */
 struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
 	rte_spinlock_t lock;
 	rte_spinlock_t lsc_lock;
 
-	uint16_t primary_port;			/**< Primary Slave Port */
-	uint16_t current_primary_port;		/**< Primary Slave Port */
+	uint16_t primary_port;			/**< Primary Member Port */
+	uint16_t current_primary_port;		/**< Primary Member Port */
 	uint16_t user_defined_primary_port;
 	/**< Flag for whether primary port is user defined or not */
 
@@ -137,16 +137,16 @@ struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/
 
-	uint16_t active_slave_count;		/**< Number of active slaves */
-	uint16_t active_slaves[RTE_MAX_ETHPORTS];    /**< Active slave list */
+	uint16_t active_member_count;		/**< Number of active members */
+	uint16_t active_members[RTE_MAX_ETHPORTS];    /**< Active member list */
 
-	uint16_t slave_count;			/**< Number of bonded slaves */
-	struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
-	/**< Array of bonded slaves details */
+	uint16_t member_count;			/**< Number of bonded members */
+	struct bond_member_details members[RTE_MAX_ETHPORTS];
+	/**< Array of bonded members details */
 
 	struct mode8023ad_private mode4;
-	uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
-	/**< TLB active slaves send order */
+	uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
+	/**< TLB active members send order */
 	struct mode_alb_private mode6;
 
 	uint64_t rx_offload_capa;       /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
 
 	struct rte_kvargs *kvlist;
-	uint8_t slave_update_idx;
+	uint8_t member_update_idx;
 
 	bool kvargs_processing_is_done;
 
@@ -191,19 +191,21 @@ struct bond_dev_private {
 extern const struct eth_dev_ops default_dev_ops;
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
 int
 check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/*
+ * Search given member array to find position of given id.
+ * Return member pos or members_count if not found.
+ */
 static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
 
 	uint16_t pos;
-	for (pos = 0; pos < slaves_count; pos++) {
-		if (slave_id == slaves[pos])
+	for (pos = 0; pos < members_count; pos++) {
+		if (member_id == members[pos])
 			break;
 	}
 
@@ -217,13 +219,13 @@ int
 valid_bonded_port_id(uint16_t port_id);
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 int
 mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
 		struct rte_ether_addr *dst_mac_addr);
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
 
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id);
 
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id);
 
 int
 bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev);
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+member_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev);
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+member_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+member_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev);
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id);
+		uint16_t member_port_id);
 
 int
 bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		void *param, void *ret_param);
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_member_mode_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args);
 
 int
@@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
@@ -323,7 +325,7 @@ void
 bond_tlb_enable(struct bond_dev_private *internals);
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_member(struct bond_dev_private *internals);
 
 int
 bond_ethdev_stop(struct rte_eth_dev *eth_dev);
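
To illustrate the layer-2 balance policy that burst_xmit_l2_hash() above
handles, a conceptual sketch only: the driver uses its own hash, and this
simplified XOR/modulo version merely mirrors the behaviour documented in the
programmer's guide.

#include <stdint.h>
#include <rte_ether.h>

/* Pick the transmitting member from the XOR of source and destination
 * MAC addresses, modulo the number of members (member_count > 0). */
static inline uint16_t
l2_pick_member(const struct rte_ether_hdr *eth, uint16_t member_count)
{
	uint32_t hash = 0;
	unsigned int i;

	for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
		hash ^= eth->src_addr.addr_bytes[i] ^
			eth->dst_addr.addr_bytes[i];

	return hash % member_count;
}
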
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..b90242264d 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
  *
  * RTE Link Bonding Ethernet Device
  * Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * (member) NICs into a single logical interface. The bonded device processes
  * these interfaces based on the mode of operation specified and supported.
  * This implementation supports 4 modes of operation round robin, active backup
  * balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,28 @@ extern "C" {
 #define BONDING_MODE_ROUND_ROBIN		(0)
 /**< Round Robin (Mode 0).
  * In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active members of the bonded device in a round robin fashion.
+ */
 #define BONDING_MODE_ACTIVE_BACKUP		(1)
 /**< Active Backup (Mode 1).
  * In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
- * available if not specified. */
+ * member until such point as the primary member is no longer available and then
+ * transmitted packets will be sent on the next available members. The primary
+ * member can be defined by the user but defaults to the first active member
+ * available if not specified.
+ */
 #define BONDING_MODE_BALANCE			(2)
 /**< Balance (Mode 2).
  * In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * members using one of three available transmit policies - l2, l2+3 or l3+4.
  * See BALANCE_XMIT_POLICY macros definitions for further details on transmit
- * policies. */
+ * policies.
+ */
 #define BONDING_MODE_BROADCAST			(3)
 /**< Broadcast (Mode 3).
  * In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active members of the bonded device.
+ */
 #define BONDING_MODE_8023AD				(4)
 /**< 802.3AD (Mode 4).
  *
@@ -62,22 +66,22 @@ extern "C" {
  * be handled with the expected latency and this may cause the link status to be
  * incorrectly marked as down or failure to correctly negotiate with peers.
  * - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
- *
+ * to rx_burst should be at least 2 times the member count.
  */
 #define BONDING_MODE_TLB	(5)
 /**< Adaptive TLB (Mode 5)
  * This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
- * are collected in 100ms intervals and scheduled every 10ms */
+ * changes the transmitting member, according to the computed load. Statistics
+ * are collected in 100ms intervals and scheduled every 10ms.
+ */
 #define BONDING_MODE_ALB	(6)
 /**< Adaptive Load Balancing (Mode 6)
  * This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
  * bonding driver intercepts ARP replies send by local system and overwrites its
  * source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different member interfaces. When the local system sends an ARP request, it saves IP
  * information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of the member MACs is assigned and an ARP reply is sent to that peer.
  */
 
 /* Balance Mode Transmit Policies */
@@ -113,28 +117,44 @@ int
 rte_eth_bond_free(const char *name);
 
 /**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a member to the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+	return rte_eth_bond_member_add(bonded_port_id, member_port_id);
+}
 
 /**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a member rte_eth_dev device from the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+	return rte_eth_bond_member_remove(bonded_port_id, member_port_id);
+}
 
 /**
  * Set link bonding mode of bonded device
@@ -160,65 +180,83 @@ int
 rte_eth_bond_mode_get(uint16_t bonded_port_id);
 
 /**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set member rte_eth_dev as primary member of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
 
 /**
- * Get primary slave of bonded device
+ * Get primary member of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
  * @return
- *	Port Id of primary slave on success, -1 on failure
+ *	Port Id of primary member on success, -1 on failure
  */
 int
 rte_eth_bond_primary_get(uint16_t bonded_port_id);
 
 /**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with the list of member port IDs of the bonded device
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param members			Array to be populated with the current members
+ * @param len				Length of members array
  *
  * @return
- *	Number of slaves associated with bonded device on success,
+ *	Number of members associated with bonded device on success,
  *	negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
-			uint16_t len);
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len)
+{
+	return rte_eth_bond_members_get(bonded_port_id, members, len);
+}
 
 /**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with the list of active member port IDs of the bonded
  * device.
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param members			Array to be populated with the current active members
+ * @param len				Length of members array
  *
  * @return
- *	Number of active slaves associated with bonded device on success,
+ *	Number of active members associated with bonded device on success,
  *	negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
-				uint16_t len);
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len)
+{
+	return rte_eth_bond_active_members_get(bonded_port_id, members, len);
+}
 
 /**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its members.
  *
  * @param bonded_port_id	Port ID of bonded device.
  * @param mac_addr			MAC Address to use on bonded device overriding
- *							slaves MAC addresses
+ *							members' MAC addresses
  *
  * @return
  *	0 on success, negative value otherwise
@@ -228,8 +266,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 		struct rte_ether_addr *mac_addr);
 
 /**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary member on bonded device and its
+ * members.
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
@@ -266,7 +304,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
 
 /**
  * Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * member devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  * @param internal_ms		Monitoring interval in milliseconds
@@ -280,7 +318,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
 
 /**
  * Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of member devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
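
A short sketch of the tuning helpers declared in this header; the 100 ms
interval and the layer 3+4 policy below are arbitrary example values.

#include <rte_eth_bond.h>

static void
tune_bonded_port(uint16_t bonded_port_id)
{
	/* Poll members whose PMDs lack link status interrupts every 100 ms. */
	rte_eth_bond_link_monitoring_set(bonded_port_id, 100);

	/* Only takes effect when the device is in BONDING_MODE_BALANCE. */
	rte_eth_bond_xmit_policy_set(bonded_port_id, BALANCE_XMIT_POLICY_LAYER34);
}
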
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2ca..ac9f414e74 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
 #define MODE4_DEBUG(fmt, ...)				\
 	rte_log(RTE_LOG_DEBUG, bond_logtype,		\
 		"%6u [Port %u: %s] " fmt,		\
-		bond_dbg_get_time_diff_ms(), slave_id,	\
+		bond_dbg_get_time_diff_ms(), member_id,	\
 		__func__, ##__VA_ARGS__)
 
 static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
 }
 
 static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	uint8_t warnings;
 
 	do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
 
 	if (warnings & WRN_RX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+			     "Member %u: failed to enqueue LACP packet into RX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will notwork correctly",
-			     slave_id);
+			     member_id);
 	}
 
 	if (warnings & WRN_TX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+			     "Member %u: failed to enqueue LACP packet into TX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will not work correctly",
-			     slave_id);
+			     member_id);
 	}
 
 	if (warnings & WRN_RX_MARKER_TO_FAST)
-		RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Member %u: marker too early - ignoring.",
+			     member_id);
 
 	if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
 		RTE_BOND_LOG(INFO,
-			"Slave %u: ignoring unknown slow protocol frame type",
-			     slave_id);
+			"Member %u: ignoring unknown slow protocol frame type",
+			     member_id);
 	}
 
 	if (warnings & WRN_UNKNOWN_MARKER_TYPE)
-		RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
+			     member_id);
 
 	if (warnings & WRN_NOT_LACP_CAPABLE)
-		MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+		MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
 }
 
 static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
  * @param port			Port on which LACPDU was received.
  */
 static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t member_id,
 		struct lacpdu *lacp)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
 	uint64_t timeout;
 
 	if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
  * @param port			Port to handle state machine.
  */
 static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	/* Calculate if either site is LACP enabled */
 	uint64_t timeout;
 	uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port			Port to handle state machine.
  */
 static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 
 	/* Save current state for later use */
 	const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing started.",
-					internals->port_id, slave_id);
+					"Bond %u: member id %u distributing started.",
+					internals->port_id, member_id);
 			}
 		} else {
 			if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing stopped.",
-					internals->port_id, slave_id);
+					"Bond %u: member id %u distributing stopped.",
+					internals->port_id, member_id);
 			}
 		}
 	}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port
  */
 static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
 
 	struct rte_mbuf *lacp_pkt = NULL;
 	struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 
 	/* Source and destination MAC */
 	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
-	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+	rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
 	hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
 	lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 			return;
 		}
 	} else {
-		uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+		uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, 1);
-		pkts_sent = rte_eth_tx_burst(slave_id,
+		pkts_sent = rte_eth_tx_burst(member_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, pkts_sent);
 		if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
  * @param port_pos			Port to assign.
  */
 static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t member_id)
 {
 	struct port *agg, *port;
-	uint16_t slaves_count, new_agg_id, i, j = 0;
-	uint16_t *slaves;
+	uint16_t members_count, new_agg_id, i, j = 0;
+	uint16_t *members;
 	uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
 	uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
-	uint16_t default_slave = 0;
+	uint16_t default_member = 0;
 	struct rte_eth_link link_info;
 	uint16_t agg_new_idx = 0;
 	int ret;
 
-	slaves = internals->active_slaves;
-	slaves_count = internals->active_slave_count;
-	port = &bond_mode_8023ad_ports[slave_id];
+	members = internals->active_members;
+	members_count = internals->active_member_count;
+	port = &bond_mode_8023ad_ports[member_id];
 
 	/* Search for aggregator suitable for this port */
-	for (i = 0; i < slaves_count; ++i) {
-		agg = &bond_mode_8023ad_ports[slaves[i]];
+	for (i = 0; i < members_count; ++i) {
+		agg = &bond_mode_8023ad_ports[members[i]];
 		/* Skip ports that are not aggregators */
-		if (agg->aggregator_port_id != slaves[i])
+		if (agg->aggregator_port_id != members[i])
 			continue;
 
-		ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+		ret = rte_eth_link_get_nowait(members[i], &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slaves[i], rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				members[i], rte_strerror(-ret));
 			continue;
 		}
 		agg_count[i] += 1;
 		agg_bandwidth[i] += link_info.link_speed;
 
-		/* Actors system ID is not checked since all slave device have the same
+	/* Actors system ID is not checked since all member devices have the same
 		 * ID (MAC address). */
 		if ((agg->actor.key == port->actor.key &&
 			agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
 
 			if (j == 0)
-				default_slave = i;
+				default_member = i;
 			j++;
 		}
 	}
 
 	switch (internals->mode4.agg_selection) {
 	case AGG_COUNT:
-		agg_new_idx = max_index(agg_count, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_count, members_count);
+		new_agg_id = members[agg_new_idx];
 		break;
 	case AGG_BANDWIDTH:
-		agg_new_idx = max_index(agg_bandwidth, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_bandwidth, members_count);
+		new_agg_id = members[agg_new_idx];
 		break;
 	case AGG_STABLE:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_member == members_count)
+			new_agg_id = members[member_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = members[default_member];
 		break;
 	default:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_member == members_count)
+			new_agg_id = members[member_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = members[default_member];
 		break;
 	}
 
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 		MODE4_DEBUG("-> SELECTED: ID=%3u\n"
 			"\t%s aggregator ID=%3u\n",
 			port->aggregator_port_id,
-			port->aggregator_port_id == slave_id ?
+			port->aggregator_port_id == member_id ?
 				"aggregator not found, using default" : "aggregator found",
 			port->aggregator_port_id);
 	}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
 }
 
 static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt) {
 	struct lacpdu_header *lacp;
 	struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
 
 		partner = &lacp->lacpdu.partner;
-		port = &bond_mode_8023ad_ports[slave_id];
+		port = &bond_mode_8023ad_ports[member_id];
 		agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
 
 		if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 			/* This LACP frame is sending to the bonding port
 			 * so pass it to rx_machine.
 			 */
-			rx_machine(internals, slave_id, &lacp->lacpdu);
+			rx_machine(internals, member_id, &lacp->lacpdu);
 		} else {
 			char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
 			char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		}
 		rte_pktmbuf_free(lacp_pkt);
 	} else
-		rx_machine(internals, slave_id, NULL);
+		rx_machine(internals, member_id, NULL);
 }
 
 static void
 bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
-			uint16_t slave_id)
+			uint16_t member_id)
 {
 #define DEDICATED_QUEUE_BURST_SIZE 32
 	struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
-	uint16_t rx_count = rte_eth_rx_burst(slave_id,
+	uint16_t rx_count = rte_eth_rx_burst(member_id,
 				internals->mode4.dedicated_queues.rx_qid,
 				lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
 
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
 		uint16_t i;
 
 		for (i = 0; i < rx_count; i++)
-			bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+			bond_mode_8023ad_handle_slow_pkt(internals, member_id,
 					lacp_pkt[i]);
 	} else {
-		rx_machine_update(internals, slave_id, NULL);
+		rx_machine_update(internals, member_id, NULL);
 	}
 }
 
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	struct port *port;
 	struct rte_eth_link link_info;
-	struct rte_ether_addr slave_addr;
+	struct rte_ether_addr member_addr;
 	struct rte_mbuf *lacp_pkt = NULL;
-	uint16_t slave_id;
+	uint16_t member_id;
 	uint16_t i;
 
 
 	/* Update link status on each port */
-	for (i = 0; i < internals->active_slave_count; i++) {
+	for (i = 0; i < internals->active_member_count; i++) {
 		uint16_t key;
 		int ret;
 
-		slave_id = internals->active_slaves[i];
-		ret = rte_eth_link_get_nowait(slave_id, &link_info);
+		member_id = internals->active_members[i];
+		ret = rte_eth_link_get_nowait(member_id, &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_id, rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				member_id, rte_strerror(-ret));
 		}
 
 		if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			key = 0;
 		}
 
-		rte_eth_macaddr_get(slave_id, &slave_addr);
-		port = &bond_mode_8023ad_ports[slave_id];
+		rte_eth_macaddr_get(member_id, &member_addr);
+		port = &bond_mode_8023ad_ports[member_id];
 
 		key = rte_cpu_to_be_16(key);
 		if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			SM_FLAG_SET(port, NTT);
 		}
 
-		if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
-			rte_ether_addr_copy(&slave_addr, &port->actor.system);
-			if (port->aggregator_port_id == slave_id)
+		if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
+			rte_ether_addr_copy(&member_addr, &port->actor.system);
+			if (port->aggregator_port_id == member_id)
 				SM_FLAG_SET(port, NTT);
 		}
 	}
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		port = &bond_mode_8023ad_ports[member_id];
 
 		if ((port->actor.key &
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			if (retval != 0)
 				lacp_pkt = NULL;
 
-			rx_machine_update(internals, slave_id, lacp_pkt);
+			rx_machine_update(internals, member_id, lacp_pkt);
 		} else {
 			bond_mode_8023ad_dedicated_rxq_process(internals,
-					slave_id);
+					member_id);
 		}
 
-		periodic_machine(internals, slave_id);
-		mux_machine(internals, slave_id);
-		tx_machine(internals, slave_id);
-		selection_logic(internals, slave_id);
+		periodic_machine(internals, member_id);
+		mux_machine(internals, member_id);
+		tx_machine(internals, member_id);
+		selection_logic(internals, member_id);
 
 		SM_FLAG_CLR(port, BEGIN);
-		show_warnings(slave_id);
+		show_warnings(member_id);
 	}
 
 	rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
 }
 
 static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
 {
 	int ret;
 
-	ret = rte_eth_allmulticast_enable(slave_id);
+	ret = rte_eth_allmulticast_enable(member_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable allmulti mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			member_id, rte_strerror(-ret));
 	}
-	if (rte_eth_allmulticast_get(slave_id)) {
+	if (rte_eth_allmulticast_get(member_id)) {
 		RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     member_id);
+		bond_mode_8023ad_ports[member_id].forced_rx_flags =
 				BOND_8023AD_FORCED_ALLMULTI;
 		return 0;
 	}
 
-	ret = rte_eth_promiscuous_enable(slave_id);
+	ret = rte_eth_promiscuous_enable(member_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable promiscuous mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			member_id, rte_strerror(-ret));
 	}
-	if (rte_eth_promiscuous_get(slave_id)) {
+	if (rte_eth_promiscuous_get(member_id)) {
 		RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     member_id);
+		bond_mode_8023ad_ports[member_id].forced_rx_flags =
 				BOND_8023AD_FORCED_PROMISC;
 		return 0;
 	}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
 }
 
 static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
 {
 	int ret;
 
-	switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+	switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
 	case BOND_8023AD_FORCED_ALLMULTI:
-		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
-		ret = rte_eth_allmulticast_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
+		ret = rte_eth_allmulticast_disable(member_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable allmulti mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				member_id, rte_strerror(-ret));
 		break;
 
 	case BOND_8023AD_FORCED_PROMISC:
-		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
-		ret = rte_eth_promiscuous_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
+		ret = rte_eth_promiscuous_disable(member_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable promiscuous mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				member_id, rte_strerror(-ret));
 		break;
 
 	default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
 }
 
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
-				uint16_t slave_id)
+bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
+				uint16_t member_id)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	struct port_params initial = {
 			.system = { { 0 } },
 			.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	struct bond_tx_queue *bd_tx_q;
 	uint16_t q_id;
 
-	/* Given slave mus not be in active list */
-	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
-	internals->active_slave_count, slave_id) == internals->active_slave_count);
+	/* Given member must not be in the active list */
+	RTE_ASSERT(find_member_by_id(internals->active_members,
+	internals->active_member_count, member_id) == internals->active_member_count);
 	RTE_SET_USED(internals); /* used only for assert when enabled */
 
 	memcpy(&port->actor, &initial, sizeof(struct port_params));
 	/* Standard requires that port ID must be grater than 0.
 	 * Add 1 do get corresponding port_number */
-	port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+	port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
 
 	memcpy(&port->partner, &initial, sizeof(struct port_params));
 	memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	port->sm_flags = SM_FLAGS_BEGIN;
 
 	/* use this port as aggregator */
-	port->aggregator_port_id = slave_id;
+	port->aggregator_port_id = member_id;
 
-	if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
-		RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
-			     slave_id);
+	if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
+		RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
+			     member_id);
 	}
 
 	timer_cancel(&port->warning_timer);
@@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	RTE_ASSERT(port->rx_ring == NULL);
 	RTE_ASSERT(port->tx_ring == NULL);
 
-	socket_id = rte_eth_dev_socket_id(slave_id);
+	socket_id = rte_eth_dev_socket_id(member_id);
 	if (socket_id == -1)
 		socket_id = rte_socket_id();
 
 	element_size = sizeof(struct slow_protocol_frame) +
 				RTE_PKTMBUF_HEADROOM;
 
-	/* The size of the mempool should be at least:
-	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
-	total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+	/*
+	 * The size of the mempool should be at least:
+	 * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
+	 */
+	total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
 	for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
 		total_tx_desc += bd_tx_q->nb_tx_desc;
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
 	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
 		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
 			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	/* Any memory allocation failure in initialization is critical because
 	 * resources can't be free, so reinitialization is impossible. */
 	if (port->mbuf_pool == NULL) {
-		rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-			slave_id, mem_name, rte_strerror(rte_errno));
+		rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+			member_id, mem_name, rte_strerror(rte_errno));
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
 	port->rx_ring = rte_ring_create(mem_name,
-			rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_MEMBER_RX_PKTS), socket_id, 0);
 
 	if (port->rx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+		rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 
 	/* TX ring is at least one pkt longer to make room for marker packet. */
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
 	port->tx_ring = rte_ring_create(mem_name,
-			rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_MEMBER_TX_PKTS + 1), socket_id, 0);
 
 	if (port->tx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+		rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 }
 
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
-		uint16_t slave_id)
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
+		uint16_t member_id)
 {
 	void *pkt = NULL;
 	struct port *port = NULL;
 	uint8_t old_partner_state;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	ACTOR_STATE_CLR(port, AGGREGATION);
 	port->selected = UNSELECTED;
@@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
 	old_partner_state = port->partner_state;
 	record_default(port);
 
-	bond_mode_8023ad_unregister_lacp_mac(slave_id);
+	bond_mode_8023ad_unregister_lacp_mac(member_id);
 
 	/* If partner timeout state changes then disable timer */
 	if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1176,30 @@ void
 bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
-	struct rte_ether_addr slave_addr;
-	struct port *slave, *agg_slave;
-	uint16_t slave_id, i, j;
+	struct rte_ether_addr member_addr;
+	struct port *member, *agg_member;
+	uint16_t member_id, i, j;
 
 	bond_mode_8023ad_stop(bond_dev);
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		slave = &bond_mode_8023ad_ports[slave_id];
-		rte_eth_macaddr_get(slave_id, &slave_addr);
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		member = &bond_mode_8023ad_ports[member_id];
+		rte_eth_macaddr_get(member_id, &member_addr);
 
-		if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+		if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
 			continue;
 
-		rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+		rte_ether_addr_copy(&member_addr, &member->actor.system);
 		/* Do nothing if this port is not an aggregator. In other case
 		 * Set NTT flag on every port that use this aggregator. */
-		if (slave->aggregator_port_id != slave_id)
+		if (member->aggregator_port_id != member_id)
 			continue;
 
-		for (j = 0; j < internals->active_slave_count; j++) {
-			agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
-			if (agg_slave->aggregator_port_id == slave_id)
-				SM_FLAG_SET(agg_slave, NTT);
+		for (j = 0; j < internals->active_member_count; j++) {
+			agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
+			if (agg_member->aggregator_port_id == member_id)
+				SM_FLAG_SET(agg_member, NTT);
 		}
 	}
 
@@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	uint16_t i;
 
-	for (i = 0; i < internals->active_slave_count; i++)
-		bond_mode_8023ad_activate_slave(bond_dev,
-				internals->active_slaves[i]);
+	for (i = 0; i < internals->active_member_count; i++)
+		bond_mode_8023ad_activate_member(bond_dev,
+				internals->active_members[i]);
 
 	return 0;
 }
@@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
 
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				  uint16_t slave_id, struct rte_mbuf *pkt)
+				  uint16_t member_id, struct rte_mbuf *pkt)
 {
 	struct mode8023ad_private *mode4 = &internals->mode4;
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	struct marker_header *m_hdr;
 	uint64_t marker_timer, old_marker_timer;
 	int retval;
@@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 		} while (unlikely(retval == 0));
 
 		m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
-		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+		rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
 
 		if (internals->mode4.dedicated_queues.enabled == 0) {
 			if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 			}
 		} else {
 			/* Send packet directly to the slow queue */
-			uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+			uint16_t tx_count = rte_eth_tx_prepare(member_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, 1);
-			tx_count = rte_eth_tx_burst(slave_id,
+			tx_count = rte_eth_tx_burst(member_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, tx_count);
 			if (tx_count != 1) {
@@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 				goto free_out;
 			}
 		} else
-			rx_machine_update(internals, slave_id, pkt);
+			rx_machine_update(internals, member_id, pkt);
 	} else {
 		wrn = WRN_UNKNOWN_SLOW_TYPE;
 		goto free_out;
@@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 
 
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *info)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 	bond_dev = &rte_eth_devices[port_id];
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_member_by_id(internals->active_members,
+			internals->active_member_count, member_id) ==
+				internals->active_member_count)
 		return -EINVAL;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	info->selected = port->selected;
 
 	info->actor_state = port->actor_state;
@@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 }
 
 static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 		return -EINVAL;
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_member_by_id(internals->active_members,
+			internals->active_member_count, member_id) ==
+				internals->active_member_count)
 		return -EINVAL;
 
 	mode4 = &internals->mode4;
@@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 }
 
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, member_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	return ACTOR_STATE(port, DISTRIBUTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, member_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	return ACTOR_STATE(port, COLLECTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
 		return -EINVAL;
@@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 	struct mode8023ad_private *mode4 = &internals->mode4;
 	struct port *port;
 	void *pkt = NULL;
-	uint16_t i, slave_id;
+	uint16_t i, member_id;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		port = &bond_mode_8023ad_ports[member_id];
 
 		if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
 			struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 			/* This is LACP frame so pass it to rx callback.
 			 * Callback is responsible for freeing mbuf.
 			 */
-			mode4->slowrx_cb(slave_id, lacp_pkt);
+			mode4->slowrx_cb(member_id, lacp_pkt);
 		}
 	}
 
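For context on the hunks above: bond_mode_8023ad_periodic_cb() walks the active member
list, runs the rx/periodic/mux/tx/selection machines for each member and then re-arms
itself through the EAL alarm API. A stripped-down sketch of that self-rescheduling
pattern (illustration only, not part of this patch; the 100 ms period is an assumption,
the real code uses internals->mode4.update_timeout_us):

#include <rte_alarm.h>

static void
example_periodic_cb(void *arg)
{
	/* ... run rx/periodic/mux/tx/selection machines per member ... */

	/* Re-arm: fire again after the mode 4 update timeout (assumed 100 ms). */
	rte_eal_alarm_set(100 * 1000, example_periodic_cb, arg);
}
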
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00b..3144ee378a 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
 #define MARKER_TLV_TYPE_INFO                0x01
 #define MARKER_TLV_TYPE_RESP                0x02
 
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
 						  struct rte_mbuf *lacp_pkt);
 
 enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
 	uint16_t system_priority;
 	/**< System priority (unused in current implementation) */
 	struct rte_ether_addr system;
-	/**< System ID - Slave MAC address, same as bonding MAC address */
+	/**< System ID - Member MAC address, same as bonding MAC address */
 	uint16_t key;
 	/**< Speed information (implementation dependent) and duplex. */
 	uint16_t port_priority;
 	/**< Priority of this (unused in current implementation) */
 	uint16_t port_number;
-	/**< Port number. It corresponds to slave port id. */
+	/**< Port number. It corresponds to member port id. */
 } __rte_packed __rte_aligned(2);
 
 struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
 	enum rte_bond_8023ad_agg_selection agg_selection;
 };
 
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_member_info {
 	enum rte_bond_8023ad_selection selected;
 	uint8_t actor_state;
 	struct port_params actor;
@@ -184,104 +184,113 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 /**
  * @internal
  *
- * Function returns current state of given slave device.
+ * Function returns current state of given member device.
  *
- * @param slave_id  Port id of valid slave.
+ * @param member_id  Port id of valid member.
  * @param conf		buffer for configuration
  * @return
  *   0 - if ok
- *   -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ *   -EINVAL if conf is NULL or member id is invalid (not a member of given
  *       bonded device or is not inactive).
  */
+__rte_experimental
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *conf);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *conf)
+{
+	return rte_eth_bond_8023ad_member_info(port_id, member_id, conf);
+}
 
 #ifdef __cplusplus
 }
 #endif
 
 /**
- * Configure a slave port to start collecting.
+ * Configure a member port to start collecting.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @param enabled	Non-zero when collection enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled);
 
 /**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from member port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
 
 /**
- * Configure a slave port to start distributing.
+ * Configure a member port to start distributing.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @param enabled	Non-zero when distribution enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled);
 
 /**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from member port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
 
 /**
  * LACPDU transmit path for external 802.3ad state machine.  Caller retains
  * ownership of the packet on failure.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port ID of valid slave device.
+ * @param member_id	Port ID of valid member device.
  * @param lacp_pkt	mbuf containing LACPDU.
  *
  * @return
  *   0 on success, negative value otherwise.
  */
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt);
 
 /**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on members
  *
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each member for
  * dedicated 802.3ad control plane traffic . A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each member to redirect all LACP slow packets to that rx queue
  * for processing in the LACP state machine, this removes the need to filter
  * these packets in the bonded devices data path. The additional tx queue is
  * used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * member hw independently of the bonded devices data path.
  *
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all members must support the programming of the flow
  * filter rule required for rx and have enough queues that one rx and tx queue
  * can be reserved for the LACP state machines control packets.
  *
@@ -296,7 +305,7 @@ int
 rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
 
 /**
- * Disable slow queue on slaves
+ * Disable slow queue on members
  *
  * This function disables hardware slow packet filter.
  *
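The declarations above make up the renamed external 802.3ad query/control API. A minimal
usage sketch (illustration only, not part of this patch; the port ids are placeholders and
only fields visible in this header are used):

#include <stdio.h>
#include <rte_eth_bond_8023ad.h>

static void
dump_member_lacp_state(uint16_t bonded_port_id, uint16_t member_port_id)
{
	struct rte_eth_bond_8023ad_member_info info;

	/* Replaces the now deprecated rte_eth_bond_8023ad_slave_info(). */
	if (rte_eth_bond_8023ad_member_info(bonded_port_id, member_port_id,
			&info) != 0) {
		printf("port %u is not an active member of bond %u\n",
				member_port_id, bonded_port_id);
		return;
	}

	printf("member %u: selected=%d actor_state=0x%02x\n",
			member_port_id, info.selected, info.actor_state);
}

Since rte_eth_bond_8023ad_member_info() is marked __rte_experimental above, callers of the
renamed function would need to build with ALLOW_EXPERIMENTAL_API defined.
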
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a7971..56945e2349 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
 }
 
 static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_member(struct bond_dev_private *internals)
 {
 	uint16_t idx;
 
-	idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
-	internals->mode6.last_slave = idx;
-	return internals->active_slaves[idx];
+	idx = (internals->mode6.last_member + 1) % internals->active_member_count;
+	internals->mode6.last_member = idx;
+	return internals->active_members[idx];
 }
 
 int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
 	/* Fill hash table with initial values */
 	memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
 	rte_spinlock_init(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_member = ALB_NULL_INDEX;
 	internals->mode6.ntt = 0;
 
 	/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 	/*
 	 * We got reply for ARP Request send by the application. We need to
 	 * update client table when received data differ from what is stored
-	 * in ALB table and issue sending update packet to that slave.
+	 * in the ALB table and send an update packet to that member.
 	 */
 	rte_spinlock_lock(&internals->mode6.lock);
 	if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		client_info->cli_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_sha,
 				&client_info->cli_mac);
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->member_idx = calculate_member(internals);
+		rte_eth_macaddr_get(client_info->member_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
 						&arp->arp_data.arp_tha,
 						&client_info->cli_mac);
 				}
-				rte_eth_macaddr_get(client_info->slave_idx,
+				rte_eth_macaddr_get(client_info->member_idx,
 						&client_info->app_mac);
 				rte_ether_addr_copy(&client_info->app_mac,
 						&arp->arp_data.arp_sha);
 				memcpy(client_info->vlan, eth_h + 1, offset);
 				client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 				rte_spinlock_unlock(&internals->mode6.lock);
-				return client_info->slave_idx;
+				return client_info->member_idx;
 			}
 		}
 
-		/* Assign new slave to this client and update src mac in ARP */
+		/* Assign new member to this client and update src mac in ARP */
 		client_info->in_use = 1;
 		client_info->ntt = 0;
 		client_info->app_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_tha,
 				&client_info->cli_mac);
 		client_info->cli_ip = arp->arp_data.arp_tip;
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->member_idx = calculate_member(internals);
+		rte_eth_macaddr_get(client_info->member_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_sha);
 		memcpy(client_info->vlan, eth_h + 1, offset);
 		client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 		rte_spinlock_unlock(&internals->mode6.lock);
-		return client_info->slave_idx;
+		return client_info->member_idx;
 	}
 
 	/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 {
 	struct rte_ether_hdr *eth_h;
 	struct rte_arp_hdr *arp_h;
-	uint16_t slave_idx;
+	uint16_t member_idx;
 
 	rte_spinlock_lock(&internals->mode6.lock);
 	eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 	arp_h->arp_plen = sizeof(uint32_t);
 	arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 
-	slave_idx = client_info->slave_idx;
+	member_idx = client_info->member_idx;
 	rte_spinlock_unlock(&internals->mode6.lock);
 
-	return slave_idx;
+	return member_idx;
 }
 
 void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
 
 	int i;
 
-	/* If active slave count is 0, it's pointless to refresh alb table */
-	if (internals->active_slave_count <= 0)
+	/* If active member count is 0, it's pointless to refresh alb table */
+	if (internals->active_member_count <= 0)
 		return;
 
 	rte_spinlock_lock(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_member = ALB_NULL_INDEX;
 
 	for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
 		client_info = &internals->mode6.client_table[i];
 		if (client_info->in_use) {
-			client_info->slave_idx = calculate_slave(internals);
-			rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+			client_info->member_idx = calculate_member(internals);
+			rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
 			internals->mode6.ntt = 1;
 		}
 	}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc..beb2e619f9 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
 	uint32_t cli_ip;
 	/**< Client IP address */
 
-	uint16_t slave_idx;
-	/**< Index of slave on which we connect with that client */
+	uint16_t member_idx;
+	/**< Index of member on which we connect with that client */
 	uint8_t in_use;
 	/**< Flag indicating if entry in client table is currently used */
 	uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
 	/**< Mempool for creating ARP update packets */
 	uint8_t ntt;
 	/**< Flag indicating if we need to send update to any client on next tx */
-	uint32_t last_slave;
-	/**< Index of last used slave in client table */
+	uint32_t last_member;
+	/**< Index of last used member in client table */
 	rte_spinlock_t lock;
 };
 
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		struct bond_dev_private *internals);
 
 /**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which member
+ * to send that packet. An ARP Request is sent on the primary member.
+ * An ARP Reply is sent on the member stored in the client table for that
  * connection. On Reply function also updates data in client table.
  *
  * @param eth_h			ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_upd(struct client_data *client_info,
 		struct rte_mbuf *pkt, struct bond_dev_private *internals);
 
 /**
- * Function updates slave indexes of active connections.
+ * Function updates member indexes of active connections.
  *
  * @param bond_dev		Pointer to bonded device struct.
  */
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b4..b6512a098a 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
 }
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 {
 	int i;
 	struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	/* Check if any of slave devices is a bonded device */
-	for (i = 0; i < internals->slave_count; i++)
-		if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+	/* Check if any of member devices is a bonded device */
+	for (i = 0; i < internals->member_count; i++)
+		if (valid_bonded_port_id(internals->members[i].port_id) == 0)
 			return 1;
 
 	return 0;
 }
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
 {
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
 
-	/* Verify that slave_port_id refers to a non bonded port */
-	if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+	/* Verify that member_port_id refers to a non bonded port */
+	if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
 			internals->mode == BONDING_MODE_8023AD) {
-		RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
-				" mode as slave is also a bonded device, only "
+		RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
+				" mode as member is also a bonded device, only "
 				"physical devices can be support in this mode.");
 		return -1;
 	}
 
-	if (internals->port_id == slave_port_id) {
+	if (internals->port_id == member_port_id) {
 		RTE_BOND_LOG(ERR,
-			"Cannot add the bonded device itself as its slave.");
+			"Cannot add the bonded device itself as its member.");
 		return -1;
 	}
 
@@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
 }
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_member_count;
 
 	if (internals->mode == BONDING_MODE_8023AD)
-		bond_mode_8023ad_activate_slave(eth_dev, port_id);
+		bond_mode_8023ad_activate_member(eth_dev, port_id);
 
 	if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB) {
 
-		internals->tlb_slaves_order[active_count] = port_id;
+		internals->tlb_members_order[active_count] = port_id;
 	}
 
-	RTE_ASSERT(internals->active_slave_count <
-			(RTE_DIM(internals->active_slaves) - 1));
+	RTE_ASSERT(internals->active_member_count <
+			(RTE_DIM(internals->active_members) - 1));
 
-	internals->active_slaves[internals->active_slave_count] = port_id;
-	internals->active_slave_count++;
+	internals->active_members[internals->active_member_count] = port_id;
+	internals->active_member_count++;
 
 	if (internals->mode == BONDING_MODE_TLB)
-		bond_tlb_activate_slave(internals);
+		bond_tlb_activate_member(internals);
 	if (internals->mode == BONDING_MODE_ALB)
 		bond_mode_alb_client_list_upd(eth_dev);
 }
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
-	uint16_t slave_pos;
+	uint16_t member_pos;
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_member_count;
 
 	if (internals->mode == BONDING_MODE_8023AD) {
 		bond_mode_8023ad_stop(eth_dev);
-		bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+		bond_mode_8023ad_deactivate_member(eth_dev, port_id);
 	} else if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB)
 		bond_tlb_disable(internals);
 
-	slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+	member_pos = find_member_by_id(internals->active_members, active_count,
 			port_id);
 
-	/* If slave was not at the end of the list
-	 * shift active slaves up active array list */
-	if (slave_pos < active_count) {
+	/*
+	 * If the member was not at the end of the list,
+	 * shift the remaining active members up in the active array.
+	 */
+	if (member_pos < active_count) {
 		active_count--;
-		memmove(internals->active_slaves + slave_pos,
-				internals->active_slaves + slave_pos + 1,
-				(active_count - slave_pos) *
-					sizeof(internals->active_slaves[0]));
+		memmove(internals->active_members + member_pos,
+				internals->active_members + member_pos + 1,
+				(active_count - member_pos) *
+					sizeof(internals->active_members[0]));
 	}
 
-	RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
-	internals->active_slave_count = active_count;
+	RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
+	internals->active_member_count = active_count;
 
 	if (eth_dev->data->dev_started) {
 		if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
 }
 
 static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 			if (unlikely(slab & mask)) {
 				uint16_t vlan_id = pos + i;
 
-				res = rte_eth_dev_vlan_filter(slave_port_id,
+				res = rte_eth_dev_vlan_filter(member_port_id,
 							      vlan_id, 1);
 			}
 		}
@@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
 
 static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
 {
 	struct rte_flow *flow;
 	struct rte_flow_error ferror;
-	uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+	uint16_t member_port_id = internals->members[member_id].port_id;
 
 	if (internals->flow_isolated_valid != 0) {
-		if (rte_eth_dev_stop(slave_port_id) != 0) {
+		if (rte_eth_dev_stop(member_port_id) != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_port_id);
+				     member_port_id);
 			return -1;
 		}
 
-		if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+		if (rte_flow_isolate(member_port_id, internals->flow_isolated,
 		    &ferror)) {
-			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
-				     " %d: %s", slave_id, ferror.message ?
+			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
+				     " %d: %s", member_id, ferror.message ?
 				     ferror.message : "(no stated reason)");
 			return -1;
 		}
 	}
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		flow->flows[slave_id] = rte_flow_create(slave_port_id,
+		flow->flows[member_id] = rte_flow_create(member_port_id,
 							flow->rule.attr,
 							flow->rule.pattern,
 							flow->rule.actions,
 							&ferror);
-		if (flow->flows[slave_id] == NULL) {
-			RTE_BOND_LOG(ERR, "Cannot create flow for slave"
-				     " %d: %s", slave_id,
+		if (flow->flows[member_id] == NULL) {
+			RTE_BOND_LOG(ERR, "Cannot create flow for member"
+				     " %d: %s", member_id,
 				     ferror.message ? ferror.message :
 				     "(no stated reason)");
-			/* Destroy successful bond flows from the slave */
+			/* Destroy successful bond flows from the member */
 			TAILQ_FOREACH(flow, &internals->flow_list, next) {
-				if (flow->flows[slave_id] != NULL) {
-					rte_flow_destroy(slave_port_id,
-							 flow->flows[slave_id],
+				if (flow->flows[member_id] != NULL) {
+					rte_flow_destroy(member_port_id,
+							 flow->flows[member_id],
 							 &ferror);
-					flow->flows[slave_id] = NULL;
+					flow->flows[member_id] = NULL;
 				}
 			}
 			return -1;
@@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	internals->reta_size = di->reta_size;
 	internals->rss_key_len = di->hash_key_size;
 
-	/* Inherit Rx offload capabilities from the first slave device */
+	/* Inherit Rx offload capabilities from the first member device */
 	internals->rx_offload_capa = di->rx_offload_capa;
 	internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
 	internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
 
-	/* Inherit maximum Rx packet size from the first slave device */
+	/* Inherit maximum Rx packet size from the first member device */
 	internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
 
-	/* Inherit default Rx queue settings from the first slave device */
+	/* Inherit default Rx queue settings from the first member device */
 	memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * member devices. Applications may tweak this setting if need be.
 	 */
 	rxconf_i->rx_thresh.pthresh = 0;
 	rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	/* Setting this to zero should effectively enable default values */
 	rxconf_i->rx_free_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all member devices */
 	rxconf_i->rx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
 
-	/* Inherit Tx offload capabilities from the first slave device */
+	/* Inherit Tx offload capabilities from the first member device */
 	internals->tx_offload_capa = di->tx_offload_capa;
 	internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
 
-	/* Inherit default Tx queue settings from the first slave device */
+	/* Inherit default Tx queue settings from the first member device */
 	memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * member devices. Applications may tweak this setting if need be.
 	 */
 	txconf_i->tx_thresh.pthresh = 0;
 	txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 
 	/*
 	 * Setting these parameters to zero assumes that default
-	 * values will be configured implicitly by slave devices.
+	 * values will be configured implicitly by member devices.
 	 */
 	txconf_i->tx_free_thresh = 0;
 	txconf_i->tx_rs_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all member devices */
 	txconf_i->tx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 	internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
 
 	/*
-	 * If at least one slave device suggests enabling this
-	 * setting by default, enable it for all slave devices
+	 * If at least one member device suggests enabling this
+	 * setting by default, enable it for all member devices
 	 * since disabling it may not be necessarily supported.
 	 */
 	if (rxconf->rx_drop_en == 1)
 		rxconf_i->rx_drop_en = 1;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new member device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal rx_queue_offload_capa
 	 * value. Thus, the new internal value of default Rx queue offloads
 	 * has to be masked by rx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new member device.
 	 */
 	rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
 			     internals->rx_queue_offload_capa;
 
 	/*
-	 * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+	 * RETA size is GCD of all members RETA sizes, so, if all sizes will be
 	 * the power of 2, the lower one is GCD
 	 */
 	if (internals->reta_size > di->reta_size)
 		internals->reta_size = di->reta_size;
 	if (internals->rss_key_len > di->hash_key_size) {
-		RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+		RTE_BOND_LOG(WARNING, "member has different rss key size, "
 				"configuring rss may fail");
 		internals->rss_key_len = di->hash_key_size;
 	}
@@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 	internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new member device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal tx_queue_offload_capa
 	 * value. Thus, the new internal value of default Tx queue offloads
 	 * has to be masked by tx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new member device.
 	 */
 	txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
 			     internals->tx_queue_offload_capa;
 }
 
 static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *member_desc_lim)
 {
-	memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+	memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
 }
 
 static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *member_desc_lim)
 {
 	bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
-					slave_desc_lim->nb_max);
+					member_desc_lim->nb_max);
 	bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
-					slave_desc_lim->nb_min);
+					member_desc_lim->nb_min);
 	bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
-					  slave_desc_lim->nb_align);
+					  member_desc_lim->nb_align);
 
 	if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
 	    bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
 	}
 
 	/* Treat maximum number of segments equal to 0 as unspecified */
-	if (slave_desc_lim->nb_seg_max != 0 &&
+	if (member_desc_lim->nb_seg_max != 0 &&
 	    (bond_desc_lim->nb_seg_max == 0 ||
-	     slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
-		bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
-	if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+	     member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+		bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
+	if (member_desc_lim->nb_mtu_seg_max != 0 &&
 	    (bond_desc_lim->nb_mtu_seg_max == 0 ||
-	     slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
-		bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+	     member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+		bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
 
 	return 0;
 }
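
For reference, a worked example of how the per-member descriptor limits combine; the numbers below are made up, only the RTE_MIN/RTE_MAX folding mirrors the helpers above:

	/* Hypothetical limits reported by two member ports. */
	struct rte_eth_desc_lim a = { .nb_max = 4096, .nb_min = 64,  .nb_align = 8 };
	struct rte_eth_desc_lim b = { .nb_max = 2048, .nb_min = 128, .nb_align = 32 };

	/*
	 * After inherit_desc_lim_first(&bond_lim, &a) followed by
	 * inherit_desc_lim_next(&bond_lim, &b):
	 *   bond_lim.nb_max   = RTE_MIN(4096, 2048) = 2048
	 *   bond_lim.nb_min   = RTE_MAX(64, 128)    = 128
	 *   bond_lim.nb_align = RTE_MAX(8, 32)      = 32
	 * so any ring size accepted by the bonded device is also valid on
	 * every member.
	 */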
 
 static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
 {
-	struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+	struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
 	struct bond_dev_private *internals;
 	struct rte_eth_link link_props;
 	struct rte_eth_dev_info dev_info;
@@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
-		RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+	member_eth_dev = &rte_eth_devices[member_port_id];
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_MEMBER) {
+		RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+	ret = rte_eth_dev_info_get(member_port_id, &dev_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port_id, strerror(-ret));
+			__func__, member_port_id, strerror(-ret));
 
 		return ret;
 	}
 	if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
-			     slave_port_id);
+		RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
+			     member_port_id);
 		return -1;
 	}
 
-	slave_add(internals, slave_eth_dev);
+	member_add(internals, member_eth_dev);
 
-	/* We need to store slaves reta_size to be able to synchronize RETA for all
-	 * slave devices even if its sizes are different.
+	/* We need to store each member's reta_size to be able to synchronize RETA for all
+	 * member devices even if their sizes differ.
 	 */
-	internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+	internals->members[internals->member_count].reta_size = dev_info.reta_size;
 
-	if (internals->slave_count < 1) {
-		/* if MAC is not user defined then use MAC of first slave add to
+	if (internals->member_count < 1) {
+		/* if MAC is not user defined then use MAC of first member added to
 		 * bonded device */
 		if (!internals->user_defined_mac) {
 			if (mac_address_set(bonded_eth_dev,
-					    slave_eth_dev->data->mac_addrs)) {
+					    member_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to set MAC address");
 				return -1;
 			}
 		}
 
-		/* Make primary slave */
-		internals->primary_port = slave_port_id;
-		internals->current_primary_port = slave_port_id;
+		/* Make primary member */
+		internals->primary_port = member_port_id;
+		internals->current_primary_port = member_port_id;
 
 		internals->speed_capa = dev_info.speed_capa;
 
-		/* Inherit queues settings from first slave */
-		internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
-		internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+		/* Inherit queues settings from first member */
+		internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
+		internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
 
-		eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
 
-		eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+		eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
 						      &dev_info.rx_desc_lim);
-		eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+		eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
 						      &dev_info.tx_desc_lim);
 	} else {
 		int ret;
 
 		internals->speed_capa &= dev_info.speed_capa;
-		eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
-				&internals->rx_desc_lim, &dev_info.rx_desc_lim);
+		ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
+							&dev_info.rx_desc_lim);
 		if (ret != 0)
 			return ret;
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
-				&internals->tx_desc_lim, &dev_info.tx_desc_lim);
+		ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
+							&dev_info.tx_desc_lim);
 		if (ret != 0)
 			return ret;
 	}
@@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
 			internals->flow_type_rss_offloads;
 
-	if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
-			     slave_port_id);
+	if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
+			     member_port_id);
 		return -1;
 	}
 
-	/* Add additional MAC addresses to the slave */
-	if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
-				slave_port_id);
+	/* Add additional MAC addresses to the member */
+	if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
+				member_port_id);
 		return -1;
 	}
 
-	internals->slave_count++;
+	internals->member_count++;
 
 	if (bonded_eth_dev->data->dev_started) {
-		if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
-					slave_port_id);
+		if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
+			internals->member_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
+					member_port_id);
 			return -1;
 		}
-		if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
-					slave_port_id);
+		if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
+			internals->member_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
+					member_port_id);
 			return -1;
 		}
 	}
 
-	/* Update all slave devices MACs */
-	mac_address_slaves_update(bonded_eth_dev);
+	/* Update all member devices MACs */
+	mac_address_members_update(bonded_eth_dev);
 
 	/* Register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
 
-	/* If bonded device is started then we can add the slave to our active
-	 * slave array */
+	/*
+	 * If bonded device is started then we can add the member to our active
+	 * member array.
+	 */
 	if (bonded_eth_dev->data->dev_started) {
-		ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+		ret = rte_eth_link_get_nowait(member_port_id, &link_props);
 		if (ret < 0) {
-			rte_eth_dev_callback_unregister(slave_port_id,
+			rte_eth_dev_callback_unregister(member_port_id,
 					RTE_ETH_EVENT_INTR_LSC,
 					bond_ethdev_lsc_event_callback,
 					&bonded_eth_dev->data->port_id);
-			internals->slave_count--;
+			internals->member_count--;
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_port_id, rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				member_port_id, rte_strerror(-ret));
 			return -1;
 		}
 
 		if (link_props.link_status == RTE_ETH_LINK_UP) {
-			if (internals->active_slave_count == 0 &&
+			if (internals->active_member_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
-							slave_port_id);
+							member_port_id);
 		}
 	}
 
-	/* Add slave details to bonded device */
-	slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+	/* Add member details to bonded device */
+	member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_MEMBER;
 
-	slave_vlan_filter_set(bonded_port_id, slave_port_id);
+	member_vlan_filter_set(bonded_port_id, member_port_id);
 
 	return 0;
 
 }
 
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -650,93 +654,95 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
 
 static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
-				   uint16_t slave_port_id)
+__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
+				   uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct rte_flow_error flow_error;
 	struct rte_flow *flow;
-	int i, slave_idx;
+	int i, member_idx;
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) < 0)
+	if (valid_member_port_id(internals, member_port_id) < 0)
 		return -1;
 
-	/* first remove from active slave list */
-	slave_idx = find_slave_by_id(internals->active_slaves,
-		internals->active_slave_count, slave_port_id);
+	/* first remove from active member list */
+	member_idx = find_member_by_id(internals->active_members,
+		internals->active_member_count, member_port_id);
 
-	if (slave_idx < internals->active_slave_count)
-		deactivate_slave(bonded_eth_dev, slave_port_id);
+	if (member_idx < internals->active_member_count)
+		deactivate_member(bonded_eth_dev, member_port_id);
 
-	slave_idx = -1;
-	/* now find in slave list */
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id == slave_port_id) {
-			slave_idx = i;
+	member_idx = -1;
+	/* now find in member list */
+	for (i = 0; i < internals->member_count; i++)
+		if (internals->members[i].port_id == member_port_id) {
+			member_idx = i;
 			break;
 		}
 
-	if (slave_idx < 0) {
-		RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
-				internals->slave_count);
+	if (member_idx < 0) {
+		RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
+				internals->member_count);
 		return -1;
 	}
 
 	/* Un-register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback,
 			&rte_eth_devices[bonded_port_id].data->port_id);
 
-	/* Restore original MAC address of slave device */
-	rte_eth_dev_default_mac_addr_set(slave_port_id,
-			&(internals->slaves[slave_idx].persisted_mac_addr));
+	/* Restore original MAC address of member device */
+	rte_eth_dev_default_mac_addr_set(member_port_id,
+			&internals->members[member_idx].persisted_mac_addr);
 
-	/* remove additional MAC addresses from the slave */
-	slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+	/* remove additional MAC addresses from the member */
+	member_remove_mac_addresses(bonded_eth_dev, member_port_id);
 
 	/*
-	 * Remove bond device flows from slave device.
+	 * Remove bond device flows from member device.
 	 * Note: don't restore flow isolate mode.
 	 */
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		if (flow->flows[slave_idx] != NULL) {
-			rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+		if (flow->flows[member_idx] != NULL) {
+			rte_flow_destroy(member_port_id, flow->flows[member_idx],
 					 &flow_error);
-			flow->flows[slave_idx] = NULL;
+			flow->flows[member_idx] = NULL;
 		}
 	}
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	slave_remove(internals, slave_eth_dev);
-	slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+	member_eth_dev = &rte_eth_devices[member_port_id];
+	member_remove(internals, member_eth_dev);
+	member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_MEMBER);
 
-	/*  first slave in the active list will be the primary by default,
+	/*  first member in the active list will be the primary by default,
 	 *  otherwise use first device in list */
-	if (internals->current_primary_port == slave_port_id) {
-		if (internals->active_slave_count > 0)
-			internals->current_primary_port = internals->active_slaves[0];
-		else if (internals->slave_count > 0)
-			internals->current_primary_port = internals->slaves[0].port_id;
+	if (internals->current_primary_port == member_port_id) {
+		if (internals->active_member_count > 0)
+			internals->current_primary_port = internals->active_members[0];
+		else if (internals->member_count > 0)
+			internals->current_primary_port = internals->members[0].port_id;
 		else
 			internals->primary_port = 0;
-		mac_address_slaves_update(bonded_eth_dev);
+		mac_address_members_update(bonded_eth_dev);
 	}
 
-	if (internals->active_slave_count < 1) {
-		/* if no slaves are any longer attached to bonded device and MAC is not
+	if (internals->active_member_count < 1) {
+		/*
+		 * if no members remain attached to the bonded device and MAC is not
 		 * user defined then clear MAC of bonded device as it will be reset
-		 * when a new slave is added */
-		if (internals->slave_count < 1 && !internals->user_defined_mac)
+		 * when a new member is added.
+		 */
+		if (internals->member_count < 1 && !internals->user_defined_mac)
 			memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
 					sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
 	}
-	if (internals->slave_count == 0) {
+	if (internals->member_count == 0) {
 		internals->rx_offload_capa = 0;
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
@@ -750,7 +756,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 }
 
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -764,7 +770,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -781,7 +787,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 
-	if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+	if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
 			mode == BONDING_MODE_8023AD)
 		return -1;
 
@@ -802,7 +808,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
 }
 
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct bond_dev_private *internals;
 
@@ -811,13 +817,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
 	internals->user_defined_primary_port = 1;
-	internals->primary_port = slave_port_id;
+	internals->primary_port = member_port_id;
 
-	bond_ethdev_primary_set(internals, slave_port_id);
+	bond_ethdev_primary_set(internals, member_port_id);
 
 	return 0;
 }
@@ -832,14 +838,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count < 1)
+	if (internals->member_count < 1)
 		return -1;
 
 	return internals->current_primary_port;
 }
 
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
 			uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -848,22 +854,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (members == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count > len)
+	if (internals->member_count > len)
 		return -1;
 
-	for (i = 0; i < internals->slave_count; i++)
-		slaves[i] = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++)
+		members[i] = internals->members[i].port_id;
 
-	return internals->slave_count;
+	return internals->member_count;
 }
 
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
 		uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -871,18 +877,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (members == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->active_slave_count > len)
+	if (internals->active_member_count > len)
 		return -1;
 
-	memcpy(slaves, internals->active_slaves,
-	internals->active_slave_count * sizeof(internals->active_slaves[0]));
+	memcpy(members, internals->active_members,
+	internals->active_member_count * sizeof(internals->active_members[0]));
 
-	return internals->active_slave_count;
+	return internals->active_member_count;
 }
 
 int
@@ -904,9 +910,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 
 	internals->user_defined_mac = 1;
 
-	/* Update all slave devices MACs*/
-	if (internals->slave_count > 0)
-		return mac_address_slaves_update(bonded_eth_dev);
+	/* Update all member devices MACs */
+	if (internals->member_count > 0)
+		return mac_address_members_update(bonded_eth_dev);
 
 	return 0;
 }
@@ -925,30 +931,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
 
 	internals->user_defined_mac = 0;
 
-	if (internals->slave_count > 0) {
-		int slave_port;
-		/* Get the primary slave location based on the primary port
-		 * number as, while slave_add(), we will keep the primary
-		 * slave based on slave_count,but not based on the primary port.
+	if (internals->member_count > 0) {
+		int member_port;
+		/* Get the primary member location based on the primary port
+		 * number because, during member_add(), the primary member is
+		 * tracked by member_count and not by the primary port.
 		 */
-		for (slave_port = 0; slave_port < internals->slave_count;
-		     slave_port++) {
-			if (internals->slaves[slave_port].port_id ==
+		for (member_port = 0; member_port < internals->member_count;
+		     member_port++) {
+			if (internals->members[member_port].port_id ==
 			    internals->primary_port)
 				break;
 		}
 
 		/* Set MAC Address of Bonded Device */
 		if (mac_address_set(bonded_eth_dev,
-			&internals->slaves[slave_port].persisted_mac_addr)
+			&internals->members[member_port].persisted_mac_addr)
 				!= 0) {
 			RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
 			return -1;
 		}
-		/* Update all slave devices MAC addresses */
-		return mac_address_slaves_update(bonded_eth_dev);
+		/* Update all member devices MAC addresses */
+		return mac_address_members_update(bonded_eth_dev);
 	}
-	/* No need to update anything as no slaves present */
+	/* No need to update anything as no members are present */
 	return 0;
 }
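
A minimal control-path sketch with the renamed API; the device name, bonding mode, socket id and member port ids (0 and 1) are arbitrary here:

	uint16_t members[RTE_MAX_ETHPORTS];
	int bond_port, n;

	/* Create the bonded device and attach two member ports. */
	bond_port = rte_eth_bond_create("net_bonding0", BONDING_MODE_ACTIVE_BACKUP, 0);
	if (bond_port < 0)
		rte_exit(EXIT_FAILURE, "cannot create bonded device\n");

	if (rte_eth_bond_member_add(bond_port, 0) != 0 ||
	    rte_eth_bond_member_add(bond_port, 1) != 0)
		rte_exit(EXIT_FAILURE, "cannot add member ports\n");

	/* Optionally pin the primary member, then list what is attached. */
	rte_eth_bond_primary_set(bond_port, 0);
	n = rte_eth_bond_members_get(bond_port, members, RTE_MAX_ETHPORTS);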
 
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5c..cbc905f700 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
 #include "eth_bond_private.h"
 
 const char *pmd_bond_init_valid_arguments[] = {
-	PMD_BOND_SLAVE_PORT_KVARG,
-	PMD_BOND_PRIMARY_SLAVE_KVARG,
+	PMD_BOND_MEMBER_PORT_KVARG,
+	PMD_BOND_PRIMARY_MEMBER_KVARG,
 	PMD_BOND_MODE_KVARG,
 	PMD_BOND_XMIT_POLICY_KVARG,
 	PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
 }
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
 		const char *value, void *extra_args)
 {
-	struct bond_ethdev_slave_ports *slave_ports;
+	struct bond_ethdev_member_ports *member_ports;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	slave_ports = extra_args;
+	member_ports = extra_args;
 
-	if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+	if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
 		int port_id = parse_port_id(value);
 		if (port_id < 0) {
-			RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+			RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
 				     value);
 			return -1;
 		} else
-			slave_ports->slaves[slave_ports->slave_count++] =
+			member_ports->members[member_ports->member_count++] =
 					port_id;
 	}
 	return 0;
 }
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
 	case BONDING_MODE_ALB:
 		return 0;
 	default:
-		RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+		RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
 		return -1;
 	}
 }
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
 }
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
-	int primary_slave_port_id;
+	int primary_member_port_id;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	primary_slave_port_id = parse_port_id(value);
-	if (primary_slave_port_id < 0)
+	primary_member_port_id = parse_port_id(value);
+	if (primary_member_port_id < 0)
 		return -1;
 
-	*(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+	*(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
 
 	return 0;
 }
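
The parse callback above is expected to be wired through rte_kvargs in the usual way; a rough sketch, assuming the common rte_kvargs_process() pattern and a kvlist already parsed by the probe function:

	struct bond_ethdev_member_ports member_ports = { .member_count = 0 };

	/* Invoke the callback once per member-port key found in the devargs. */
	if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
			       &bond_ethdev_parse_member_port_kvarg,
			       &member_ports) < 0)
		return -1;
	/* member_ports.members[0..member_count-1] now holds the port ids. */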
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae7..71a91675f7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_validate(internals->members[i].port_id, attr,
 					patterns, actions, err);
 		if (ret) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for member %d with error %d", i, ret);
 			return ret;
 		}
 	}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				   NULL, rte_strerror(ENOMEM));
 		return NULL;
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		flow->flows[i] = rte_flow_create(internals->members[i].port_id,
 						 attr, patterns, actions, err);
 		if (unlikely(flow->flows[i] == NULL)) {
-			RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+			RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
 				     i);
 			goto err;
 		}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
 	return flow;
 err:
-	/* Destroy all slaves flows. */
-	for (i = 0; i < internals->slave_count; i++) {
+	/* Destroy all member flows. */
+	for (i = 0; i < internals->member_count; i++) {
 		if (flow->flows[i] != NULL)
-			rte_flow_destroy(internals->slaves[i].port_id,
+			rte_flow_destroy(internals->members[i].port_id,
 					 flow->flows[i], err);
 	}
 	bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int i;
 	int ret = 0;
 
-	for (i = 0; i < internals->slave_count; i++) {
+	for (i = 0; i < internals->member_count; i++) {
 		int lret;
 
 		if (unlikely(flow->flows[i] == NULL))
 			continue;
-		lret = rte_flow_destroy(internals->slaves[i].port_id,
+		lret = rte_flow_destroy(internals->members[i].port_id,
 					flow->flows[i], err);
 		if (unlikely(lret != 0)) {
-			RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+			RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
 				     " %d", i, lret);
 			ret = lret;
 		}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	int ret = 0;
 	int lret;
 
-	/* Destroy all bond flows from its slaves instead of flushing them to
+	/* Destroy the bond device's flows on its members instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
 	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 			ret = lret;
 	}
 	if (unlikely(ret != 0))
-		RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+		RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
 	return ret;
 }
 
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *err)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_flow_query_count slave_count;
+	struct rte_flow_query_count member_count;
 	int i;
 	int ret;
 
 	count->bytes = 0;
 	count->hits = 0;
-	rte_memcpy(&slave_count, count, sizeof(slave_count));
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_query(internals->slaves[i].port_id,
+	rte_memcpy(&member_count, count, sizeof(member_count));
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_query(internals->members[i].port_id,
 				     flow->flows[i], action,
-				     &slave_count, err);
+				     &member_count, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Failed to query flow on"
-				     " slave %d: %d", i, ret);
+				     " member %d: %d", i, ret);
 			return ret;
 		}
-		count->bytes += slave_count.bytes;
-		count->hits += slave_count.hits;
-		slave_count.bytes = 0;
-		slave_count.hits = 0;
+		count->bytes += member_count.bytes;
+		count->hits += member_count.hits;
+		member_count.bytes = 0;
+		member_count.hits = 0;
 	}
 	return 0;
 }
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_isolate(internals->members[i].port_id, set, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for member %d with error %d", i, ret);
 			internals->flow_isolated_valid = 0;
 			return ret;
 		}
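
From the application's point of view nothing changes with the rename: a flow created on the bonded port is replicated to every member and COUNT queries are summed across them. A hedged usage sketch, where bond_port, attr, pattern, actions and count_action are assumed to be prepared by the caller:

	struct rte_flow_error err;
	struct rte_flow_query_count cnt = { .hits_set = 1, .bytes_set = 1 };
	struct rte_flow *f;

	/* One rte_flow is created internally on each member port. */
	f = rte_flow_create(bond_port, &attr, pattern, actions, &err);
	if (f == NULL)
		return -rte_errno;

	/* cnt.hits and cnt.bytes return as the sum over all member flows. */
	if (rte_flow_query(bond_port, f, count_action, &cnt, &err) != 0)
		return -rte_errno;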
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b..0e17febcf6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct bond_dev_private *internals;
 
 	uint16_t num_rx_total = 0;
-	uint16_t slave_count;
-	uint16_t active_slave;
+	uint16_t member_count;
+	uint16_t active_member;
 	int i;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 	internals = bd_rx_q->dev_private;
-	slave_count = internals->active_slave_count;
-	active_slave = bd_rx_q->active_slave;
+	member_count = internals->active_member_count;
+	active_member = bd_rx_q->active_member;
 
-	for (i = 0; i < slave_count && nb_pkts; i++) {
-		uint16_t num_rx_slave;
+	for (i = 0; i < member_count && nb_pkts; i++) {
+		uint16_t num_rx_member;
 
-		/* Offset of pointer to *bufs increases as packets are received
-		 * from other slaves */
-		num_rx_slave =
-			rte_eth_rx_burst(internals->active_slaves[active_slave],
+		/*
+		 * Offset of pointer to *bufs increases as packets are received
+		 * from other members.
+		 */
+		num_rx_member =
+			rte_eth_rx_burst(internals->active_members[active_member],
 					 bd_rx_q->queue_id,
 					 bufs + num_rx_total, nb_pkts);
-		num_rx_total += num_rx_slave;
-		nb_pkts -= num_rx_slave;
-		if (++active_slave >= slave_count)
-			active_slave = 0;
+		num_rx_total += num_rx_member;
+		nb_pkts -= num_rx_member;
+		if (++active_member >= member_count)
+			active_member = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_member >= member_count)
+		bd_rx_q->active_member = 0;
 	return num_rx_total;
 }
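
On the data path the application keeps polling only the bonded port; the rotation over active members above is internal. A minimal RX loop sketch, with bond_port assumed to be the bonded port id and queue 0 / burst size 32 chosen arbitrarily:

	struct rte_mbuf *pkts[32];
	uint16_t nb;

	for (;;) {
		/* The bonding PMD spreads this poll round-robin over the
		 * active members, resuming from the queue's active_member. */
		nb = rte_eth_rx_burst(bond_port, 0, pkts, 32);
		if (nb == 0)
			continue;
		/* ... process pkts[0..nb-1] ... */
	}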
 
@@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port) {
-	struct rte_eth_dev_info slave_info;
+		uint16_t member_port) {
+	struct rte_eth_dev_info member_info;
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
@@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
 		}
 	};
 
-	int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+	int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
 			flow_item_8023ad, actions, &error);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
-				__func__, error.message, slave_port,
+		RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
+				__func__, error.message, member_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port, &slave_info);
+	ret = rte_eth_dev_info_get(member_port, &member_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port, strerror(-ret));
+			__func__, member_port, strerror(-ret));
 
 		return ret;
 	}
 
-	if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
-			slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+	if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+			member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
 		RTE_BOND_LOG(ERR,
-			"%s: Slave %d capabilities doesn't allow allocating additional queues",
-			__func__, slave_port);
+			"%s: Member %d capabilities don't allow allocating additional queues",
+			__func__, member_port);
 		return -1;
 	}
 
@@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 	uint16_t idx;
 	int ret;
 
-	/* Verify if all slaves in bonding supports flow director and */
-	if (internals->slave_count > 0) {
+	/* Verify that all members in the bond support flow director */
+	if (internals->member_count > 0) {
 		ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR,
@@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 		internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
 		internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
+		for (idx = 0; idx < internals->member_count; idx++) {
 			if (bond_ethdev_8023ad_flow_verify(bond_dev,
-					internals->slaves[idx].port_id) != 0)
+					internals->members[idx].port_id) != 0)
 				return -1;
 		}
 	}
@@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 }
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
 
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
 		}
 	};
 
-	internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+	internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
 			&flow_attr_8023ad, flow_item_8023ad, actions, &error);
-	if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+	if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
 		RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
-				"(slave_port=%d queue_id=%d)",
-				error.message, slave_port,
+				"(member_port=%d queue_id=%d)",
+				error.message, member_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
@@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	uint16_t slave_count, idx;
+	uint16_t members[RTE_MAX_ETHPORTS];
+	uint16_t member_count, idx;
 
-	uint8_t collecting;  /* current slave collecting status */
+	uint8_t collecting;  /* current member collecting status */
 	const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
 	const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
 	uint8_t subtype;
@@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	uint16_t j;
 	uint16_t k;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during Rx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * slave_count);
+	member_count = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * member_count);
 
-	idx = bd_rx_q->active_slave;
-	if (idx >= slave_count) {
-		bd_rx_q->active_slave = 0;
+	idx = bd_rx_q->active_member;
+	if (idx >= member_count) {
+		bd_rx_q->active_member = 0;
 		idx = 0;
 	}
-	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+	for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
 					 COLLECTING);
 
-		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+		/* Read packets from this member */
+		num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 
 			/* Remove packet from array if:
 			 * - it is slow packet but no dedicated rxq is present,
-			 * - slave is not in collecting state,
+			 * - member is not in collecting state,
 			 * - bonding interface is not in promiscuous mode and
 			 *   packet address isn't in mac_addrs array:
 			 *   - packet is unicast,
@@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 				  !allmulti)))) {
 				if (hdr->ether_type == ether_type_slow_be) {
 					bond_mode_8023ad_handle_slow_pkt(
-					    internals, slaves[idx], bufs[j]);
+					    internals, members[idx], bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
 
@@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 			} else
 				j++;
 		}
-		if (unlikely(++idx == slave_count))
+		if (unlikely(++idx == member_count))
 			idx = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_member >= member_count)
+		bd_rx_q->active_member = 0;
 
 	return num_rx_total;
 }
@@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 uint32_t burstnumberRX;
-uint32_t burstnumberTX;
+uint32_t burst_number_TX;
 
 #ifdef RTE_LIBRTE_BOND_DEBUG_ALB
 
@@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
-	uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+	uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
 
-	uint16_t num_of_slaves;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_members;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	uint16_t num_tx_total = 0, num_tx_slave;
+	uint16_t num_tx_total = 0, num_tx_member;
 
-	static int slave_idx = 0;
-	int i, cslave_idx = 0, tx_fail_total = 0;
+	static int member_idx;
+	int i, cmember_idx = 0, tx_fail_total = 0;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_members = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * num_of_members);
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return num_tx_total;
 
-	/* Populate slaves mbuf with which packets are to be sent on it  */
+	/* Populate each member's mbuf array with the packets to be sent on it */
 	for (i = 0; i < nb_pkts; i++) {
-		cslave_idx = (slave_idx + i) % num_of_slaves;
-		slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+		cmember_idx = (member_idx + i) % num_of_members;
+		member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
 	}
 
-	/* increment current slave index so the next call to tx burst starts on the
-	 * next slave */
-	slave_idx = ++cslave_idx;
+	/*
+	 * increment current member index so the next call to tx burst starts on the
+	 * next member.
+	 */
+	member_idx = ++cmember_idx;
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < num_of_slaves; i++) {
-		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_prepare(slaves[i],
-					bd_tx_q->queue_id, slave_bufs[i],
-					slave_nb_pkts[i]);
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					slave_bufs[i], num_tx_slave);
+	/* Send packet burst on each member device */
+	for (i = 0; i < num_of_members; i++) {
+		if (member_nb_pkts[i] > 0) {
+			num_tx_member = rte_eth_tx_prepare(members[i],
+					bd_tx_q->queue_id, member_bufs[i],
+					member_nb_pkts[i]);
+			num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
+					member_bufs[i], num_tx_member);
 
 			/* if tx burst fails move packets to end of bufs */
-			if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
-				int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+			if (unlikely(num_tx_member < member_nb_pkts[i])) {
+				int tx_fail_member = member_nb_pkts[i] - num_tx_member;
 
-				tx_fail_total += tx_fail_slave;
+				tx_fail_total += tx_fail_member;
 
 				memcpy(&bufs[nb_pkts - tx_fail_total],
-				       &slave_bufs[i][num_tx_slave],
-				       tx_fail_slave * sizeof(bufs[0]));
+				       &member_bufs[i][num_tx_member],
+				       tx_fail_member * sizeof(bufs[0]));
 			}
-			num_tx_total += num_tx_slave;
+			num_tx_total += num_tx_member;
 		}
 	}
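
A concrete pass through the distribution loop above, with hypothetical state (all values made up):

	/*
	 * nb_pkts = 8, num_of_members = 3, member_idx = 1 on entry:
	 *   bufs[0]->members[1], bufs[1]->members[2], bufs[2]->members[0],
	 *   bufs[3]->members[1], bufs[4]->members[2], bufs[5]->members[0],
	 *   bufs[6]->members[1], bufs[7]->members[2]
	 * giving member_nb_pkts = { 2, 3, 3 }; member_idx ends up as 3, so
	 * the next burst starts scattering from members[0] again.
	 */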
 
@@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	if (internals->active_slave_count < 1)
+	if (internals->active_member_count < 1)
 		return 0;
 
 	nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint32_t hash;
@@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 
 		hash = ether_hash(eth_hdr);
 
-		slaves[i] = (hash ^= hash >> 8) % slave_count;
+		members[i] = (hash ^= hash >> 8) % member_count;
 	}
 }
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	uint16_t i;
 	struct rte_ether_hdr *eth_hdr;
@@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		members[i] = hash % member_count;
 	}
 }
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint16_t proto;
@@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		members[i] = hash % member_count;
 	}
 }
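
Which of the three hash helpers above gets used is selected per bonded device through the existing balance xmit policy API; bond_port is assumed to be the bonded port id:

	/* Hash on L3 addresses plus L4 ports when spreading packets across
	 * members in balance mode. */
	if (rte_eth_bond_xmit_policy_set(bond_port, BALANCE_XMIT_POLICY_LAYER34) != 0)
		printf("failed to set xmit policy\n");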
 
-struct bwg_slave {
+struct bwg_member {
 	uint64_t bwg_left_int;
 	uint64_t bwg_left_remainder;
-	uint16_t slave;
+	uint16_t member;
 };
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_member(struct bond_dev_private *internals) {
 	int i;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		tlb_last_obytets[internals->active_slaves[i]] = 0;
-	}
+	for (i = 0; i < internals->active_member_count; i++)
+		tlb_last_obytets[internals->active_members[i]] = 0;
 }
 
 static int
 bandwidth_cmp(const void *a, const void *b)
 {
-	const struct bwg_slave *bwg_a = a;
-	const struct bwg_slave *bwg_b = b;
+	const struct bwg_member *bwg_a = a;
+	const struct bwg_member *bwg_b = b;
 	int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
 	int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
 			(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
 
 static void
 bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
-		struct bwg_slave *bwg_slave)
+		struct bwg_member *bwg_member)
 {
 	struct rte_eth_link link_status;
 	int ret;
 
 	ret = rte_eth_link_get_nowait(port_id, &link_status);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+		RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
 			     port_id, rte_strerror(-ret));
 		return;
 	}
@@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
 	if (link_bwg == 0)
 		return;
 	link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
-	bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
-	bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+	bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
+	bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
 }
 
 static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_member_cb(void *arg)
 {
 	struct bond_dev_private *internals = arg;
-	struct rte_eth_stats slave_stats;
-	struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	struct rte_eth_stats member_stats;
+	struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 	uint64_t tx_bytes;
 
 	uint8_t update_stats = 0;
-	uint16_t slave_id;
+	uint16_t member_id;
 	uint16_t i;
 
-	internals->slave_update_idx++;
+	internals->member_update_idx++;
 
 
-	if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+	if (internals->member_update_idx >= REORDER_PERIOD_MS)
 		update_stats = 1;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		rte_eth_stats_get(slave_id, &slave_stats);
-		tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
-		bandwidth_left(slave_id, tx_bytes,
-				internals->slave_update_idx, &bwg_array[i]);
-		bwg_array[i].slave = slave_id;
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		rte_eth_stats_get(member_id, &member_stats);
+		tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
+		bandwidth_left(member_id, tx_bytes,
+				internals->member_update_idx, &bwg_array[i]);
+		bwg_array[i].member = member_id;
 
 		if (update_stats) {
-			tlb_last_obytets[slave_id] = slave_stats.obytes;
+			tlb_last_obytets[member_id] = member_stats.obytes;
 		}
 	}
 
 	if (update_stats == 1)
-		internals->slave_update_idx = 0;
+		internals->member_update_idx = 0;
 
-	slave_count = i;
-	qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
-	for (i = 0; i < slave_count; i++)
-		internals->tlb_slaves_order[i] = bwg_array[i].slave;
+	member_count = i;
+	qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
+	for (i = 0; i < member_count; i++)
+		internals->tlb_members_order[i] = bwg_array[i].member;
 
-	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
 			(struct bond_dev_private *)internals);
 }
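
To make the reordering concrete, a rough worked example with made-up numbers:

	/*
	 * Two 10G members observed over one update period:
	 *   members[0] = port 3, transmitted ~800 MB -> little bandwidth left
	 *   members[1] = port 5, transmitted ~100 MB -> most bandwidth left
	 * bandwidth_left() converts each into (bwg_left_int, bwg_left_remainder),
	 * qsort() with bandwidth_cmp() orders them by remaining bandwidth in
	 * descending order, and tlb_members_order[] then starts with port 5,
	 * so the next TLB bursts prefer the least loaded member first.
	 */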
 
@@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint16_t num_tx_total = 0, num_tx_prep;
 	uint16_t i, j;
 
-	uint16_t num_of_slaves = internals->active_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_members = internals->active_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	struct rte_ether_hdr *ether_hdr;
-	struct rte_ether_addr primary_slave_addr;
-	struct rte_ether_addr active_slave_addr;
+	struct rte_ether_addr primary_member_addr;
+	struct rte_ether_addr active_member_addr;
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return num_tx_total;
 
-	memcpy(slaves, internals->tlb_slaves_order,
-				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+	memcpy(members, internals->tlb_members_order,
+				sizeof(internals->tlb_members_order[0]) * num_of_members);
 
 
-	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
 
 	if (nb_pkts > 3) {
 		for (i = 0; i < 3; i++)
 			rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
 	}
 
-	for (i = 0; i < num_of_slaves; i++) {
-		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+	for (i = 0; i < num_of_members; i++) {
+		rte_eth_macaddr_get(members[i], &active_member_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
 				rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			ether_hdr = rte_pktmbuf_mtod(bufs[j],
 						struct rte_ether_hdr *);
 			if (rte_is_same_ether_addr(&ether_hdr->src_addr,
-							&primary_slave_addr))
-				rte_ether_addr_copy(&active_slave_addr,
+							&primary_member_addr))
+				rte_ether_addr_copy(&active_member_addr,
 						&ether_hdr->src_addr);
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
-					mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+					mode6_debug("TX IPv4:", ether_hdr, members[i],
+						&burst_number_TX);
 #endif
 		}
 
-		num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+		num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, nb_pkts - num_tx_total);
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, num_tx_prep);
 
 		if (num_tx_total == nb_pkts)
@@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 void
 bond_tlb_disable(struct bond_dev_private *internals)
 {
-	rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+	rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
 }
 
 void
 bond_tlb_enable(struct bond_dev_private *internals)
 {
-	bond_ethdev_update_tlb_slave_cb(internals);
+	bond_ethdev_update_tlb_member_cb(internals);
 }
 
 static uint16_t
@@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct client_data *client_info;
 
 	/*
-	 * We create transmit buffers for every slave and one additional to send
+	 * We create transmit buffers for every member and one additional to send
 	 * through tlb. In worst case every packet will be send on one port.
 	 */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
-	uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+	uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
 
 	/*
 	 * We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	uint16_t num_send, num_not_send = 0;
 	uint16_t num_tx_total = 0;
-	uint16_t slave_idx;
+	uint16_t member_idx;
 
 	int i, j;
 
@@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		offset = get_vlan_offset(eth_h, &ether_type);
 
 		if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
-			slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+			member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
 
 			/* Change src mac in eth header */
-			rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+			rte_eth_macaddr_get(member_idx, &eth_h->src_addr);
 
-			/* Add packet to slave tx buffer */
-			slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
-			slave_bufs_pkts[slave_idx]++;
+			/* Add packet to member tx buffer */
+			member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
+			member_bufs_pkts[member_idx]++;
 		} else {
 			/* If packet is not ARP, send it with TLB policy */
-			slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+			member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
 					bufs[i];
-			slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+			member_bufs_pkts[RTE_MAX_ETHPORTS]++;
 		}
 	}
 
@@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			client_info = &internals->mode6.client_table[i];
 
 			if (client_info->in_use) {
-				/* Allocate new packet to send ARP update on current slave */
+				/* Allocate new packet to send ARP update on current member */
 				upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
 				if (upd_pkt == NULL) {
 					RTE_BOND_LOG(ERR,
@@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				upd_pkt->data_len = pkt_size;
 				upd_pkt->pkt_len = pkt_size;
 
-				slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+				member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
 						internals);
 
 				/* Add packet to update tx buffer */
-				update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
-				update_bufs_pkts[slave_idx]++;
+				update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
+				update_bufs_pkts[member_idx]++;
 			}
 		}
 		internals->mode6.ntt = 0;
 	}
 
-	/* Send ARP packets on proper slaves */
+	/* Send ARP packets on proper members */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		if (slave_bufs_pkts[i] > 0) {
+		if (member_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
-					slave_bufs[i], slave_bufs_pkts[i]);
+					member_bufs[i], member_bufs_pkts[i]);
 			num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
-					slave_bufs[i], num_send);
-			for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+					member_bufs[i], num_send);
+			for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
 				bufs[nb_pkts - 1 - num_not_send - j] =
-						slave_bufs[i][nb_pkts - 1 - j];
+						member_bufs[i][nb_pkts - 1 - j];
 			}
 
 			num_tx_total += num_send;
-			num_not_send += slave_bufs_pkts[i] - num_send;
+			num_not_send += member_bufs_pkts[i] - num_send;
 
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 	/* Print TX stats including update packets */
-			for (j = 0; j < slave_bufs_pkts[i]; j++) {
-				eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+			for (j = 0; j < member_bufs_pkts[i]; j++) {
+				eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
 							struct rte_ether_hdr *);
-				mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
+				mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
 			}
 #endif
 		}
 	}
 
-	/* Send update packets on proper slaves */
+	/* Send update packets on proper members */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
 		if (update_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			for (j = 0; j < update_bufs_pkts[i]; j++) {
 				eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
 							struct rte_ether_hdr *);
-				mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
+				mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
 			}
 #endif
 		}
 	}
 
 	/* Send non-ARP packets using tlb policy */
-	if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+	if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
 		num_send = bond_ethdev_tx_burst_tlb(queue,
-				slave_bufs[RTE_MAX_ETHPORTS],
-				slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+				member_bufs[RTE_MAX_ETHPORTS],
+				member_bufs_pkts[RTE_MAX_ETHPORTS]);
 
-		for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+		for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
 			bufs[nb_pkts - 1 - num_not_send - j] =
-					slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+					member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
 		}
 
 		num_tx_total += num_send;
@@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 static inline uint16_t
 tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
-		 uint16_t *slave_port_ids, uint16_t slave_count)
+		 uint16_t *member_port_ids, uint16_t member_count)
 {
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	/* Array to sort mbufs for transmission on each slave into */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
-	/* Number of mbufs for transmission on each slave */
-	uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
-	/* Mapping array generated by hash function to map mbufs to slaves */
-	uint16_t bufs_slave_port_idxs[nb_bufs];
+	/* Array to sort mbufs for transmission on each member into */
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+	/* Number of mbufs for transmission on each member */
+	uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+	/* Mapping array generated by hash function to map mbufs to members */
+	uint16_t bufs_member_port_idxs[nb_bufs];
 
-	uint16_t slave_tx_count;
+	uint16_t member_tx_count;
 	uint16_t total_tx_count = 0, total_tx_fail_count = 0;
 
 	uint16_t i;
 
 	/*
-	 * Populate slaves mbuf with the packets which are to be sent on it
-	 * selecting output slave using hash based on xmit policy
+	 * Populate each member's mbuf array with the packets to be sent on it,
+	 * selecting the output member using a hash based on the xmit policy
 	 */
-	internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
-			bufs_slave_port_idxs);
+	internals->burst_xmit_hash(bufs, nb_bufs, member_count,
+			bufs_member_port_idxs);
 
 	for (i = 0; i < nb_bufs; i++) {
-		/* Populate slave mbuf arrays with mbufs for that slave. */
-		uint16_t slave_idx = bufs_slave_port_idxs[i];
+		/* Populate member mbuf arrays with mbufs for that member. */
+		uint16_t member_idx = bufs_member_port_idxs[i];
 
-		slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+		member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
 	}
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < slave_count; i++) {
-		if (slave_nb_bufs[i] == 0)
+	/* Send packet burst on each member device */
+	for (i = 0; i < member_count; i++) {
+		if (member_nb_bufs[i] == 0)
 			continue;
 
-		slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_nb_bufs[i]);
-		slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_tx_count);
+		member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
+				bd_tx_q->queue_id, member_bufs[i],
+				member_nb_bufs[i]);
+		member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+				bd_tx_q->queue_id, member_bufs[i],
+				member_tx_count);
 
-		total_tx_count += slave_tx_count;
+		total_tx_count += member_tx_count;
 
 		/* If tx burst fails move packets to end of bufs */
-		if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
-			int slave_tx_fail_count = slave_nb_bufs[i] -
-					slave_tx_count;
-			total_tx_fail_count += slave_tx_fail_count;
+		if (unlikely(member_tx_count < member_nb_bufs[i])) {
+			int member_tx_fail_count = member_nb_bufs[i] -
+					member_tx_count;
+			total_tx_fail_count += member_tx_fail_count;
 			memcpy(&bufs[nb_bufs - total_tx_fail_count],
-			       &slave_bufs[i][slave_tx_count],
-			       slave_tx_fail_count * sizeof(bufs[0]));
+			       &member_bufs[i][member_tx_count],
+			       member_tx_fail_count * sizeof(bufs[0]));
 		}
 	}
 
@@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting
 	 */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	member_count = internals->active_member_count;
+	if (unlikely(member_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
-	return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
-				slave_count);
+	memcpy(member_port_ids, internals->active_members,
+			sizeof(member_port_ids[0]) * member_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
+				member_count);
 }
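
The balance path above buckets each mbuf under the member chosen by the
burst_xmit_hash() callback and, whenever a member's burst comes up short,
copies the unsent mbufs back to the tail of the caller's array so they stay
contiguous for a retry. A standalone sketch of that bucket-and-compact
pattern in generic C, where send_burst() is a hypothetical stand-in for the
rte_eth_tx_prepare()/rte_eth_tx_burst() pair:

#include <stdint.h>
#include <string.h>

/* Hypothetical transmit stub: accepts up to n items on a port and returns
 * how many were actually taken. */
extern uint16_t send_burst(uint16_t port, void **items, uint16_t n);

/* Bucket items per destination port, send each bucket, and compact any
 * unsent items to the tail of the caller's array, mirroring how
 * tx_burst_balance() reports failures back through bufs[]. */
static uint16_t
burst_per_port(void **items, uint16_t n, const uint16_t *port_of, uint16_t nports)
{
	if (n == 0 || nports == 0)
		return 0;

	void *bucket[nports][n];          /* per-port staging, like member_bufs[][] */
	uint16_t bucket_n[nports];
	uint16_t sent_total = 0, fail_total = 0;

	memset(bucket_n, 0, sizeof(bucket_n));

	for (uint16_t i = 0; i < n; i++) {
		uint16_t p = port_of[i];  /* produced by the hash callback */
		bucket[p][bucket_n[p]++] = items[i];
	}

	for (uint16_t p = 0; p < nports; p++) {
		if (bucket_n[p] == 0)
			continue;
		uint16_t sent = send_burst(p, bucket[p], bucket_n[p]);
		sent_total += sent;
		if (sent < bucket_n[p]) {
			uint16_t fail = bucket_n[p] - sent;
			fail_total += fail;
			/* unsent items end up contiguous at the end of items[] */
			memcpy(&items[n - fail_total], &bucket[p][sent],
			       fail * sizeof(items[0]));
		}
	}
	return sent_total;
}
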
 
 static inline uint16_t
@@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 
-	uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t dist_slave_count;
+	uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t dist_member_count;
 
-	uint16_t slave_tx_count;
+	uint16_t member_tx_count;
 
 	uint16_t i;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	member_count = internals->active_member_count;
+	if (unlikely(member_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
+	memcpy(member_port_ids, internals->active_members,
+			sizeof(member_port_ids[0]) * member_count);
 
 	if (dedicated_txq)
 		goto skip_tx_ring;
 
 	/* Check for LACP control packets and send if available */
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	for (i = 0; i < member_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
 		struct rte_mbuf *ctrl_pkt = NULL;
 
 		if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 
 		if (rte_ring_dequeue(port->tx_ring,
 				     (void **)&ctrl_pkt) != -ENOENT) {
-			slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+			member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
 					bd_tx_q->queue_id, &ctrl_pkt, 1);
-			slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-					bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+			member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+					bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
 			/*
 			 * re-enqueue LAG control plane packets to buffering
 			 * ring if transmission fails so the packet isn't lost.
 			 */
-			if (slave_tx_count != 1)
+			if (member_tx_count != 1)
 				rte_ring_enqueue(port->tx_ring,	ctrl_pkt);
 		}
 	}
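
The slow path above pulls at most one buffered LACPDU per member off its
control ring and, if the NIC does not accept it, puts it straight back so the
control-plane packet is not lost. The same drain-and-requeue idea in
isolation, with ring_dequeue()/ring_enqueue() and send_one() as hypothetical
stand-ins for the rte_ring and tx-burst calls:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical ring and transmit stubs. */
extern bool ring_dequeue(void *ring, void **obj);    /* true if an object was taken */
extern bool ring_enqueue(void *ring, void *obj);     /* true on success */
extern uint16_t send_one(uint16_t port, void *pkt);  /* returns 1 if the packet was sent */

/* Drain one pending control packet per port; requeue it when the send fails
 * so it is retried on the next burst. */
static void
flush_control_rings(void **rings, const uint16_t *ports, uint16_t nports)
{
	for (uint16_t i = 0; i < nports; i++) {
		void *pkt;

		if (!ring_dequeue(rings[i], &pkt))
			continue;                     /* nothing queued for this port */
		if (send_one(ports[i], pkt) != 1)
			ring_enqueue(rings[i], pkt);  /* keep it for the next attempt */
	}
}
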
@@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	dist_slave_count = 0;
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	dist_member_count = 0;
+	for (i = 0; i < member_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
 
 		if (ACTOR_STATE(port, DISTRIBUTING))
-			dist_slave_port_ids[dist_slave_count++] =
-					slave_port_ids[i];
+			dist_member_port_ids[dist_member_count++] =
+					member_port_ids[i];
 	}
 
-	if (unlikely(dist_slave_count < 1))
+	if (unlikely(dist_member_count < 1))
 		return 0;
 
-	return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
-				dist_slave_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
+				dist_member_count);
 }
 
 static uint16_t
@@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	uint8_t tx_failed_flag = 0;
-	uint16_t num_of_slaves;
+	uint16_t num_of_members;
 
 	uint16_t max_nb_of_tx_pkts = 0;
 
-	int slave_tx_total[RTE_MAX_ETHPORTS];
-	int i, most_successful_tx_slave = -1;
+	int member_tx_total[RTE_MAX_ETHPORTS];
+	int i, most_successful_tx_member = -1;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_members = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * num_of_members);
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return 0;
 
 	/* It is rare to bond different PMDs together, so just call tx-prepare once */
-	nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+	nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
 
 	/* Increment reference count on mbufs */
 	for (i = 0; i < nb_pkts; i++)
-		rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+		rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
 
-	/* Transmit burst on each active slave */
-	for (i = 0; i < num_of_slaves; i++) {
-		slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+	/* Transmit burst on each active member */
+	for (i = 0; i < num_of_members; i++) {
+		member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
 					bufs, nb_pkts);
 
-		if (unlikely(slave_tx_total[i] < nb_pkts))
+		if (unlikely(member_tx_total[i] < nb_pkts))
 			tx_failed_flag = 1;
 
-		/* record the value and slave index for the slave which transmits the
+		/* record the value and member index for the member which transmits the
 		 * maximum number of packets */
-		if (slave_tx_total[i] > max_nb_of_tx_pkts) {
-			max_nb_of_tx_pkts = slave_tx_total[i];
-			most_successful_tx_slave = i;
+		if (member_tx_total[i] > max_nb_of_tx_pkts) {
+			max_nb_of_tx_pkts = member_tx_total[i];
+			most_successful_tx_member = i;
 		}
 	}
 
-	/* if slaves fail to transmit packets from burst, the calling application
+	/* if members fail to transmit packets from burst, the calling application
 	 * is not expected to know about multiple references to packets so we must
-	 * handle failures of all packets except those of the most successful slave
+	 * handle failures of all packets except those of the most successful member
 	 */
 	if (unlikely(tx_failed_flag))
-		for (i = 0; i < num_of_slaves; i++)
-			if (i != most_successful_tx_slave)
-				while (slave_tx_total[i] < nb_pkts)
-					rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+		for (i = 0; i < num_of_members; i++)
+			if (i != most_successful_tx_member)
+				while (member_tx_total[i] < nb_pkts)
+					rte_pktmbuf_free(bufs[member_tx_total[i]++]);
 
 	return max_nb_of_tx_pkts;
 }
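
bond_ethdev_tx_burst_broadcast() relies on mbuf reference counting: every
packet's refcnt is raised by num_of_members - 1 before the per-member bursts,
the best-performing member's count is returned to the caller, and every other
member frees the references for whatever it failed to send. A compact sketch
of that accounting, using a plain refcount in place of rte_mbuf:

#include <stdint.h>

/* Minimal stand-in for an mbuf with a reference count. */
struct pkt {
	int refcnt;
};

static void
pkt_unref(struct pkt *p)
{
	if (--p->refcnt == 0) {
		/* last reference gone: the buffer would go back to its pool here */
	}
}

/* After a broadcast burst: drop the unsent references of every member except
 * the most successful one, whose remainder is reported back to the caller. */
static void
broadcast_cleanup(struct pkt **bufs, uint16_t nb_pkts, const uint16_t *tx_done,
		  uint16_t nmembers, uint16_t best_member)
{
	for (uint16_t m = 0; m < nmembers; m++) {
		if (m == best_member)
			continue;                /* caller still owns these references */
		for (uint16_t i = tx_done[m]; i < nb_pkts; i++)
			pkt_unref(bufs[i]);      /* this member's unsent share */
	}
}
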
 
 static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
 		/**
 		 * If in mode 4 then save the link properties of the first
-		 * slave, all subsequent slaves must match these properties
+		 * member, all subsequent members must match these properties
 		 */
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
 
-		bond_link->link_autoneg = slave_link->link_autoneg;
-		bond_link->link_duplex = slave_link->link_duplex;
-		bond_link->link_speed = slave_link->link_speed;
+		bond_link->link_autoneg = member_link->link_autoneg;
+		bond_link->link_duplex = member_link->link_duplex;
+		bond_link->link_speed = member_link->link_speed;
 	} else {
 		/**
 		 * In any other mode the link properties are set to default
@@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 
 static int
 link_properties_valid(struct rte_eth_dev *ethdev,
-		struct rte_eth_link *slave_link)
+		struct rte_eth_link *member_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
 
-		if (bond_link->link_duplex != slave_link->link_duplex ||
-			bond_link->link_autoneg != slave_link->link_autoneg ||
-			bond_link->link_speed != slave_link->link_speed)
+		if (bond_link->link_duplex != member_link->link_duplex ||
+			bond_link->link_autoneg != member_link->link_autoneg ||
+			bond_link->link_speed != member_link->link_speed)
 			return -1;
 	}
 
@@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
 static const struct rte_ether_addr null_mac_addr;
 
 /*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the member
  */
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id)
 {
 	int i, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+		ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i > 0; i--)
-				rte_eth_dev_mac_addr_remove(slave_port_id,
+				rte_eth_dev_mac_addr_remove(member_port_id,
 					&bonded_eth_dev->data->mac_addrs[i]);
 			return ret;
 		}
@@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 /*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the member
  */
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id)
 {
 	int i, rc, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+		ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
 		/* save only the first error */
 		if (ret < 0 && rc == 0)
 			rc = ret;
@@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
 {
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 	bool set;
 	int i;
 
-	/* Update slave devices MAC addresses */
-	if (internals->slave_count < 1)
+	/* Update member devices MAC addresses */
+	if (internals->member_count < 1)
 		return -1;
 
 	switch (internals->mode) {
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
-		for (i = 0; i < internals->slave_count; i++) {
+		for (i = 0; i < internals->member_count; i++) {
 			if (rte_eth_dev_default_mac_addr_set(
-					internals->slaves[i].port_id,
+					internals->members[i].port_id,
 					bonded_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-						internals->slaves[i].port_id);
+						internals->members[i].port_id);
 				return -1;
 			}
 		}
@@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 	case BONDING_MODE_ALB:
 	default:
 		set = true;
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id ==
+		for (i = 0; i < internals->member_count; i++) {
+			if (internals->members[i].port_id ==
 					internals->current_primary_port) {
 				if (rte_eth_dev_default_mac_addr_set(
 						internals->current_primary_port,
@@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 				}
 			} else {
 				if (rte_eth_dev_default_mac_addr_set(
-						internals->slaves[i].port_id,
-						&internals->slaves[i].persisted_mac_addr)) {
+						internals->members[i].port_id,
+						&internals->members[i].persisted_mac_addr)) {
 					RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-							internals->slaves[i].port_id);
+							internals->members[i].port_id);
 				}
 			}
 		}
@@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
 
 
 static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	int errval = 0;
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
-	struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+	struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
 
 	if (port->slow_pool == NULL) {
 		char mem_name[256];
-		int slave_id = slave_eth_dev->data->port_id;
+		int member_id = member_eth_dev->data->port_id;
 
-		snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
-				slave_id);
+		snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
+				member_id);
 		port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
 			250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-			slave_eth_dev->data->numa_node);
+			member_eth_dev->data->numa_node);
 
 		/* Any memory allocation failure in initialization is critical because
 		 * resources can't be freed, so reinitialization is impossible. */
 		if (port->slow_pool == NULL) {
-			rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-				slave_id, mem_name, rte_strerror(rte_errno));
+			rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+				member_id, mem_name, rte_strerror(rte_errno));
 		}
 	}
 
 	if (internals->mode4.dedicated_queues.enabled == 1) {
 		/* Configure slow Rx queue */
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.rx_qid, 128,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_eth_dev->data->port_id),
 				NULL, port->slow_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id,
+					member_eth_dev->data->port_id,
 					internals->mode4.dedicated_queues.rx_qid,
 					errval);
 			return errval;
 		}
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid, 512,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_eth_dev->data->port_id),
 				NULL);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id,
+				member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				errval);
 			return errval;
@@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
@@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 
-	/* Stop slave */
-	errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+	/* Stop member */
+	errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
 	if (errval != 0)
 		RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
-			     slave_eth_dev->data->port_id, errval);
+			     member_eth_dev->data->port_id, errval);
 
-	/* Enable interrupts on slave device if supported */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+	/* Enable interrupts on member device if supported */
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+		member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
-	/* If RSS is enabled for bonding, try to enable it for slaves  */
+	/* If RSS is enabled for bonding, try to enable it for members  */
 	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
 					internals->rss_key;
 
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
 				bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		member_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	} else {
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+		member_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	}
 
-	slave_eth_dev->data->dev_conf.rxmode.mtu =
+	member_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
-	slave_eth_dev->data->dev_conf.link_speeds =
+	member_eth_dev->data->dev_conf.link_speeds =
 			bonded_eth_dev->data->dev_conf.link_speeds;
 
-	slave_eth_dev->data->dev_conf.txmode.offloads =
+	member_eth_dev->data->dev_conf.txmode.offloads =
 			bonded_eth_dev->data->dev_conf.txmode.offloads;
 
-	slave_eth_dev->data->dev_conf.rxmode.offloads =
+	member_eth_dev->data->dev_conf.rxmode.offloads =
 			bonded_eth_dev->data->dev_conf.rxmode.offloads;
 
 	nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* Configure device */
-	errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
 			nb_rx_queues, nb_tx_queues,
-			&(slave_eth_dev->data->dev_conf));
+			&member_eth_dev->data->dev_conf);
 	if (errval != 0) {
-		RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+		RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
+				member_eth_dev->data->port_id, errval);
 		return errval;
 	}
 
-	errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
 				     bonded_eth_dev->data->mtu);
 	if (errval != 0 && errval != -ENOTSUP) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_eth_dev->data->port_id, errval);
 		return errval;
 	}
 	return 0;
 }
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	int errval = 0;
 	struct bond_rx_queue *bd_rx_q;
@@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	uint16_t q_id;
 	struct rte_flow_error flow_error;
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+	uint16_t member_port_id = member_eth_dev->data->port_id;
 
 	/* Setup Rx Queues */
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
 		bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_rx_queue_setup(member_port_id, q_id,
 				bd_rx_q->nb_rx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_port_id),
 				&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id, q_id, errval);
+					member_port_id, q_id, errval);
 			return errval;
 		}
 	}
@@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_tx_queue_setup(member_port_id, q_id,
 				bd_tx_q->nb_tx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_port_id),
 				&bd_tx_q->tx_conf);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id, q_id, errval);
+				member_port_id, q_id, errval);
 			return errval;
 		}
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
-		if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+		if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
 				!= 0)
 			return errval;
 
 		errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				member_port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 			return errval;
 		}
 
-		if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
-			errval = rte_flow_destroy(slave_eth_dev->data->port_id,
-					internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+		if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+			errval = rte_flow_destroy(member_port_id,
+					internals->mode4.dedicated_queues.flow[member_port_id],
 					&flow_error);
 			RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 		}
 	}
 
 	/* Start device */
-	errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+	errval = rte_eth_dev_start(member_port_id);
 	if (errval != 0) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 		return -1;
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
 		errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				member_port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 			return errval;
 		}
 	}
@@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 
 		internals = bonded_eth_dev->data->dev_private;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+		for (i = 0; i < internals->member_count; i++) {
+			if (internals->members[i].port_id == member_port_id) {
 				errval = rte_eth_dev_rss_reta_update(
-						slave_eth_dev->data->port_id,
+						member_port_id,
 						&internals->reta_conf[0],
-						internals->slaves[i].reta_size);
+						internals->members[i].reta_size);
 				if (errval != 0) {
 					RTE_BOND_LOG(WARNING,
-						     "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+						     "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
 						     " RSS Configuration for bonding may be inconsistent.",
-						     slave_eth_dev->data->port_id, errval);
+						     member_port_id, errval);
 				}
 				break;
 			}
 		}
 	}
 
-	/* If lsc interrupt is set, check initial slave's link status */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
-		slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
-		bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+	/* If lsc interrupt is set, check initial member's link status */
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+		member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
+		bond_ethdev_lsc_event_callback(member_port_id,
 			RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
 			NULL);
 	}
@@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 }
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+member_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev)
 {
 	uint16_t i;
 
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id ==
-				slave_eth_dev->data->port_id)
+	for (i = 0; i < internals->member_count; i++)
+		if (internals->members[i].port_id ==
+				member_eth_dev->data->port_id)
 			break;
 
-	if (i < (internals->slave_count - 1)) {
+	if (i < (internals->member_count - 1)) {
 		struct rte_flow *flow;
 
-		memmove(&internals->slaves[i], &internals->slaves[i + 1],
-				sizeof(internals->slaves[0]) *
-				(internals->slave_count - i - 1));
+		memmove(&internals->members[i], &internals->members[i + 1],
+				sizeof(internals->members[0]) *
+				(internals->member_count - i - 1));
 		TAILQ_FOREACH(flow, &internals->flow_list, next) {
 			memmove(&flow->flows[i], &flow->flows[i + 1],
 				sizeof(flow->flows[0]) *
-				(internals->slave_count - i - 1));
-			flow->flows[internals->slave_count - 1] = NULL;
+				(internals->member_count - i - 1));
+			flow->flows[internals->member_count - 1] = NULL;
 		}
 	}
 
-	internals->slave_count--;
+	internals->member_count--;
 
-	/* force reconfiguration of slave interfaces */
-	rte_eth_dev_internal_reset(slave_eth_dev);
+	/* force reconfiguration of member interfaces */
+	rte_eth_dev_internal_reset(member_eth_dev);
 }
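
member_remove() keeps both the member table and each flow's per-member handle
array dense: the entries after the removed slot are shifted down with
memmove() and the count is decremented. The same removal pattern shown on a
plain port array:

#include <stdint.h>
#include <string.h>

/* Remove 'port' from a dense, order-preserving array of 'count' ports.
 * Returns the new count (unchanged if the port is not present). */
static uint16_t
dense_remove(uint16_t *ports, uint16_t count, uint16_t port)
{
	uint16_t i;

	for (i = 0; i < count; i++)
		if (ports[i] == port)
			break;
	if (i == count)
		return count;                 /* not found */
	if (i < count - 1)
		memmove(&ports[i], &ports[i + 1],
			(count - i - 1) * sizeof(ports[0]));
	return count - 1;
}
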
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_member_link_status_change_monitor(void *cb_arg);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+member_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev)
 {
-	struct bond_slave_details *slave_details =
-			&internals->slaves[internals->slave_count];
+	struct bond_member_details *member_details =
+			&internals->members[internals->member_count];
 
-	slave_details->port_id = slave_eth_dev->data->port_id;
-	slave_details->last_link_status = 0;
+	member_details->port_id = member_eth_dev->data->port_id;
+	member_details->last_link_status = 0;
 
-	/* Mark slave devices that don't support interrupts so we can
+	/* Mark member devices that don't support interrupts so we can
 	 * compensate when we start the bond
 	 */
-	if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
-		slave_details->link_status_poll_enabled = 1;
-	}
+	if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
+		member_details->link_status_poll_enabled = 1;
 
-	slave_details->link_status_wait_to_complete = 0;
+	member_details->link_status_wait_to_complete = 0;
 	/* clean tlb_last_obytes when adding port for bonding device */
-	memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+	memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
 			sizeof(struct rte_ether_addr));
 }
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id)
+		uint16_t member_port_id)
 {
 	int i;
 
-	if (internals->active_slave_count < 1)
-		internals->current_primary_port = slave_port_id;
+	if (internals->active_member_count < 1)
+		internals->current_primary_port = member_port_id;
 	else
-		/* Search bonded device slave ports for new proposed primary port */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			if (internals->active_slaves[i] == slave_port_id)
-				internals->current_primary_port = slave_port_id;
+		/* Search bonded device member ports for new proposed primary port */
+		for (i = 0; i < internals->active_member_count; i++) {
+			if (internals->active_members[i] == member_port_id)
+				internals->current_primary_port = member_port_id;
 		}
 }
 
@@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	struct bond_dev_private *internals;
 	int i;
 
-	/* slave eth dev will be started by bonded device */
+	/* member eth dev will be started by bonded device */
 	if (check_for_bonded_ethdev(eth_dev)) {
-		RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+		RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
 				eth_dev->data->port_id);
 		return -1;
 	}
@@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	if (internals->slave_count == 0) {
-		RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+	if (internals->member_count == 0) {
+		RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
 		goto out_err;
 	}
 
 	if (internals->user_defined_mac == 0) {
 		struct rte_ether_addr *new_mac_addr = NULL;
 
-		for (i = 0; i < internals->slave_count; i++)
-			if (internals->slaves[i].port_id == internals->primary_port)
-				new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+		for (i = 0; i < internals->member_count; i++)
+			if (internals->members[i].port_id == internals->primary_port)
+				new_mac_addr = &internals->members[i].persisted_mac_addr;
 
 		if (new_mac_addr == NULL)
 			goto out_err;
@@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	}
 
 
-	/* Reconfigure each slave device if starting bonded device */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(eth_dev, slave_ethdev) != 0) {
+	/* Reconfigure each member device if starting bonded device */
+	for (i = 0; i < internals->member_count; i++) {
+		struct rte_eth_dev *member_ethdev =
+				&(rte_eth_devices[internals->members[i].port_id]);
+		if (member_configure(eth_dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to reconfigure slave device (%d)",
+				"bonded port (%d) failed to reconfigure member device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			goto out_err;
 		}
-		if (slave_start(eth_dev, slave_ethdev) != 0) {
+		if (member_start(eth_dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to start slave device (%d)",
+				"bonded port (%d) failed to start member device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			goto out_err;
 		}
-		/* We will need to poll for link status if any slave doesn't
+		/* We will need to poll for link status if any member doesn't
 		 * support interrupts
 		 */
-		if (internals->slaves[i].link_status_poll_enabled)
+		if (internals->members[i].link_status_poll_enabled)
 			internals->link_status_polling_enabled = 1;
 	}
 
@@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	if (internals->link_status_polling_enabled) {
 		rte_eal_alarm_set(
 			internals->link_status_polling_interval_ms * 1000,
-			bond_ethdev_slave_link_status_change_monitor,
+			bond_ethdev_member_link_status_change_monitor,
 			(void *)&rte_eth_devices[internals->port_id]);
 	}
 
-	/* Update all slave devices MACs*/
-	if (mac_address_slaves_update(eth_dev) != 0)
+	/* Update all member devices MACs*/
+	if (mac_address_members_update(eth_dev) != 0)
 		goto out_err;
 
 	if (internals->user_defined_primary_port)
@@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 		bond_mode_8023ad_stop(eth_dev);
 
 		/* Discard all messages to/from mode 4 state machines */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+		for (i = 0; i < internals->active_member_count; i++) {
+			port = &bond_mode_8023ad_ports[internals->active_members[i]];
 
 			RTE_ASSERT(port->rx_ring != NULL);
 			while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 	if (internals->mode == BONDING_MODE_TLB ||
 			internals->mode == BONDING_MODE_ALB) {
 		bond_tlb_disable(internals);
-		for (i = 0; i < internals->active_slave_count; i++)
-			tlb_last_obytets[internals->active_slaves[i]] = 0;
+		for (i = 0; i < internals->active_member_count; i++)
+			tlb_last_obytets[internals->active_members[i]] = 0;
 	}
 
 	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t slave_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++) {
+		uint16_t member_id = internals->members[i].port_id;
 
-		internals->slaves[i].last_link_status = 0;
-		ret = rte_eth_dev_stop(slave_id);
+		internals->members[i].last_link_status = 0;
+		ret = rte_eth_dev_stop(member_id);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_id);
+				     member_id);
 			return ret;
 		}
 
-		/* active slaves need to be deactivated. */
-		if (find_slave_by_id(internals->active_slaves,
-				internals->active_slave_count, slave_id) !=
-					internals->active_slave_count)
-			deactivate_slave(eth_dev, slave_id);
+		/* active members need to be deactivated. */
+		if (find_member_by_id(internals->active_members,
+				internals->active_member_count, member_id) !=
+					internals->active_member_count)
+			deactivate_member(eth_dev, member_id);
 	}
 
 	return 0;
@@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 	/* Flush flows in all back-end devices before removing them */
 	bond_flow_ops.flush(dev, &ferror);
 
-	while (internals->slave_count != skipped) {
-		uint16_t port_id = internals->slaves[skipped].port_id;
+	while (internals->member_count != skipped) {
+		uint16_t port_id = internals->members[skipped].port_id;
 		int ret;
 
 		ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 			continue;
 		}
 
-		if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+		if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to remove port %d from bonded device %s",
 				     port_id, dev->device->name);
@@ -2246,7 +2250,7 @@ static int
 bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct bond_slave_details slave;
+	struct bond_member_details member;
 	int ret;
 
 	uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			RTE_ETHER_MAX_JUMBO_FRAME_LEN;
 
 	/* Max number of tx/rx queues that the bonded device can support is the
-	 * minimum values of the bonded slaves, as all slaves must be capable
+	 * minimum values of the bonded members, as all members must be capable
 	 * of supporting the same number of tx/rx queues.
 	 */
-	if (internals->slave_count > 0) {
-		struct rte_eth_dev_info slave_info;
+	if (internals->member_count > 0) {
+		struct rte_eth_dev_info member_info;
 		uint16_t idx;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
-			slave = internals->slaves[idx];
-			ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+		for (idx = 0; idx < internals->member_count; idx++) {
+			member = internals->members[idx];
+			ret = rte_eth_dev_info_get(member.port_id, &member_info);
 			if (ret != 0) {
 				RTE_BOND_LOG(ERR,
 					"%s: Error during getting device (port %u) info: %s\n",
 					__func__,
-					slave.port_id,
+					member.port_id,
 					strerror(-ret));
 
 				return ret;
 			}
 
-			if (slave_info.max_rx_queues < max_nb_rx_queues)
-				max_nb_rx_queues = slave_info.max_rx_queues;
+			if (member_info.max_rx_queues < max_nb_rx_queues)
+				max_nb_rx_queues = member_info.max_rx_queues;
 
-			if (slave_info.max_tx_queues < max_nb_tx_queues)
-				max_nb_tx_queues = slave_info.max_tx_queues;
+			if (member_info.max_tx_queues < max_nb_tx_queues)
+				max_nb_tx_queues = member_info.max_tx_queues;
 		}
 	}
 
@@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	uint16_t i;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
-	/* don't do this while a slave is being added */
+	/* don't do this while a member is being added */
 	rte_spinlock_lock(&internals->lock);
 
 	if (on)
@@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	else
 		rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t port_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++) {
+		uint16_t port_id = internals->members[i].port_id;
 
 		res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
 		if (res == ENOTSUP)
 			RTE_BOND_LOG(WARNING,
-				     "Setting VLAN filter on slave port %u not supported.",
+				     "Setting VLAN filter on member port %u not supported.",
 				     port_id);
 	}
 
@@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
 }
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_member_link_status_change_monitor(void *cb_arg)
 {
-	struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+	struct rte_eth_dev *bonded_ethdev, *member_ethdev;
 	struct bond_dev_private *internals;
 
-	/* Default value for polling slave found is true as we don't want to
+	/* Default value for polling member found is true as we don't want to
 	 * disable the polling thread if we cannot get the lock */
-	int i, polling_slave_found = 1;
+	int i, polling_member_found = 1;
 
 	if (cb_arg == NULL)
 		return;
@@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		!internals->link_status_polling_enabled)
 		return;
 
-	/* If device is currently being configured then don't check slaves link
+	/* If device is currently being configured then don't check members link
 	 * status, wait until next period */
 	if (rte_spinlock_trylock(&internals->lock)) {
-		if (internals->slave_count > 0)
-			polling_slave_found = 0;
+		if (internals->member_count > 0)
+			polling_member_found = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (!internals->slaves[i].link_status_poll_enabled)
+		for (i = 0; i < internals->member_count; i++) {
+			if (!internals->members[i].link_status_poll_enabled)
 				continue;
 
-			slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
-			polling_slave_found = 1;
+			member_ethdev = &rte_eth_devices[internals->members[i].port_id];
+			polling_member_found = 1;
 
-			/* Update slave link status */
-			(*slave_ethdev->dev_ops->link_update)(slave_ethdev,
-					internals->slaves[i].link_status_wait_to_complete);
+			/* Update member link status */
+			(*member_ethdev->dev_ops->link_update)(member_ethdev,
+					internals->members[i].link_status_wait_to_complete);
 
 			/* if link status has changed since last checked then call lsc
 			 * event callback */
-			if (slave_ethdev->data->dev_link.link_status !=
-					internals->slaves[i].last_link_status) {
-				bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+			if (member_ethdev->data->dev_link.link_status !=
+					internals->members[i].last_link_status) {
+				bond_ethdev_lsc_event_callback(internals->members[i].port_id,
 						RTE_ETH_EVENT_INTR_LSC,
 						&bonded_ethdev->data->port_id,
 						NULL);
@@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		rte_spinlock_unlock(&internals->lock);
 	}
 
-	if (polling_slave_found)
-		/* Set alarm to continue monitoring link status of slave ethdev's */
+	if (polling_member_found)
+		/* Set alarm to continue monitoring link status of member ethdevs */
 		rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
-				bond_ethdev_slave_link_status_change_monitor, cb_arg);
+				bond_ethdev_member_link_status_change_monitor, cb_arg);
 }
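
The monitor above is a self re-arming callback: it skips the scan when it
cannot take the device lock (configuration in progress) and only stays armed
while at least one member still needs software polling. A reduced sketch of
that shape, with schedule_alarm(), try_lock() and unlock() as hypothetical
stand-ins for rte_eal_alarm_set() and the spinlock calls:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the alarm and lock primitives. */
extern void schedule_alarm(uint64_t usec, void (*cb)(void *), void *arg);
extern bool try_lock(void *lock);
extern void unlock(void *lock);

struct poll_ctx {
	void *lock;
	uint16_t polled_members;             /* members without LSC interrupts */
	uint64_t interval_usec;
	void (*check_links)(struct poll_ctx *ctx);
};

/* Periodic link monitor: do nothing while the device is being reconfigured
 * (lock busy) and keep re-arming as long as any member still needs polling. */
static void
link_poll_cb(void *arg)
{
	struct poll_ctx *ctx = arg;
	bool keep_polling = true;            /* stay armed if the lock was busy */

	if (try_lock(ctx->lock)) {
		keep_polling = ctx->polled_members > 0;
		if (keep_polling)
			ctx->check_links(ctx);
		unlock(ctx->lock);
	}

	if (keep_polling)
		schedule_alarm(ctx->interval_usec, link_poll_cb, ctx);
}
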
 
 static int
@@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
 
 	struct bond_dev_private *bond_ctx;
-	struct rte_eth_link slave_link;
+	struct rte_eth_link member_link;
 
 	bool one_link_update_succeeded;
 	uint32_t idx;
@@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
-			bond_ctx->active_slave_count == 0) {
+			bond_ctx->active_member_count == 0) {
 		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
@@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	case BONDING_MODE_BROADCAST:
 		/**
 		 * Setting link speed to UINT32_MAX to ensure we pick up the
-		 * value of the first active slave
+		 * value of the first active member
 		 */
 		ethdev->data->dev_link.link_speed = UINT32_MAX;
 
 		/**
-		 * link speed is minimum value of all the slaves link speed as
-		 * packet loss will occur on this slave if transmission at rates
+		 * link speed is the minimum of all the members' link speeds, as
+		 * packet loss will occur on a member if transmission at rates
 		 * greater than this are attempted
 		 */
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					  &slave_link);
+		for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+			ret = link_update(bond_ctx->active_members[idx],
+					  &member_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
 					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Member (port %u) link get failed: %s",
+					bond_ctx->active_members[idx],
 					rte_strerror(-ret));
 				return 0;
 			}
 
-			if (slave_link.link_speed <
+			if (member_link.link_speed <
 					ethdev->data->dev_link.link_speed)
 				ethdev->data->dev_link.link_speed =
-						slave_link.link_speed;
+						member_link.link_speed;
 		}
 		break;
 	case BONDING_MODE_ACTIVE_BACKUP:
-		/* Current primary slave */
-		ret = link_update(bond_ctx->current_primary_port, &slave_link);
+		/* Current primary member */
+		ret = link_update(bond_ctx->current_primary_port, &member_link);
 		if (ret < 0) {
-			RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+			RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
 				bond_ctx->current_primary_port,
 				rte_strerror(-ret));
 			return 0;
 		}
 
-		ethdev->data->dev_link.link_speed = slave_link.link_speed;
+		ethdev->data->dev_link.link_speed = member_link.link_speed;
 		break;
 	case BONDING_MODE_8023AD:
 		ethdev->data->dev_link.link_autoneg =
-				bond_ctx->mode4.slave_link.link_autoneg;
+				bond_ctx->mode4.member_link.link_autoneg;
 		ethdev->data->dev_link.link_duplex =
-				bond_ctx->mode4.slave_link.link_duplex;
+				bond_ctx->mode4.member_link.link_duplex;
 		/* fall through */
 		/* to update link speed */
 	case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	default:
 		/**
 		 * In these modes the maximum theoretical link speed is the sum
-		 * of all the slaves
+		 * of all the members
 		 */
 		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					&slave_link);
+		for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+			ret = link_update(bond_ctx->active_members[idx],
+					&member_link);
 			if (ret < 0) {
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Member (port %u) link get failed: %s",
+					bond_ctx->active_members[idx],
 					rte_strerror(-ret));
 				continue;
 			}
 
 			one_link_update_succeeded = true;
 			ethdev->data->dev_link.link_speed +=
-					slave_link.link_speed;
+					member_link.link_speed;
 		}
 
 		if (!one_link_update_succeeded) {
-			RTE_BOND_LOG(ERR, "All slaves link get failed");
+			RTE_BOND_LOG(ERR, "All members link get failed");
 			return 0;
 		}
 	}
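
The reported bonded link speed therefore depends on the mode: broadcast takes
the minimum member speed (every packet goes out on every member), active-backup
reports the current primary's speed, and the load-sharing modes report the sum
of the members. A small runnable sketch of those three aggregation rules, with
made-up speeds:

#include <stdint.h>
#include <stdio.h>

enum policy { AGG_MIN, AGG_PRIMARY, AGG_SUM };

static uint32_t
aggregate_speed(const uint32_t *speed, uint16_t n, uint16_t primary, enum policy p)
{
	uint32_t out;

	switch (p) {
	case AGG_MIN:                      /* broadcast: slowest member limits the bond */
		out = UINT32_MAX;
		for (uint16_t i = 0; i < n; i++)
			if (speed[i] < out)
				out = speed[i];
		return out;
	case AGG_PRIMARY:                  /* active-backup: only the primary carries traffic */
		return speed[primary];
	case AGG_SUM:                      /* round robin, balance, TLB, ALB, 802.3ad */
	default:
		out = 0;
		for (uint16_t i = 0; i < n; i++)
			out += speed[i];
		return out;
	}
}

int main(void)
{
	uint32_t speed[3] = { 10000, 25000, 10000 };    /* Mb/s, illustrative only */

	printf("broadcast:     %u\n", aggregate_speed(speed, 3, 1, AGG_MIN));     /* 10000 */
	printf("active-backup: %u\n", aggregate_speed(speed, 3, 1, AGG_PRIMARY)); /* 25000 */
	printf("balance:       %u\n", aggregate_speed(speed, 3, 1, AGG_SUM));     /* 45000 */
	return 0;
}
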
@@ -2602,27 +2606,27 @@ static int
 bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_eth_stats slave_stats;
+	struct rte_eth_stats member_stats;
 	int i, j;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+	for (i = 0; i < internals->member_count; i++) {
+		rte_eth_stats_get(internals->members[i].port_id, &member_stats);
 
-		stats->ipackets += slave_stats.ipackets;
-		stats->opackets += slave_stats.opackets;
-		stats->ibytes += slave_stats.ibytes;
-		stats->obytes += slave_stats.obytes;
-		stats->imissed += slave_stats.imissed;
-		stats->ierrors += slave_stats.ierrors;
-		stats->oerrors += slave_stats.oerrors;
-		stats->rx_nombuf += slave_stats.rx_nombuf;
+		stats->ipackets += member_stats.ipackets;
+		stats->opackets += member_stats.opackets;
+		stats->ibytes += member_stats.ibytes;
+		stats->obytes += member_stats.obytes;
+		stats->imissed += member_stats.imissed;
+		stats->ierrors += member_stats.ierrors;
+		stats->oerrors += member_stats.oerrors;
+		stats->rx_nombuf += member_stats.rx_nombuf;
 
 		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-			stats->q_ipackets[j] += slave_stats.q_ipackets[j];
-			stats->q_opackets[j] += slave_stats.q_opackets[j];
-			stats->q_ibytes[j] += slave_stats.q_ibytes[j];
-			stats->q_obytes[j] += slave_stats.q_obytes[j];
-			stats->q_errors[j] += slave_stats.q_errors[j];
+			stats->q_ipackets[j] += member_stats.q_ipackets[j];
+			stats->q_opackets[j] += member_stats.q_opackets[j];
+			stats->q_ibytes[j] += member_stats.q_ibytes[j];
+			stats->q_obytes[j] += member_stats.q_obytes[j];
+			stats->q_errors[j] += member_stats.q_errors[j];
 		}
 
 	}
@@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
 	int err;
 	int ret;
 
-	for (i = 0, err = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+	for (i = 0, err = 0; i < internals->member_count; i++) {
+		ret = rte_eth_stats_reset(internals->members[i].port_id);
 		if (ret != 0)
 			err = ret;
 	}
@@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			ret = rte_eth_promiscuous_enable(port_id);
 			if (ret != 0)
@@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
 					BOND_8023AD_FORCED_PROMISC) {
-				slave_ok++;
+				member_ok++;
 				continue;
 			}
 			ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 					"Failed to disable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As promiscuous mode is propagated to all slaves for these
+		/* As promiscuous mode is propagated to all members for these
 		 * mode, no need to update for bonding device.
 		 */
 		break;
@@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As promiscuous mode is propagated only to primary slave
+		/* As promiscuous mode is propagated only to primary member
 		 * for these mode. When active/standby switchover, promiscuous
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary member according to bonding
 		 * device.
 		 */
 		if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			ret = rte_eth_allmulticast_enable(port_id);
 			if (ret != 0)
@@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			uint16_t port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			uint16_t port_id = internals->members[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 					"Failed to disable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * on one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As allmulticast mode is propagated to all slaves for these
+		/* As allmulticast mode is propagated to all members for these
 		 * mode, no need to update for bonding device.
 		 */
 		break;
@@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As allmulticast mode is propagated only to primary slave
+		/* As allmulticast mode is propagated only to primary member
 		 * for these mode. When active/standby switchover, allmulticast
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary member according to bonding
 		 * device.
 		 */
 		if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	int ret;
 
 	uint8_t lsc_flag = 0;
-	int valid_slave = 0;
-	uint16_t active_pos, slave_idx;
+	int valid_member = 0;
+	uint16_t active_pos, member_idx;
 	uint16_t i;
 
 	if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	if (!bonded_eth_dev->data->dev_started)
 		return rc;
 
-	/* verify that port_id is a valid slave of bonded port */
-	for (i = 0; i < internals->slave_count; i++) {
-		if (internals->slaves[i].port_id == port_id) {
-			valid_slave = 1;
-			slave_idx = i;
+	/* verify that port_id is a valid member of bonded port */
+	for (i = 0; i < internals->member_count; i++) {
+		if (internals->members[i].port_id == port_id) {
+			valid_member = 1;
+			member_idx = i;
 			break;
 		}
 	}
 
-	if (!valid_slave)
+	if (!valid_member)
 		return rc;
 
 	/* Synchronize lsc callback parallel calls either by real link event
-	 * from the slaves PMDs or by the bonding PMD itself.
+	 * from the members PMDs or by the bonding PMD itself.
 	 */
 	rte_spinlock_lock(&internals->lsc_lock);
 
 	/* Search for port in active port list */
-	active_pos = find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, port_id);
+	active_pos = find_member_by_id(internals->active_members,
+			internals->active_member_count, port_id);
 
 	ret = rte_eth_link_get_nowait(port_id, &link);
 	if (ret < 0)
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+		RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
 
 	if (ret == 0 && link.link_status) {
-		if (active_pos < internals->active_slave_count)
+		if (active_pos < internals->active_member_count)
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
 		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
-					     "for slave %d in bonding mode %d",
+					     "for member %d in bonding mode %d",
 					     port_id, internals->mode);
 		} else {
-			/* inherit slave link properties */
+			/* inherit member link properties */
 			link_properties_set(bonded_eth_dev, &link);
 		}
 
-		/* If no active slave ports then set this port to be
+		/* If no active member ports then set this port to be
 		 * the primary port.
 		 */
-		if (internals->active_slave_count < 1) {
-			/* If first active slave, then change link status */
+		if (internals->active_member_count < 1) {
+			/* If first active member, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
 								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_members_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
 
-		activate_slave(bonded_eth_dev, port_id);
+		activate_member(bonded_eth_dev, port_id);
 
 		/* If the user has defined the primary port then default to
 		 * using it.
@@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 				internals->primary_port == port_id)
 			bond_ethdev_primary_set(internals, port_id);
 	} else {
-		if (active_pos == internals->active_slave_count)
+		if (active_pos == internals->active_member_count)
 			goto link_update;
 
-		/* Remove from active slave list */
-		deactivate_slave(bonded_eth_dev, port_id);
+		/* Remove from active member list */
+		deactivate_member(bonded_eth_dev, port_id);
 
-		if (internals->active_slave_count < 1)
+		if (internals->active_member_count < 1)
 			lsc_flag = 1;
 
-		/* Update primary id, take first active slave from list or if none
+		/* Update primary id, take first active member from list or if none
 		 * available set to -1 */
 		if (port_id == internals->current_primary_port) {
-			if (internals->active_slave_count > 0)
+			if (internals->active_member_count > 0)
 				bond_ethdev_primary_set(internals,
-						internals->active_slaves[0]);
+						internals->active_members[0]);
 			else
 				internals->current_primary_port = internals->primary_port;
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_members_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
@@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 link_update:
 	/**
 	 * Update bonded device link properties after any change to active
-	 * slaves
+	 * members
 	 */
 	bond_ethdev_link_update(bonded_eth_dev, 0);
-	internals->slaves[slave_idx].last_link_status = link.link_status;
+	internals->members[member_idx].last_link_status = link.link_status;
 
 	if (lsc_flag) {
 		/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 {
 	unsigned i, j;
 	int result = 0;
-	int slave_reta_size;
+	int member_reta_size;
 	unsigned reta_count;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
@@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
 				sizeof(internals->reta_conf[0]) * reta_count);
 
-	/* Propagate RETA over slaves */
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_reta_size = internals->slaves[i].reta_size;
-		result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
-				&internals->reta_conf[0], slave_reta_size);
+	/* Propagate RETA over members */
+	for (i = 0; i < internals->member_count; i++) {
+		member_reta_size = internals->members[i].reta_size;
+		result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
+				&internals->reta_conf[0], member_reta_size);
 		if (result < 0)
 			return result;
 	}
@@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
 		bond_rss_conf.rss_key_len = internals->rss_key_len;
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
 				&bond_rss_conf);
 		if (result < 0)
 			return result;
@@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int
 bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mtu_set == NULL) {
 			rte_spinlock_unlock(&internals->lock);
 			return -ENOTSUP;
 		}
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
 		if (ret < 0) {
 			rte_spinlock_unlock(&internals->lock);
 			return ret;
@@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 			struct rte_ether_addr *mac_addr,
 			__rte_unused uint32_t index, uint32_t vmdq)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
-			 *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
+			 *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
 			ret = -ENOTSUP;
 			goto end;
 		}
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
 				mac_addr, vmdq);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i >= 0; i--)
 				rte_eth_dev_mac_addr_remove(
-					internals->slaves[i].port_id, mac_addr);
+					internals->members[i].port_id, mac_addr);
 			goto end;
 		}
 	}
@@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 static void
 bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
 			goto end;
 	}
 
 	struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
 
-	for (i = 0; i < internals->slave_count; i++)
-		rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++)
+		rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
 				mac_addr);
 
 end:
@@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
 		fprintf(f, "\n");
 	}
 
-	if (internals->slave_count > 0) {
-		fprintf(f, "\tSlaves (%u): [", internals->slave_count);
-		for (i = 0; i < internals->slave_count - 1; i++)
-			fprintf(f, "%u ", internals->slaves[i].port_id);
+	if (internals->member_count > 0) {
+		fprintf(f, "\tMembers (%u): [", internals->member_count);
+		for (i = 0; i < internals->member_count - 1; i++)
+			fprintf(f, "%u ", internals->members[i].port_id);
 
-		fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+		fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
 	} else {
-		fprintf(f, "\tSlaves: []\n");
+		fprintf(f, "\tMembers: []\n");
 	}
 
-	if (internals->active_slave_count > 0) {
-		fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
-		for (i = 0; i < internals->active_slave_count - 1; i++)
-			fprintf(f, "%u ", internals->active_slaves[i]);
+	if (internals->active_member_count > 0) {
+		fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
+		for (i = 0; i < internals->active_member_count - 1; i++)
+			fprintf(f, "%u ", internals->active_members[i]);
 
-		fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+		fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
 
 	} else {
-		fprintf(f, "\tActive Slaves: []\n");
+		fprintf(f, "\tActive Members: []\n");
 	}
 
 	if (internals->user_defined_primary_port)
 		fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
-	if (internals->slave_count > 0)
+	if (internals->member_count > 0)
 		fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
 }
 
@@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
 }
 
 static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
 {
 	char a_state[256] = { 0 };
 	char p_state[256] = { 0 };
@@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
 static void
 dump_lacp(uint16_t port_id, FILE *f)
 {
-	struct rte_eth_bond_8023ad_slave_info slave_info;
+	struct rte_eth_bond_8023ad_member_info member_info;
 	struct rte_eth_bond_8023ad_conf port_conf;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	int num_active_slaves;
+	uint16_t members[RTE_MAX_ETHPORTS];
+	int num_active_members;
 	int i, ret;
 
 	fprintf(f, "  - Lacp info:\n");
 
-	num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+	num_active_members = rte_eth_bond_active_members_get(port_id, members,
 			RTE_MAX_ETHPORTS);
-	if (num_active_slaves < 0) {
-		fprintf(f, "\tFailed to get active slave list for port %u\n",
+	if (num_active_members < 0) {
+		fprintf(f, "\tFailed to get active member list for port %u\n",
 				port_id);
 		return;
 	}
@@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
 	}
 	dump_lacp_conf(&port_conf, f);
 
-	for (i = 0; i < num_active_slaves; i++) {
-		ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
-				&slave_info);
+	for (i = 0; i < num_active_members; i++) {
+		ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
+				&member_info);
 		if (ret) {
-			fprintf(f, "\tGet slave device %u 8023ad info failed\n",
-				slaves[i]);
+			fprintf(f, "\tGet member device %u 8023ad info failed\n",
+				members[i]);
 			return;
 		}
-		fprintf(f, "\tSlave Port: %u\n", slaves[i]);
-		dump_lacp_slave(&slave_info, f);
+		fprintf(f, "\tMember Port: %u\n", members[i]);
+		dump_lacp_member(&member_info, f);
 	}
 }
 
@@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->link_down_delay_ms = 0;
 	internals->link_up_delay_ms = 0;
 
-	internals->slave_count = 0;
-	internals->active_slave_count = 0;
+	internals->member_count = 0;
+	internals->active_member_count = 0;
 	internals->rx_offload_capa = 0;
 	internals->tx_offload_capa = 0;
 	internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->rx_desc_lim.nb_align = 1;
 	internals->tx_desc_lim.nb_align = 1;
 
-	memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
-	memset(internals->slaves, 0, sizeof(internals->slaves));
+	memset(internals->active_members, 0, sizeof(internals->active_members));
+	memset(internals->members, 0, sizeof(internals->members));
 
 	TAILQ_INIT(&internals->flow_list);
 	internals->flow_isolated_valid = 0;
@@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
 	/* Parse link bonding mode */
 	if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
-				&bond_ethdev_parse_slave_mode_kvarg,
+				&bond_ethdev_parse_member_mode_kvarg,
 				&bonding_mode) != 0) {
 			RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
 					name);
@@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				PMD_BOND_AGG_MODE_KVARG,
-				&bond_ethdev_parse_slave_agg_mode_kvarg,
+				&bond_ethdev_parse_member_agg_mode_kvarg,
 				&agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 					"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
 	RTE_ASSERT(eth_dev->device == &dev->device);
 
 	internals = eth_dev->data->dev_private;
-	if (internals->slave_count != 0)
+	if (internals->member_count != 0)
 		return -EBUSY;
 
 	if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
 	return ret;
 }
 
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the member portids after all the other pdev and vdev
  * have been allocated */
 static int
 bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		if ((link_speeds &
 		    (internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
-			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
 			return -EINVAL;
 		}
 		/*
@@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				       PMD_BOND_AGG_MODE_KVARG,
-				       &bond_ethdev_parse_slave_agg_mode_kvarg,
+				       &bond_ethdev_parse_member_agg_mode_kvarg,
 				       &agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	/* Parse/add slave ports to bonded device */
-	if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
-		struct bond_ethdev_slave_ports slave_ports;
+	/* Parse/add member ports to bonded device */
+	if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
+		struct bond_ethdev_member_ports member_ports;
 		unsigned i;
 
-		memset(&slave_ports, 0, sizeof(slave_ports));
+		memset(&member_ports, 0, sizeof(member_ports));
 
-		if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
-				       &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+		if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
+				       &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to parse slave ports for bonded device %s",
+				     "Failed to parse member ports for bonded device %s",
 				     name);
 			return -1;
 		}
 
-		for (i = 0; i < slave_ports.slave_count; i++) {
-			if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+		for (i = 0; i < member_ports.member_count; i++) {
+			if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
 				RTE_BOND_LOG(ERR,
-					     "Failed to add port %d as slave to bonded device %s",
-					     slave_ports.slaves[i], name);
+					     "Failed to add port %d as member to bonded device %s",
+					     member_ports.members[i], name);
 			}
 		}
 
 	} else {
-		RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+		RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
 		return -1;
 	}
 
-	/* Parse/set primary slave port id*/
-	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+	/* Parse/set primary member port id*/
+	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
 	if (arg_count == 1) {
-		uint16_t primary_slave_port_id;
+		uint16_t primary_member_port_id;
 
 		if (rte_kvargs_process(kvlist,
-				       PMD_BOND_PRIMARY_SLAVE_KVARG,
-				       &bond_ethdev_parse_primary_slave_port_id_kvarg,
-				       &primary_slave_port_id) < 0) {
+				       PMD_BOND_PRIMARY_MEMBER_KVARG,
+				       &bond_ethdev_parse_primary_member_port_id_kvarg,
+				       &primary_member_port_id) < 0) {
 			RTE_BOND_LOG(INFO,
-				     "Invalid primary slave port id specified for bonded device %s",
+				     "Invalid primary member port id specified for bonded device %s",
 				     name);
 			return -1;
 		}
 
 		/* Set balance mode transmit policy*/
-		if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+		if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
 		    != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to set primary slave port %d on bonded device %s",
-				     primary_slave_port_id, name);
+				     "Failed to set primary member port %d on bonded device %s",
+				     primary_member_port_id, name);
 			return -1;
 		}
 	} else if (arg_count > 1) {
 		RTE_BOND_LOG(INFO,
-			     "Primary slave can be specified only once for bonded device %s",
+			     "Primary member can be specified only once for bonded device %s",
 			     name);
 		return -1;
 	}
@@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	/* configure slaves so we can pass mtu setting */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(dev, slave_ethdev) != 0) {
+	/* configure members so we can pass mtu setting */
+	for (i = 0; i < internals->member_count; i++) {
+		struct rte_eth_dev *member_ethdev =
+				&(rte_eth_devices[internals->members[i].port_id]);
+		if (member_configure(dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to configure slave device (%d)",
+				"bonded port (%d) failed to configure member device (%d)",
 				dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			return -1;
 		}
 	}
@@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
 RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
 
 RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
-	"slave=<ifc> "
+	"member=<ifc> "
 	"primary=<ifc> "
 	"mode=[0-6] "
 	"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e..56bc143a89 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -12,8 +12,6 @@ DPDK_23 {
 	rte_eth_bond_8023ad_ext_distrib_get;
 	rte_eth_bond_8023ad_ext_slowtx;
 	rte_eth_bond_8023ad_setup;
-	rte_eth_bond_8023ad_slave_info;
-	rte_eth_bond_active_slaves_get;
 	rte_eth_bond_create;
 	rte_eth_bond_free;
 	rte_eth_bond_link_monitoring_set;
@@ -23,11 +21,18 @@ DPDK_23 {
 	rte_eth_bond_mode_set;
 	rte_eth_bond_primary_get;
 	rte_eth_bond_primary_set;
-	rte_eth_bond_slave_add;
-	rte_eth_bond_slave_remove;
-	rte_eth_bond_slaves_get;
 	rte_eth_bond_xmit_policy_get;
 	rte_eth_bond_xmit_policy_set;
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	# added in 23.07
+	global:
+	rte_eth_bond_8023ad_member_info;
+	rte_eth_bond_active_members_get;
+	rte_eth_bond_member_add;
+	rte_eth_bond_member_remove;
+	rte_eth_bond_members_get;
+};
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39f..90f422ec11 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
 		":%02"PRIx8":%02"PRIx8":%02"PRIx8,	\
 		RTE_ETHER_ADDR_BYTES(&addr))
 
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t members[RTE_MAX_ETHPORTS];
+uint16_t members_count;
 
 static uint16_t BOND_PORT = 0xffff;
 
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
 };
 
 static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 {
 	int retval;
 	uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 		rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
 				"failed (res=%d)\n", BOND_PORT, retval);
 
-	for (i = 0; i < slaves_count; i++) {
-		if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
-			rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
-					slaves[i], BOND_PORT);
+	for (i = 0; i < members_count; i++) {
+		if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
+			rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
+					members[i], BOND_PORT);
 
 	}
 
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 	if (retval < 0)
 		rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
 
-	printf("Waiting for slaves to become active...");
+	printf("Waiting for members to become active...");
 	while (wait_counter) {
-		uint16_t act_slaves[16] = {0};
-		if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
-				slaves_count) {
+		uint16_t act_members[16] = {0};
+		if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
+				members_count) {
 			printf("\n");
 			break;
 		}
 		sleep(1);
 		printf("...");
 		if (--wait_counter == 0)
-			rte_exit(-1, "\nFailed to activate slaves\n");
+			rte_exit(-1, "\nFailed to activate members\n");
 	}
 
 	retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 			"send IP	- sends one ARPrequest through bonding for IP.\n"
 			"start		- starts listening ARPs.\n"
 			"stop		- stops lcore_main.\n"
-			"show		- shows some bond info: ex. active slaves etc.\n"
+			"show		- shows some bond info: ex. active members etc.\n"
 			"help		- prints help.\n"
 			"quit		- terminate all threads and quit.\n"
 		       );
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 			    struct cmdline *cl,
 			    __rte_unused void *data)
 {
-	uint16_t slaves[16] = {0};
+	uint16_t members[16] = {0};
 	uint8_t len = 16;
 	struct rte_ether_addr addr;
 	uint16_t i;
 	int ret;
 
-	for (i = 0; i < slaves_count; i++) {
+	for (i = 0; i < members_count; i++) {
 		ret = rte_eth_macaddr_get(i, &addr);
 		if (ret != 0) {
 			cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 
 	rte_spinlock_lock(&global_flag_stru_p->lock);
 	cmdline_printf(cl,
-			"Active_slaves:%d "
+			"Active_members:%d "
 			"packets received:Tot:%d Arp:%d IPv4:%d\n",
-			rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+			rte_eth_bond_active_members_get(BOND_PORT, members, len),
 			global_flag_stru_p->port_packets[0],
 			global_flag_stru_p->port_packets[1],
 			global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
 	/* initialize all ports */
-	slaves_count = nb_ports;
+	members_count = nb_ports;
 	RTE_ETH_FOREACH_DEV(i) {
-		slave_port_init(i, mbuf_pool);
-		slaves[i] = i;
+		member_port_init(i, mbuf_pool);
+		members[i] = i;
 	}
 
 	bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..85439e3a41 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,13 @@ struct rte_eth_dev_owner {
 #define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE  RTE_BIT32(0)
 /** Device supports link state interrupt */
 #define RTE_ETH_DEV_INTR_LSC              RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE          RTE_BIT32(2)
+/** Device is a bonded member */
+#define RTE_ETH_DEV_BONDED_MEMBER          RTE_BIT32(2)
+#define RTE_ETH_DEV_BONDED_SLAVE                         \
+	do {                                             \
+		RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) \
+		RTE_ETH_DEV_BONDED_MEMBER                \
+	} while (0)
 /** Device supports device removal interrupt */
 #define RTE_ETH_DEV_INTR_RMV              RTE_BIT32(3)
 /** Device is port representor */
-- 
2.39.1


^ permalink raw reply	[relevance 1%]

* [PATCH v2] net/bonding: replace master/slave to main/member
    2023-05-17 14:52  1% ` Stephen Hemminger
@ 2023-05-18  6:32  1% ` Chaoyong He
  2023-05-18  7:01  1%   ` [PATCH v3] " Chaoyong He
  1 sibling, 1 reply; 200+ results
From: Chaoyong He @ 2023-05-18  6:32 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw

From: Long Wu <long.wu@corigine.com>

This patch replaces the usage of the word 'master/slave' with the more
appropriate 'main/member' in the bonding PMD as well as in its docs
and examples. The test app and testpmd were also modified to use the
new wording.

The bonding PMD's public API was renamed to match the new wording:
rte_eth_bond_8023ad_slave_info is now called
rte_eth_bond_8023ad_member_info,
rte_eth_bond_active_slaves_get is now called
rte_eth_bond_active_members_get,
rte_eth_bond_slave_add is now called
rte_eth_bond_member_add,
rte_eth_bond_slave_remove is now called
rte_eth_bond_member_remove,
rte_eth_bond_slaves_get is now called
rte_eth_bond_members_get.

Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
RTE_ETH_DEV_BONDED_MEMBER.

Mark the old visible APIs as deprecated and remove them
from the ABI.

Signed-off-by: Long Wu <long.wu@corigine.com>
Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: James Hershaw <james.hershaw@corigine.com>
---
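For reference, a minimal usage sketch of the renamed getter follows (not
part of this patch; the helper name and error handling are illustrative
only, the bonding API signatures are those used elsewhere in the diff):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* List the member ports of a bonding device
     * (previously rte_eth_bond_slaves_get()).
     */
    static int
    print_bond_members(uint16_t bonding_port_id)
    {
    	uint16_t members[RTE_MAX_ETHPORTS];
    	int num_members;
    	int i;

    	num_members = rte_eth_bond_members_get(bonding_port_id, members,
    						RTE_MAX_ETHPORTS);
    	if (num_members < 0)
    		return num_members;

    	for (i = 0; i < num_members; i++)
    		printf("member port %u\n", members[i]);

    	return 0;
    }
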
 app/test-pmd/testpmd.c                        |  112 +-
 app/test-pmd/testpmd.h                        |    8 +-
 app/test/test_link_bonding.c                  | 2792 +++++++++--------
 app/test/test_link_bonding_mode4.c            |  588 ++--
 app/test/test_link_bonding_rssconf.c          |  166 +-
 doc/guides/howto/lm_bond_virtio_sriov.rst     |   24 +-
 doc/guides/nics/bnxt.rst                      |    4 +-
 doc/guides/prog_guide/img/bond-mode-1.svg     |    2 +-
 .../link_bonding_poll_mode_drv_lib.rst        |  222 +-
 drivers/net/bonding/bonding_testpmd.c         |  178 +-
 drivers/net/bonding/eth_bond_8023ad_private.h |   40 +-
 drivers/net/bonding/eth_bond_private.h        |  108 +-
 drivers/net/bonding/rte_eth_bond.h            |  126 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  372 +--
 drivers/net/bonding/rte_eth_bond_8023ad.h     |   75 +-
 drivers/net/bonding/rte_eth_bond_alb.c        |   44 +-
 drivers/net/bonding/rte_eth_bond_alb.h        |   20 +-
 drivers/net/bonding/rte_eth_bond_api.c        |  474 +--
 drivers/net/bonding/rte_eth_bond_args.c       |   32 +-
 drivers/net/bonding/rte_eth_bond_flow.c       |   54 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        | 1384 ++++----
 drivers/net/bonding/version.map               |   15 +-
 examples/bond/main.c                          |   40 +-
 lib/ethdev/rte_ethdev.h                       |    9 +-
 24 files changed, 3505 insertions(+), 3384 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f92523..d8fd87105a 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 }
 
 static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
 {
 #ifdef RTE_NET_BOND
 
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
+	portid_t member_pids[RTE_MAX_ETHPORTS];
 	struct rte_port *port;
-	int num_slaves;
-	portid_t slave_pid;
+	int num_members;
+	portid_t member_pid;
 	int i;
 
-	num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+	num_members = rte_eth_bond_members_get(bond_pid, member_pids,
 						RTE_MAX_ETHPORTS);
-	if (num_slaves < 0) {
-		fprintf(stderr, "Failed to get slave list for port = %u\n",
+	if (num_members < 0) {
+		fprintf(stderr, "Failed to get member list for port = %u\n",
 			bond_pid);
-		return num_slaves;
+		return num_members;
 	}
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		port = &ports[slave_pid];
+	for (i = 0; i < num_members; i++) {
+		member_pid = member_pids[i];
+		port = &ports[member_pid];
 		port->port_status =
 			is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
 	}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Starting a bonded port also starts all slaves under the bonded
+		 * Starting a bonded port also starts all members under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these members.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, false);
+			return change_bonding_member_port_status(port_id, false);
 	}
 
 	return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Stopping a bonded port also stops all slaves under the bonded
+		 * Stopping a bonded port also stops all members under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these members.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, true);
+			return change_bonding_member_port_status(port_id, true);
 	}
 
 	return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
 		port = &ports[pi];
 		/* Check if there is a port which is not started */
 		if ((port->port_status != RTE_PORT_STARTED) &&
-			(port->slave_flag == 0))
+			(port->member_flag == 0))
 			return 0;
 	}
 
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
 	struct rte_port *port = &ports[port_id];
 
 	if ((port->port_status != RTE_PORT_STOPPED) &&
-	    (port->slave_flag == 0))
+	    (port->member_flag == 0))
 		return 0;
 	return 1;
 }
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
 
 /*
  * Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no member is added. And its capability
+ * will be updated when add a new member device. So adding a member device need
  * to update the port configurations of bonding device.
  */
 static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
 		if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
 			continue;
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
 }
 
 static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
 {
 	struct rte_port *port;
-	portid_t slave_pid;
+	portid_t member_pid;
 	uint16_t i;
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		if (port_is_started(slave_pid) == 1) {
-			if (rte_eth_dev_stop(slave_pid) != 0)
+	for (i = 0; i < num_members; i++) {
+		member_pid = member_pids[i];
+		if (port_is_started(member_pid) == 1) {
+			if (rte_eth_dev_stop(member_pid) != 0)
 				fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
-					slave_pid);
+					member_pid);
 
-			port = &ports[slave_pid];
+			port = &ports[member_pid];
 			port->port_status = RTE_PORT_STOPPED;
 		}
 
-		clear_port_slave_flag(slave_pid);
+		clear_port_member_flag(member_pid);
 
-		/* Close slave device when testpmd quit or is killed. */
+		/* Close member device when testpmd quit or is killed. */
 		if (cl_quit == 1 || f_quit == 1)
-			rte_eth_dev_close(slave_pid);
+			rte_eth_dev_close(member_pid);
 	}
 }
 
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
 {
 	portid_t pi;
 	struct rte_port *port;
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
-	int num_slaves = 0;
+	portid_t member_pids[RTE_MAX_ETHPORTS];
+	int num_members = 0;
 
 	if (port_id_is_invalid(pid, ENABLED_WARN))
 		return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
 			flush_port_owned_resources(pi);
 #ifdef RTE_NET_BOND
 			if (port->bond_flag == 1)
-				num_slaves = rte_eth_bond_slaves_get(pi,
-						slave_pids, RTE_MAX_ETHPORTS);
+				num_members = rte_eth_bond_members_get(pi,
+						member_pids, RTE_MAX_ETHPORTS);
 #endif
 			rte_eth_dev_close(pi);
 			/*
-			 * If this port is bonded device, all slaves under the
+			 * If this port is bonded device, all members under the
 			 * device need to be removed or closed.
 			 */
-			if (port->bond_flag == 1 && num_slaves > 0)
-				clear_bonding_slave_device(slave_pids,
-							num_slaves);
+			if (port->bond_flag == 1 && num_members > 0)
+				clear_bonding_member_device(member_pids,
+							num_members);
 		}
 
 		free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_member(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
 	}
 }
 
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_member_flag(portid_t member_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 1;
+	port = &ports[member_pid];
+	port->member_flag = 1;
 }
 
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_member_flag(portid_t member_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 0;
+	port = &ports[member_pid];
+	port->member_flag = 0;
 }
 
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_member(portid_t member_pid)
 {
 	struct rte_port *port;
 	struct rte_eth_dev_info dev_info;
 	int ret;
 
-	port = &ports[slave_pid];
-	ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+	port = &ports[member_pid];
+	ret = eth_dev_info_get_print_err(member_pid, &dev_info);
 	if (ret != 0) {
 		TESTPMD_LOG(ERR,
 			"Failed to get device info for port id %d,"
-			"cannot determine if the port is a bonded slave",
-			slave_pid);
+			"cannot determine if the port is a bonded member",
+			member_pid);
 		return 0;
 	}
-	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_MEMBER) || (port->member_flag == 1))
 		return 1;
 	return 0;
 }
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3..7bc2f70323 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
-	uint8_t                 slave_flag : 1, /**< bonding slave port */
+	uint8_t                 member_flag : 1, /**< bonding member port */
 				bond_flag : 1, /**< port is bond device */
 				fwd_mac_swap : 1, /**< swap packet MAC before forward */
 				update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
 void dev_set_link_up(portid_t pid);
 void dev_set_link_down(portid_t pid);
 void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_member_flag(portid_t member_pid);
+void clear_port_member_flag(portid_t member_pid);
+uint8_t port_is_bonding_member(portid_t member_pid);
 
 int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
 		     enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2..82daf037f1 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
 #define INVALID_BONDING_MODE	(-1)
 
 
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
 uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
 
 struct link_bonding_unittest_params {
 	int16_t bonded_port_id;
-	int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
-	uint16_t bonded_slave_count;
+	int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+	uint16_t bonded_member_count;
 	uint8_t bonding_mode;
 
 	uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
 
 	struct rte_mempool *mbuf_pool;
 
-	struct rte_ether_addr *default_slave_mac;
+	struct rte_ether_addr *default_member_mac;
 	struct rte_ether_addr *default_bonded_mac;
 
 	/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
 
 static struct link_bonding_unittest_params default_params  = {
 	.bonded_port_id = -1,
-	.slave_port_ids = { -1 },
-	.bonded_slave_count = 0,
+	.member_port_ids = { -1 },
+	.bonded_member_count = 0,
 	.bonding_mode = BONDING_MODE_ROUND_ROBIN,
 
 	.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params  = {
 
 	.mbuf_pool = NULL,
 
-	.default_slave_mac = (struct rte_ether_addr *)slave_mac,
+	.default_member_mac = (struct rte_ether_addr *)member_mac,
 	.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
 
 	.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
 	return 0;
 }
 
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int members_initialized;
+static int mac_members_initialized;
 
 static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
 test_setup(void)
 {
 	int i, nb_mbuf_per_pool;
-	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
 
 	/* Allocate ethernet packet header with space for VLAN header */
 	if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
 	}
 
 	/* Create / Initialize virtual eth devs */
-	if (!slaves_initialized) {
+	if (!members_initialized) {
 		for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
@@ -243,16 +243,16 @@ test_setup(void)
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
 
-			test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+			test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
 					mac_addr, rte_socket_id(), 1);
-			TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+			TEST_ASSERT(test_params->member_port_ids[i] >= 0,
 					"Failed to create virtual virtual ethdev %s", pmd_name);
 
 			TEST_ASSERT_SUCCESS(configure_ethdev(
-					test_params->slave_port_ids[i], 1, 0),
+					test_params->member_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s", pmd_name);
 		}
-		slaves_initialized = 1;
+		members_initialized = 1;
 	}
 
 	return 0;
@@ -261,9 +261,9 @@ test_setup(void)
 static int
 test_create_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	/* Don't try to recreate bonded device if re-running test suite*/
 	if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
 			test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
 			test_params->bonded_port_id, test_params->bonding_mode);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of members %d is great than expected %d.",
+			current_member_count, 0);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of active members %d is great than expected %d.",
+			current_member_count, 0);
 
 	return 0;
 }
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
 }
 
 static int
-test_add_slave_to_bonded_device(void)
+test_add_member_to_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave (%d) to bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count]),
+			"Failed to add member (%d) to bonded port (%d).",
+			test_params->member_port_ids[test_params->bonded_member_count],
 			test_params->bonded_port_id);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
-			"Number of slaves (%d) is greater than expected (%d).",
-			current_slave_count, test_params->bonded_slave_count + 1);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
+			"Number of members (%d) is greater than expected (%d).",
+			current_member_count, test_params->bonded_member_count + 1);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-					"Number of active slaves (%d) is not as expected (%d).\n",
-					current_slave_count, 0);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+					"Number of active members (%d) is not as expected (%d).\n",
+					current_member_count, 0);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_member_count++;
 
 	return 0;
 }
 
 static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_member_to_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
+			test_params->member_port_ids[test_params->bonded_member_count]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
+			test_params->member_port_ids[test_params->bonded_member_count]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
 
 
 static int
-test_remove_slave_from_bonded_device(void)
+test_remove_member_from_bonded_device(void)
 {
-	int current_slave_count;
+	int current_member_count;
 	struct rte_ether_addr read_mac_addr, *mac_addr;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]),
-			"Failed to remove slave %d from bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count-1]),
+			"Failed to remove member %d from bonded port (%d).",
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			test_params->bonded_port_id);
 
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
-			"Number of slaves (%d) is great than expected (%d).\n",
-			current_slave_count, test_params->bonded_slave_count - 1);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
+			"Number of members (%d) is great than expected (%d).\n",
+			current_member_count, test_params->bonded_member_count - 1);
 
 
-	mac_addr = (struct rte_ether_addr *)slave_mac;
+	mac_addr = (struct rte_ether_addr *)member_mac;
 	mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
-			test_params->bonded_slave_count-1;
+			test_params->bonded_member_count-1;
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			&read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->member_port_ids[test_params->bonded_member_count-1]);
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->member_port_ids[test_params->bonded_member_count-1]);
 
 	virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
 			0);
 
-	test_params->bonded_slave_count--;
+	test_params->bonded_member_count--;
 
 	return 0;
 }
 
 static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_member_from_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+	TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
 			test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+			test_params->member_port_ids[test_params->bonded_member_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
-			test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+	TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
+			test_params->member_port_ids[0],
+			test_params->member_port_ids[test_params->bonded_member_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
 static int bonded_id = 2;
 
 static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_member_to_bonded_device(void)
 {
-	int port_id, current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int port_id, current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 	char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-	test_add_slave_to_bonded_device();
+	test_add_member_to_bonded_device();
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 1,
-			"Number of slaves (%d) is not that expected (%d).",
-			current_slave_count, 1);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 1,
+			"Number of members (%d) is not that expected (%d).",
+			current_member_count, 1);
 
 	snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
 
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
 			rte_socket_id());
 	TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
 
-	TEST_ASSERT(rte_eth_bond_slave_add(port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+	TEST_ASSERT(rte_eth_bond_member_add(port_id,
+			test_params->member_port_ids[test_params->bonded_member_count - 1])
 			< 0,
-			"Added slave (%d) to bonded port (%d) unexpectedly.",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			"Added member (%d) to bonded port (%d) unexpectedly.",
+			test_params->member_port_ids[test_params->bonded_member_count-1],
 			port_id);
 
-	return test_remove_slave_from_bonded_device();
+	return test_remove_member_from_bonded_device();
 }
 
 
 static int
-test_get_slaves_from_bonded_device(void)
+test_get_members_from_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+			"Failed to add member to bonded device");
 
 	/* Invalid port id */
-	current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+	current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	/* Invalid slaves pointer */
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+	/* Invalid members pointer */
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
 			NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_member_count < 0,
+			"Invalid member array unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
+	current_member_count = rte_eth_bond_active_members_get(
 			test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_member_count < 0,
+			"Invalid member array unexpectedly succeeded");
 
 	/* non bonded device*/
-	current_slave_count = rte_eth_bond_slaves_get(
-			test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_members_get(
+			test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->slave_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->member_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_member_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-			"Failed to remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+			"Failed to remove members from bonded device");
 
 	return 0;
 }
 
 
 static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_members_to_from_bonded_device(void)
 {
 	int i;
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device");
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device");
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"Failed to remove slaves from bonded device");
+		TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+				"Failed to remove members from bonded device");
 
 	return 0;
 }
 
 static void
-enable_bonded_slaves(void)
+enable_bonded_members(void)
 {
 	int i;
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
 				1);
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->member_port_ids[i], 1);
 	}
 }
 
@@ -556,34 +556,36 @@ test_start_bonded_device(void)
 {
 	struct rte_eth_link link_status;
 
-	int current_slave_count, current_bonding_mode, primary_port;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count, current_bonding_mode, primary_port;
+	uint16_t members[RTE_MAX_ETHPORTS];
 	int retval;
 
-	/* Add slave to bonded device*/
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	/* Add member to bonded device*/
+	TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+			"Failed to add member to bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
-	/* Change link status of virtual pmd so it will be added to the active
-	 * slave list of the bonded device*/
+	/*
+	 * Change link status of virtual pmd so it will be added to the active
+	 * member list of the bonded device.
+	 */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+			test_params->member_port_ids[test_params->bonded_member_count-1], 1);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of active members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
 	current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
 	TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +593,9 @@ test_start_bonded_device(void)
 			current_bonding_mode, test_params->bonding_mode);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port (%d) is not expected value (%d).",
-			primary_port, test_params->slave_port_ids[0]);
+			primary_port, test_params->member_port_ids[0]);
 
 	retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
 	TEST_ASSERT(retval >= 0,
@@ -609,8 +611,8 @@ test_start_bonded_device(void)
 static int
 test_stop_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	struct rte_eth_link link_status;
 	int retval;
@@ -627,29 +629,29 @@ test_stop_bonded_device(void)
 			"Bonded port (%d) status (%d) is not expected value (%d).",
 			test_params->bonded_port_id, link_status.link_status, 0);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+			"Number of members (%d) is not expected value (%d).",
+			current_member_count, test_params->bonded_member_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, 0);
+	current_member_count = rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_member_count, 0,
+			"Number of active members (%d) is not expected value (%d).",
+			current_member_count, 0);
 
 	return 0;
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	/* Clean up and remove slaves from bonded device */
+	/* Clean up and remove members from bonded device */
 	free_virtualpmd_tx_queue();
-	while (test_params->bonded_slave_count > 0)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"test_remove_slave_from_bonded_device failed");
+	while (test_params->bonded_member_count > 0)
+		TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+				"test_remove_member_from_bonded_device failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -681,10 +683,10 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
 				bonding_modes[i]),
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->member_port_ids[0]);
 
 		TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 				bonding_modes[i]),
@@ -704,26 +706,26 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+		bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
 		TEST_ASSERT(bonding_mode < 0,
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->member_port_ids[0]);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_set_primary_slave(void)
+test_set_primary_member(void)
 {
 	int i, j, retval;
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr *expected_mac_addr;
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.");
+	/* Add 4 members to bonded device */
+	for (i = test_params->bonded_member_count; i < 4; i++)
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 			BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +734,34 @@ test_set_primary_slave(void)
 
 	/* Invalid port ID */
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
-			test_params->slave_port_ids[i]),
+			test_params->member_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
-			test_params->slave_port_ids[i]),
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
+			test_params->member_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
-	/* Set slave as primary
-	 * Verify slave it is now primary slave
-	 * Verify that MAC address of bonded device is that of primary slave
-	 * Verify that MAC address of all bonded slaves are that of primary slave
+	/* Set member as primary
+	 * Verify that it is now the primary member
+	 * Verify that MAC address of bonded device is that of primary member
+	 * Verify that MAC addresses of all bonded members are that of primary member
 	 */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-				test_params->slave_port_ids[i]),
+				test_params->member_port_ids[i]),
 				"Failed to set bonded port (%d) primary port to (%d)",
-				test_params->bonded_port_id, test_params->slave_port_ids[i]);
+				test_params->bonded_port_id, test_params->member_port_ids[i]);
 
 		retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
 		TEST_ASSERT(retval >= 0,
 				"Failed to read primary port from bonded port (%d)\n",
 					test_params->bonded_port_id);
 
-		TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+		TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
 				"Bonded port (%d) primary port (%d) not expected value (%d)\n",
 				test_params->bonded_port_id, retval,
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 
 		/* stop/start bonded eth dev to apply new MAC */
 		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +772,14 @@ test_set_primary_slave(void)
 				"Failed to start bonded port %d",
 				test_params->bonded_port_id);
 
-		expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+		expected_mac_addr = (struct rte_ether_addr *)&member_mac;
 		expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Check primary slave MAC */
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		/* Check primary member MAC */
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
@@ -789,16 +792,17 @@ test_set_primary_slave(void)
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
 
-		/* Check other slaves MACs */
+		/* Check other members MACs */
 		for (j = 0; j < 4; j++) {
 			if (j != i) {
-				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
+						test_params->member_port_ids[j],
 						&read_mac_addr),
 						"Failed to get mac address (port %d)",
-						test_params->slave_port_ids[j]);
+						test_params->member_port_ids[j]);
 				TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 						sizeof(read_mac_addr)),
-						"slave port mac address not set to that of primary "
+						"member port mac address not set to that of primary "
 						"port");
 			}
 		}
@@ -809,14 +813,14 @@ test_set_primary_slave(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
 			"read primary port from expectedly");
 
-	/* Test with slave port */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+	/* Test with member port */
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
 			"read primary port from expectedly\n");
 
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to stop and remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+			"Failed to stop and remove members from bonded device");
 
-	/* No slaves  */
+	/* No members */
 	TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id)  < 0,
 			"read primary port from expectedly\n");
 
@@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
 
 	/* Non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
-			test_params->slave_port_ids[0],	mac_addr),
+			test_params->member_port_ids[0],	mac_addr),
 			"Expected call to failed as invalid port specified.");
 
 	/* NULL MAC address */
@@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
 			"Failed to set MAC address on bonded port (%d)",
 			test_params->bonded_port_id);
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.\n");
+	/* Add 4 members to bonded device */
+	for (i = test_params->bonded_member_count; i < 4; i++) {
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member to bonded device.\n");
 	}
 
 	/* Check bonded MAC */
@@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port");
 
-	/* Check other slaves MACs */
+	/* Check other members MACs */
 	for (i = 0; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port mac address not set to that of primary port");
+				"member port mac address not set to that of primary port");
 	}
 
 	/* test resetting mac address on bonded device */
@@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
 			test_params->bonded_port_id);
 
 	TEST_ASSERT_FAIL(
-			rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+			rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
 			"Reset MAC address on bonded port (%d) unexpectedly",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* test resetting mac address on bonded device with no slaves */
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to remove slaves and stop bonded device");
+	/* test resetting mac address on bonded device with no members */
+	TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+			"Failed to remove members and stop bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
 			"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
 	return 0;
 }
 
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
 
 static int
 test_set_bonded_port_initialization_mac_assignment(void)
 {
-	int i, slave_count;
+	int i, member_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	static int bonded_port_id = -1;
-	static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+	static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
 
-	struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+	struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
 
 	/* Initialize default values for MAC addresses */
-	memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
-	memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+	memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
+	memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
 
 	/*
-	 * 1. a - Create / configure  bonded / slave ethdevs
+	 * 1. a - Create / configure  bonded / member ethdevs
 	 */
 	if (bonded_port_id == -1) {
 		bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
 					"Failed to configure bonded ethdev");
 	}
 
-	if (!mac_slaves_initialized) {
-		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	if (!mac_members_initialized) {
+		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-			slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+			member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 				i + 100;
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
-				"eth_slave_%d", i);
+				"eth_member_%d", i);
 
-			slave_port_ids[i] = virtual_ethdev_create(pmd_name,
-					&slave_mac_addr, rte_socket_id(), 1);
+			member_port_ids[i] = virtual_ethdev_create(pmd_name,
+					&member_mac_addr, rte_socket_id(), 1);
 
-			TEST_ASSERT(slave_port_ids[i] >= 0,
-					"Failed to create slave ethdev %s",
+			TEST_ASSERT(member_port_ids[i] >= 0,
+					"Failed to create member ethdev %s",
 					pmd_name);
 
-			TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+			TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s",
 					pmd_name);
 		}
-		mac_slaves_initialized = 1;
+		mac_members_initialized = 1;
 	}
 
 
 	/*
-	 * 2. Add slave ethdevs to bonded device
+	 * 2. Add member ethdevs to bonded device
 	 */
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to add slave (%d) to bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
+				member_port_ids[i]),
+				"Failed to add member (%d) to bonded port (%d).",
+				member_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	member_count = rte_eth_bond_members_get(bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
-			"Number of slaves (%d) is not as expected (%d)",
-			slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
+			"Number of members (%d) is not as expected (%d)",
+			member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
 
 
 	/*
@@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
 
 
 	/* 4. a - Start bonded ethdev
-	 *    b - Enable slave devices
-	 *    c - Verify bonded/slaves ethdev MAC addresses
+	 *    b - Enable member devices
+	 *    c - Verify bonded/members ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
 			"Failed to start bonded pmd eth device %d.",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				slave_port_ids[i], 1);
+				member_port_ids[i], 1);
 	}
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
+			member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 
 	/* 7. a - Change primary port
 	 *    b - Stop / Start bonded port
-	 *    d - Verify slave ethdev MAC addresses
+	 *    c - Verify member ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
-			slave_port_ids[2]),
+			member_port_ids[2]),
 			"failed to set primary port on bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
+			member_port_ids[2]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 	/* 6. a - Stop bonded ethdev
-	 *    b - remove slave ethdevs
-	 *    c - Verify slave ethdevs MACs are restored
+	 *    b - remove member ethdevs
+	 *    c - Verify member ethdevs MACs are restored
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
 			"Failed to stop bonded port %u",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to remove slave %d from bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
+				member_port_ids[i]),
+				"Failed to remove member %d from bonded port (%d).",
+				member_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	member_count = rte_eth_bond_members_get(bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of slaves (%d) is great than expected (%d).",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(member_count, 0,
+			"Number of members (%d) is great than expected (%d).",
+			member_count, 0);
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"member port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"member port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			member_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"member port 2 mac address not as expected");
 
 	return 0;
 }
 
 
 static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
-		uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
+		uint16_t number_of_members, uint8_t enable_member)
 {
 	/* Configure bonded device */
 	TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
 			bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
-			"with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
-			number_of_slaves);
-
-	/* Add slaves to bonded device */
-	while (number_of_slaves > test_params->bonded_slave_count)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave (%d to  bonding port (%d).",
-				test_params->bonded_slave_count - 1,
+			"with (%d) members.", test_params->bonded_port_id, bonding_mode,
+			number_of_members);
+
+	/* Add members to bonded device */
+	while (number_of_members > test_params->bonded_member_count)
+		TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+				"Failed to add member (%d to  bonding port (%d).",
+				test_params->bonded_member_count - 1,
 				test_params->bonded_port_id);
 
 	/* Set link bonding mode  */
@@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
-	if (enable_slave)
-		enable_bonded_slaves();
+	if (enable_member)
+		enable_bonded_members();
 
 	return 0;
 }
 
 static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_member_after_bonded_device_started(void)
 {
 	int i;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
-			"Failed to add slaves to bonded device");
+			"Failed to add members to bonded device");
 
-	/* Enabled slave devices */
-	for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+	/* Enable member devices */
+	for (i = 0; i < test_params->bonded_member_count + 1; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->member_port_ids[i], 1);
 	}
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave to bonded port.\n");
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+			test_params->member_port_ids[test_params->bonded_member_count]),
+			"Failed to add member to bonded port.\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count]);
+			test_params->member_port_ids[test_params->bonded_member_count]);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_member_count++;
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT	4
+#define TEST_STATUS_INTERRUPT_MEMBER_COUNT	4
 #define TEST_LSC_WAIT_TIMEOUT_US	500000
 
 int test_lsc_interrupt_count;
@@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
 static int
 test_status_interrupt(void)
 {
-	int slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	/* initialized bonding device with T slaves */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with TEST_STATUS_INTERRUPT_MEMBER_COUNT members */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 1,
-			TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+			TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	test_lsc_interrupt_count = 0;
@@ -1253,27 +1258,27 @@ test_status_interrupt(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d)",
+			member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
 
-	/* Bring all 4 slaves link status to down and test that we have received a
+	/* Bring all 4 members link status to down and test that we have received a
 	 * lsc interrupts */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->member_port_ids[2], 0);
 
 	TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
 			"Received a link status change interrupt unexpectedly");
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1281,18 +1286,18 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(member_count, 0,
+			"Number of active members (%d) is not as expected (%d)",
+			member_count, 0);
 
-	/* bring one slave port up so link status will change */
+	/* bring one member port up so link status will change */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->member_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1301,12 +1306,12 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	/* Verify that calling the same slave lsc interrupt doesn't cause another
+	/* Verify that calling the same member lsc interrupt doesn't cause another
 	 * lsc interrupt from bonded device */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->member_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
 			"received unexpected interrupt");
@@ -1320,8 +1325,8 @@ test_status_interrupt(void)
 				RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 				&test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size <= MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)burst_size / test_params->bonded_slave_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				(uint64_t)burst_size / test_params->bonded_member_count,
+				"Member Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-				burst_size / test_params->bonded_slave_count);
+				burst_size / test_params->bonded_member_count);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try to transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
 			pkt_burst, burst_size), 0,
 			"tx burst return unexpected value");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
 		rte_pktmbuf_free(mbufs[i]);
 }
 
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT		(2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE		(64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT		(22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(1)
+#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT		(2)
+#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE		(64)
+#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT		(22)
+#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX	(1)
 
 static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_member_tx_fail(void)
 {
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 
 	int i, first_fail_idx, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0,
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
 	/* Copy references to packets which we expect not to be transmitted */
-	first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			(TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
-			TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+	first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			(TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
+			TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
+			TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
 
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
-				(i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+				(i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
 	}
 
-	/* Set virtual slave to only fail transmission of
-	 * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+	/*
+	 * Set virtual member to only fail transmission of
+	 * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+			(uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		int slave_expected_tx_count;
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		int member_expected_tx_count;
 
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 
-		slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
-				test_params->bonded_slave_count;
+		member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
+				test_params->bonded_member_count;
 
-		if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
-			slave_expected_tx_count = slave_expected_tx_count -
-					TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+		if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
+			member_expected_tx_count = member_expected_tx_count -
+					TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
 
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)slave_expected_tx_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[i],
-				(unsigned int)port_stats.opackets, slave_expected_tx_count);
+				(uint64_t)member_expected_tx_count,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[i],
+				(unsigned int)port_stats.opackets, member_expected_tx_count);
 	}
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
-	free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+	free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_member(void)
 {
 	struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 	int i, j, burst_size = 25;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
 			"burst generation failed");
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 
 
-		/* Verify bonded slave devices rx count */
-		/* Verify slave ports tx stats */
-		for (j = 0; j < test_params->bonded_slave_count; j++) {
-			rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+		/* Verify bonded member devices rx count */
+		/* Verify member ports rx stats */
+		for (j = 0; j < test_params->bonded_member_count; j++) {
+			rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 
 			if (i == j) {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, burst_size);
 			} else {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 
-			/* Reset bonded slaves stats */
-			rte_eth_stats_reset(test_params->slave_port_ids[j]);
+			/* Reset bonded members stats */
+			rte_eth_stats_reset(test_params->member_port_ids[j]);
 		}
 		/* reset bonded device stats */
 		rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
 	}
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
 
 static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_members(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+	int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
 	int i, nb_rx;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
 				burst_size[i], "burst generation failed");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0],
 			(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[2],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[2],
 				(unsigned int)port_stats.ipackets, burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3],
 			(unsigned int)port_stats.ipackets, 0);
 
 	/* free mbufs */
@@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+			&expected_mac_addr_2),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 				BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-				"Failed to initialize bonded device with slaves");
+				"Failed to initialize bonded device with members");
 
-	/* Verify that all MACs are the same as first slave added to bonded dev */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	/* Verify that all MACs are the same as first member added to bonded dev */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of primary port",
+				test_params->member_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->member_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary"
+				"member port (%d) mac address has changed to that of primary"
 				" port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagate to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params->bonded_port_id);
@@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(
 			memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary"
-				" port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary"
+				" port", test_params->member_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
-				sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
-				" that of new primary port\n", test_params->slave_port_ids[i]);
+				sizeof(read_mac_addr)), "member port (%d) mac address not set to"
+				" that of new primary port\n", test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 	int i, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
 	TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 1,
-				"slave port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not enabled",
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
 				"Port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
 
 static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_member_link_status_change_behaviour(void)
 {
 	struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
-	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 
 	struct rte_eth_stats port_stats;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	/* NULL all pointers in array to simplify cleanup */
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+	/* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
 	 * in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count. */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
 
-	/* Set 2 slaves eth_devs link status to down */
+	/* Set 2 members eth_devs link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count,
-			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).\n",
-			slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count,
+			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).\n",
+			member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
 
 	burst_size = 20;
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not sent on members with link status down:
 	 *
 	 * 1. Generate test burst of traffic
 	 * 2. Transmit burst on bonded eth_dev
 	 * 3. Verify stats for bonded eth_dev (opackets = burst_size)
-	 * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
 	TEST_ASSERT_EQUAL(
 			generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+			test_params->member_port_ids[0], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+			test_params->member_port_ids[1], (int)port_stats.opackets, 0);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+			test_params->member_port_ids[2], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+			test_params->member_port_ids[3], (int)port_stats.opackets, 0);
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not received from members with link status down:
 	 *
 	 * 1. Generate test bursts of traffic
 	 * 2. Add bursts on to virtual eth_devs
 	 * 3. Rx burst on bonded eth_dev, expected (burst_ size *
-	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
 	 * 4. Verify stats for bonded eth_dev
-	 * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 5. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
-	for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size);
 	}
 
@@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
 
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
 
 
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
 
 static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_member_link_status_change(void)
 {
 	struct rte_ether_addr *mac_addr =
-		(struct rte_ether_addr *)polling_slave_mac;
-	char slave_name[RTE_ETH_NAME_MAX_LEN];
+		(struct rte_ether_addr *)polling_member_mac;
+	char member_name[RTE_ETH_NAME_MAX_LEN];
 
 	int i;
 
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
-		/* Generate slave name / MAC address */
-		snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
+		/* Generate member name / MAC address */
+		snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
 		mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Create slave devices with no ISR Support */
-		if (polling_test_slaves[i] == -1) {
-			polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+		/* Create member devices with no ISR Support */
+		if (polling_test_members[i] == -1) {
+			polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
 					rte_socket_id(), 0);
-			TEST_ASSERT(polling_test_slaves[i] >= 0,
-					"Failed to create virtual virtual ethdev %s\n", slave_name);
+			TEST_ASSERT(polling_test_members[i] >= 0,
+					"Failed to create virtual ethdev %s\n", member_name);
 
-			/* Configure slave */
-			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
-					"Failed to configure virtual ethdev %s(%d)", slave_name,
-					polling_test_slaves[i]);
+			/* Configure member */
+			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
+					"Failed to configure virtual ethdev %s(%d)", member_name,
+					polling_test_members[i]);
 		}
 
-		/* Add slave to bonded device */
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-				polling_test_slaves[i]),
-				"Failed to add slave %s(%d) to bonded device %d",
-				slave_name, polling_test_slaves[i],
+		/* Add member to bonded device */
+		TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+				polling_test_members[i]),
+				"Failed to add member %s(%d) to bonded device %d",
+				member_name, polling_test_members[i],
 				test_params->bonded_port_id);
 	}
 
@@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	/* link status change callback for first slave link up */
+	/* link status change callback for first member link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+	virtual_ethdev_set_link_status(polling_test_members[0], 1);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
 
 
-	/* no link status change callback for second slave link up */
+	/* no link status change callback for second member link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+	virtual_ethdev_set_link_status(polling_test_members[1], 1);
 
 	TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
 
-	/* link status change callback for both slave links down */
+	/* link status change callback for both member links down */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+	virtual_ethdev_set_link_status(polling_test_members[0], 0);
+	virtual_ethdev_set_link_status(polling_test_members[1], 0);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
 
@@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			&test_params->bonded_port_id);
 
 
-	/* Clean up and remove slaves from bonded device */
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+	/* Clean up and remove members from bonded device */
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
 
 		TEST_ASSERT_SUCCESS(
-				rte_eth_bond_slave_remove(test_params->bonded_port_id,
-						polling_test_slaves[i]),
-				"Failed to remove slave %d from bonded port (%d)",
-				polling_test_slaves[i], test_params->bonded_port_id);
+				rte_eth_bond_member_remove(test_params->bonded_port_id,
+						polling_test_members[i]),
+				"Failed to remove member %d from bonded port (%d)",
+				polling_test_members[i], test_params->bonded_port_id);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
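
As a side note, a minimal usage sketch (not part of this patch) of the renamed member API that all of these tests drive, with the same signatures as the calls above; the helper name is made up:

#include <rte_ethdev.h>
#include <rte_eth_bond.h>

/* Hypothetical helper: attach ports and report how many members are active. */
static int
attach_and_count_active_members(uint16_t bonding_port,
		const uint16_t *ports, int n)
{
	uint16_t members[RTE_MAX_ETHPORTS];
	int i;

	for (i = 0; i < n; i++)
		if (rte_eth_bond_member_add(bonding_port, ports[i]) != 0)
			return -1;	/* member could not be attached */

	/* Every attached port is reported as a member... */
	if (rte_eth_bond_members_get(bonding_port, members,
			RTE_MAX_ETHPORTS) != n)
		return -1;

	/* ...but only link-up members are reported as active. */
	return rte_eth_bond_active_members_get(bonding_port, members,
			RTE_MAX_ETHPORTS);
}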
 
 
@@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	initialize_eth_header(test_params->pkt_eth_hdr,
 			(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
-		if (test_params->slave_port_ids[i] == primary_port) {
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
+		if (test_params->member_port_ids[i] == primary_port) {
 			TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Member Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets,
-					burst_size / test_params->bonded_slave_count);
+					burst_size / test_params->bonded_member_count);
 		} else {
 			TEST_ASSERT_EQUAL(port_stats.opackets, 0,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Member Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets, 0);
 		}
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 			pkts_burst, burst_size), 0, "Sending empty burst failed");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
 
 static int
 test_activebackup_rx_burst(void)
@@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
 
 	int i, j, burst_size = 17;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
 				&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
 				"rte_eth_rx_burst failed");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->member_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded member devices rx count */
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)", test_params->slave_port_ids[i],
-							(unsigned int)port_stats.ipackets, burst_size);
+							"Member Port (%d) ipackets value (%u) not as "
+							"expected (%d)",
+							test_params->member_port_ids[i],
+							(unsigned int)port_stats.ipackets,
+							burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)\n", test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as "
+							"expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected "
-						"(%d)", test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected "
+						"(%d)", test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->member_port_ids[i]);
+		if (primary_port == test_params->member_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, 1,
-					"slave port (%d) promiscuous mode not enabled",
-					test_params->slave_port_ids[i]);
+					"member port (%d) promiscuous mode not enabled",
+					test_params->member_port_ids[i]);
 		} else {
 			TEST_ASSERT_EQUAL(promiscuous_en, 0,
-					"slave port (%d) promiscuous mode enabled",
-					test_params->slave_port_ids[i]);
+					"member port (%d) promiscuous mode enabled",
+					test_params->member_port_ids[i]);
 		}
 
 	}
@@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not disabled\n",
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with members");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that the bonded MAC is that of the first member and that the other member
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->member_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_member_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, member_count, primary_port;
 
 	burst_size = 21;
 
@@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 			"generate_test_burst failed");
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count. */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 members down and verify active member count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
+	/* Bring primary port down, verify that active member count is 3 and primary
 	 *  has changed */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
 			3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
 			"Primary port not as expected");
 
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary member */
 
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(
 			test_params->bonded_port_id, 0, &pkt_burst[0][0],
 			burst_size), burst_size, "rte_eth_tx_burst failed");
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"generate_test_burst failed");
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-			test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+			test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected",
 			test_params->bonded_port_id);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 /** Balance Mode Tests */
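
Before the balance-mode hunks, a minimal sketch (not part of this patch) of the transmit-policy selection these tests configure; the BALANCE_XMIT_POLICY_* constants and both calls appear in the diff, the wrapper is hypothetical:

#include <rte_eth_bond.h>

/* Hypothetical wrapper: choose which headers balance mode hashes on
 * (LAYER2: MAC, LAYER23: MAC+IP, LAYER34: IP+L4 port, per the tests below). */
static int
select_balance_policy(uint16_t bonding_port, uint8_t policy)
{
	if (rte_eth_bond_xmit_policy_set(bonding_port, policy) != 0)
		return -1;	/* invalid port or unsupported policy */

	return rte_eth_bond_xmit_policy_get(bonding_port) == policy ? 0 : -1;
}

A caller would pass e.g. BALANCE_XMIT_POLICY_LAYER34 before transmitting, much as test_balance_l34_tx_burst() does via rte_eth_bond_xmit_policy_set().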
@@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 static int
 test_balance_xmit_policy_configuration(void)
 {
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Invalid port id */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
 
 	/* Set xmit policy on non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
-			test_params->slave_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
+			test_params->member_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
 			"Expected call to failed as invalid port specified.");
 
 
@@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
 			"Expected call to failed as invalid port specified.");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
 
 static int
 test_balance_l2_tx_burst(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
-	int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
+	int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
 
 	uint16_t pktlen;
 	int i;
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
 			"failed to generate packet burst");
 
 	/* Send burst 1 on bonded port */
-	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 				&pkts_burst[i][0], burst_size[i]),
 				burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
 			burst_size[0] + burst_size[1]);
 
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)\n",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			burst_size[1]);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
 			test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
 			0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			burst_size_1), 0, "Expected zero packet");
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_members.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify member ports tx stats */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Member Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, 0, pkts_burst_1,
 			burst_size_1), 0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
 	return balance_l34_tx_burst(0, 0, 0, 0, 1);
 }
 
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT			(2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1			(40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2			(20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT		(25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(0)
+#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT			(2)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1			(40)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2			(20)
+#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT		(25)
+#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX	(0)
 
 static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_member_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
-	struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+	struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
+	struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
 
-	struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+	struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, first_tx_fail_idx, tx_count_1, tx_count_2;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0,
-			TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
 			"Failed to generate test packet burst 1");
 
-	first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+	first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
 
 	/* copy mbuf references for expected transmission failures */
-	for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+	for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
 		expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
 
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
 			"Failed to generate test packet burst 2");
 
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/*
+	 * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+	 * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 
 	/* Transmit burst 1 */
 	tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
 
-	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Transmit burst 2 */
 	tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
-	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
 
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+			(uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			(TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			(TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
-	/* Verify slave ports tx stats */
+	/* Verify member ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[0],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
 
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[1],
+				(uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
+				"Member Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[1],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+				TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
 
 static int
 test_balance_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+	int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
 				0, 0), burst_size[i],
 				"failed to generate packet burst");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->member_port_ids[0],
 				(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],	(unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->member_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->member_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BALANCE, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that the bonded MAC is that of the first member and that the other member
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]),
+			test_params->member_port_ids[1]),
 			"Failed to set bonded port (%d) primary port to (%d)\n",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected\n",
-				test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected\n",
+				test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
 
 static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_member_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+	/* Initialize bonded device with 4 members in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			"Failed to set balance xmit policy.");
 
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count are as expected */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
 
-	/* Set 2 slaves link status to down */
+	/* Set 2 members link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
-	/* Send to sets of packet burst and verify that they are balanced across
-	 *  slaves */
+	/*
+	 * Send two sets of packet bursts and verify that they are balanced across
+	 * members.
+	 */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->member_port_ids[0], (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[2], (int)port_stats.opackets,
+			test_params->member_port_ids[2], (int)port_stats.opackets,
 			burst_size);
 
-	/* verify that all packets get send on primary slave when no other slaves
+	/* verify that all packets get sent on primary member when no other members
 	 * are available */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->member_port_ids[2], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 1);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 1);
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->member_port_ids[0], (int)port_stats.opackets,
 			burst_size + burst_size);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 1);
+			test_params->member_port_ids[2], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
-	for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"Failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on members with link status down */
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
 			MAX_PKT_BURST);
@@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.ipackets,
 			burst_size * 3);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 2, 1),
 			"Failed to initialise bonded device");
 
@@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)burst_size * test_params->bonded_slave_count,
+			(uint64_t)burst_size * test_params->bonded_member_count,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				"Member Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id,
 				(unsigned int)port_stats.opackets, burst_size);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
 			test_params->bonded_port_id, 0, pkts_burst, burst_size),  0,
 			"transmitted an unexpected number of packets");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT		(3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE			(40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT	(15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT	(10)
+#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT		(3)
+#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE			(40)
+#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT	(15)
+#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT	(10)
 
 static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_member_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
-	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+	struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
+	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0,
-			TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
-		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+	for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
 	}
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/*
+	 * Set all three virtual members to only fail transmission of
+	 * TEST_BCAST_MEMBER_TX_FAIL_MAX/MIN_PACKETS_COUNT packets of the burst.
+	 */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[0],
+			test_params->member_port_ids[0],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[1],
+			test_params->member_port_ids[1],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[2],
+			test_params->member_port_ids[2],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[0],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->member_port_ids[0],
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[1],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			test_params->member_port_ids[1],
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[2],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->member_port_ids[2],
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 	/* Transmit burst */
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
 	}
 
-	/* Verify slave ports tx stats */
+	/* Verify member ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
 
 
 	/* Verify that all mbufs who transmission failed have a ref value of one */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+			TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst[tx_count],
-		TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+		TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
 
 static int
 test_broadcast_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+	int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
 				burst_size[i], "failed to generate packet burst");
 	}
 
-	/* Add rx data to slave 0 */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to members */
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded member devices rx counts */
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+			"Member Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs allocate for rx testing */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->member_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->member_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that all MACs are the same as first slave added to bonded
+	/* Verify that all MACs are the same as first member added to bonded
 	 * device */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of primary port",
+				test_params->member_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->member_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->member_port_ids[2]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary "
+				"member port (%d) mac address has changed to that of primary "
 				"port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 	}
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * stop / start bonded device and verify that primary MAC address is
+	 * propagated to bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary  port",
-			test_params->slave_port_ids[i]);
+			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary "
+				"port", test_params->member_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->bonded_port_id);
 
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+				&read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"member port (%d) mac address not set to that of new primary "
+				"port", test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
 static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_member_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, member_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+	/* Initialize bonded device with 4 members in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
 				1), "Failed to initialise bonded device");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count and active member count are as expected */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 4);
 
-	/* Set 2 slaves link status to down */
+	/* Set 2 members link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++)
-		rte_eth_stats_reset(test_params->slave_port_ids[i]);
+	for (i = 0; i < test_params->bonded_member_count; i++)
+		rte_eth_stats_reset(test_params->member_port_ids[i]);
 
-	/* Verify that pkts are not sent on slaves with link status down */
+	/* Verify that pkts are not sent on members with link status down */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"rte_eth_tx_burst failed\n");
 
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
-	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
 			"(%d) port_stats.opackets (%d) not as expected (%d)\n",
 			test_params->bonded_port_id, (int)port_stats.opackets,
-			burst_size * slave_count);
+			burst_size * member_count);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[1]);
+				test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[2]);
+				test_params->member_port_ids[2]);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 
-	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on members with link status down */
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
 			test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
 			burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4146,21 +4186,21 @@ testsuite_teardown(void)
 	free(test_params->pkt_eth_hdr);
 	test_params->pkt_eth_hdr = NULL;
 
-	/* Clean up and remove slaves from bonded device */
-	remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	remove_members_and_stop_bonded_device();
 }
 
 static void
 free_virtualpmd_tx_queue(void)
 {
-	int i, slave_port, to_free_cnt;
+	int i, member_port, to_free_cnt;
 	struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
 
 	/* Free tx queue of virtual pmd */
-	for (slave_port = 0; slave_port < test_params->bonded_slave_count;
-			slave_port++) {
+	for (member_port = 0; member_port < test_params->bonded_member_count;
+			member_port++) {
 		to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_port],
+				test_params->member_port_ids[member_port],
 				pkts_to_free, MAX_PKT_BURST);
 		for (i = 0; i < to_free_cnt; i++)
 			rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
 	uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
 	uint16_t pktlen;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
 			(BONDING_MODE_TLB, 1, 3, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_member_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.\n");
@@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		} else {
 			initialize_eth_header(test_params->pkt_eth_hdr,
-					(struct rte_ether_addr *)test_params->default_slave_mac,
+					(struct rte_ether_addr *)test_params->default_member_mac,
 					(struct rte_ether_addr *)dst_mac_0,
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		}
@@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
 			burst_size);
 
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+	/* Verify member ports tx stats */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
+		rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
 		sum_ports_opackets += port_stats[i].opackets;
 	}
 
 	TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
-			"Total packets sent by slaves is not equal to packets sent by bond interface");
+			"Total packets sent by members is not equal to packets sent by bond interface");
 
-	/* checking if distribution of packets is balanced over slaves */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* checking if distribution of packets is balanced over members */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		TEST_ASSERT(port_stats[i].obytes > 0 &&
 				port_stats[i].obytes < all_bond_obytes,
-						"Packets are not balanced over slaves");
+						"Packets are not balanced over members");
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all members down and try and transmit */
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->member_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
 			burst_size);
 	TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
 
-	/* Clean ugit checkout masterp and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
 
 static int
 test_tlb_rx_burst(void)
@@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
 
 	uint16_t i, j, nb_rx, burst_size = 17;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+			TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
 			"Failed to initialize bonded device");
 
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to member */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
 
 		TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->member_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded member devices rx count */
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->member_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_member_count; j++) {
+				rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-						"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-						test_params->slave_port_ids[i],
+						"Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+						test_params->member_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0, 4, 1),
 			"Failed to initialize bonded device");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary member for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 			"Port (%d) promiscuous mode not enabled\n",
 			test_params->bonded_port_id);
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->member_port_ids[i]);
+		if (primary_port == test_params->member_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 					"Port (%d) promiscuous mode not enabled\n",
 					test_params->bonded_port_id);
@@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_member_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->member_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"member port (%d) promiscuous mode not disabled\n",
+				test_params->member_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
@@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+			&expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->member_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+			&expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0, 2, 1),
 			"Failed to initialize bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
-	 * MAC hasn't been changed */
+	/*
+	 * Verify that the bonded MAC is that of the first member and that the
+	 * other member MAC hasn't been changed.
+	 */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
 			test_params->bonded_port_id);
@@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->member_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->member_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[1]);
 
-	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	/*
+	 * Stop/start the bonded device and verify that the primary MAC address is
+	 * propagated to the bonded device and members.
+	 */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of primary port",
+			test_params->member_port_ids[1]);
 
 
 	/* Set explicit MAC address */
@@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"member port (%d) mac address not as expected",
+			test_params->member_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"member port (%d) mac address not set to that of bonded port",
+			test_params->member_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
 static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_member_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, member_count, primary_port;
 
 	burst_size = 21;
 
@@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
 
 
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 members in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
 			BONDING_MODE_TLB, 0,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
+			"Failed to initialize bonded device with members");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current member count / active member count is as expected */
+	member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).\n",
+			member_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, (int)4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+			members, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(member_count, 4,
+			"Number of members (%d) is not as expected (%d).\n",
+			member_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 members down and verify active member count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->member_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->member_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->member_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->member_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
-	 *  has changed */
+	/*
+	 * Bring the primary port down, verify that the active member count is 3
+	 * and that the primary has changed.
+	 */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->member_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+			test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
+			"Number of active members (%d) is not as expected (%d).",
+			member_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
 			"Primary port not as expected");
 	rte_delay_us(500000);
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary member */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
 		rte_delay_us(11000);
 	}
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->member_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->member_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[2]);
+			test_params->member_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->member_port_ids[3]);
 
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
 		if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
 				burst_size)
 			return -1;
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-				test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+				test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove members from bonded device */
+	return remove_members_and_stop_bonded_device();
 }
 
-#define TEST_ALB_SLAVE_COUNT	2
+#define TEST_ALB_MEMBER_COUNT	2
 
 static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
 static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
 	struct rte_ether_hdr *eth_pkt;
 	struct rte_arp_hdr *arp_pkt;
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int member_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *member_mac1, *member_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
-			slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count;
+			member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
 			RTE_ARP_OP_REPLY);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
 
-	slave_mac1 =
-			rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 =
-			rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	member_mac1 =
+			rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+	member_mac2 =
+			rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
 
 	/*
 	 * Checking if packets are properly distributed on bonding ports. Packets
 	 * 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (member_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(member_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(member_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+	int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *member_mac1, *member_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	/*
@@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
 
-	slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+	member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
 
 	/*
-	 * Checking if update ARP packets were properly send on slave ports.
+	 * Checking if update ARP packets were properly sent on member ports.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+				test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
 		nb_pkts_sum += nb_pkts;
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (member_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(member_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(member_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int member_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
 	arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
 			1);
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
 	/*
 	 * Checking if VLAN headers in generated ARP Update packet are correct.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->member_port_ids[member_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
 	retval = 0;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_members(BONDING_MODE_ALB,
+					0, TEST_ALB_MEMBER_COUNT, 1),
+			"Failed to initialize_bonded_device_with_members.");
 
 	burst_size = 32;
 
@@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_members_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite  = {
 	.unit_test_cases = {
 		TEST_CASE(test_create_bonded_device),
 		TEST_CASE(test_create_bonded_device_with_invalid_params),
-		TEST_CASE(test_add_slave_to_bonded_device),
-		TEST_CASE(test_add_slave_to_invalid_bonded_device),
-		TEST_CASE(test_remove_slave_from_bonded_device),
-		TEST_CASE(test_remove_slave_from_invalid_bonded_device),
-		TEST_CASE(test_get_slaves_from_bonded_device),
-		TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
-		TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+		TEST_CASE(test_add_member_to_bonded_device),
+		TEST_CASE(test_add_member_to_invalid_bonded_device),
+		TEST_CASE(test_remove_member_from_bonded_device),
+		TEST_CASE(test_remove_member_from_invalid_bonded_device),
+		TEST_CASE(test_get_members_from_bonded_device),
+		TEST_CASE(test_add_already_bonded_member_to_bonded_device),
+		TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
 		TEST_CASE(test_start_bonded_device),
 		TEST_CASE(test_stop_bonded_device),
 		TEST_CASE(test_set_bonding_mode),
-		TEST_CASE(test_set_primary_slave),
+		TEST_CASE(test_set_primary_member),
 		TEST_CASE(test_set_explicit_bonded_mac),
 		TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
 		TEST_CASE(test_status_interrupt),
-		TEST_CASE(test_adding_slave_after_bonded_device_started),
+		TEST_CASE(test_adding_member_after_bonded_device_started),
 		TEST_CASE(test_roundrobin_tx_burst),
-		TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
-		TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
-		TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+		TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
+		TEST_CASE(test_roundrobin_rx_burst_on_single_member),
+		TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
 		TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
 		TEST_CASE(test_roundrobin_verify_mac_assignment),
-		TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
-		TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+		TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
+		TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
 		TEST_CASE(test_activebackup_tx_burst),
 		TEST_CASE(test_activebackup_rx_burst),
 		TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
 		TEST_CASE(test_activebackup_verify_mac_assignment),
-		TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+		TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
 		TEST_CASE(test_balance_xmit_policy_configuration),
 		TEST_CASE(test_balance_l2_tx_burst),
 		TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
-		TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+		TEST_CASE(test_balance_tx_burst_member_tx_fail),
 		TEST_CASE(test_balance_rx_burst),
 		TEST_CASE(test_balance_verify_promiscuous_enable_disable),
 		TEST_CASE(test_balance_verify_mac_assignment),
-		TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
 		TEST_CASE(test_tlb_tx_burst),
 		TEST_CASE(test_tlb_rx_burst),
 		TEST_CASE(test_tlb_verify_mac_assignment),
 		TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
-		TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+		TEST_CASE(test_tlb_verify_member_link_status_change_failover),
 		TEST_CASE(test_alb_change_mac_in_reply_sent),
 		TEST_CASE(test_alb_reply_from_client),
 		TEST_CASE(test_alb_receive_vlan_reply),
 		TEST_CASE(test_alb_ipv4_tx),
 		TEST_CASE(test_broadcast_tx_burst),
-		TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+		TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
 		TEST_CASE(test_broadcast_rx_burst),
 		TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
 		TEST_CASE(test_broadcast_verify_mac_assignment),
-		TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b..2de907e7f3 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
 
 #define RX_RING_SIZE 1024
 #define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
 
 #define BONDED_DEV_NAME         ("net_bonding_m4_bond_dev")
 
-#define SLAVE_DEV_NAME_FMT      ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT      ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT      ("net_virt_%d_tx")
+#define MEMBER_DEV_NAME_FMT      ("net_virt_%d")
+#define MEMBER_RX_QUEUE_FMT      ("net_virt_%d_rx")
+#define MEMBER_TX_QUEUE_FMT      ("net_virt_%d_tx")
 
 #define INVALID_SOCKET_ID       (-1)
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr member_mac_default = {
 	{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
 };
 
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
 	{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
 };
 
-struct slave_conf {
+struct member_conf {
 	struct rte_ring *rx_queue;
 	struct rte_ring *tx_queue;
 	uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
 
 struct link_bonding_unittest_params {
 	uint8_t bonded_port_id;
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct member_conf member_ports[MEMBER_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
-#define TEST_DEFAULT_SLAVE_COUNT     RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT           TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT          TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT       TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_MEMBER_COUNT     RTE_DIM(test_params.member_ports)
+#define TEST_RX_MEMBER_COUT           TEST_DEFAULT_MEMBER_COUNT
+#define TEST_TX_MEMBER_COUNT          TEST_DEFAULT_MEMBER_COUNT
+#define TEST_MARKER_MEMBER_COUT       TEST_DEFAULT_MEMBER_COUNT
+#define TEST_EXPIRED_MEMBER_COUNT     TEST_DEFAULT_MEMBER_COUNT
+#define TEST_PROMISC_MEMBER_COUNT     TEST_DEFAULT_MEMBER_COUNT
 
 static struct link_bonding_unittest_params test_params  = {
 	.bonded_port_id = INVALID_PORT_ID,
-	.slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+	.member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
 
 	.mbuf_pool = NULL,
 };
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.member_ports, \
+		RTE_DIM(test_params.member_ports))
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test and satisfy given condition.
  *
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  * _condition condition that need to be checked
  */
 #define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
 	if (!!(_condition))
 
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a member of a bonded
  * device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
  * */
-#define FOR_EACH_SLAVE(_i, _slave) \
-	FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_MEMBER(_i, _member) \
+	FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
 
 /*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from members TX queue.
+ * member port
  * buffer for packets
  * size size of buffer
  * return number of packets or negative error number
  */
 static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+	return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
 			size, NULL);
 }
 
 /*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into members RX queue.
+ * member port
  * buffer for packets
  * size number of packets to be injected
  * return number of queued packets or negative error number
  */
 static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+	return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
 			size, NULL);
 }
 
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
 }
 
 static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_member(struct member_conf *member, uint8_t start)
 {
 	struct rte_ether_addr addr, addr_check;
 	int retval;
 
 	/* Some sanity check */
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
-	RTE_VERIFY(slave->bonded == 0);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(test_params.member_ports <= member &&
+		member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
+	RTE_VERIFY(member->bonded == 0);
+	RTE_VERIFY(member->port_id != INVALID_PORT_ID);
 
-	rte_ether_addr_copy(&slave_mac_default, &addr);
-	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+	rte_ether_addr_copy(&member_mac_default, &addr);
+	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
 
-	rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+	rte_eth_dev_mac_addr_remove(member->port_id, &addr);
 
-	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
-		"Failed to set slave MAC address");
+	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
+		"Failed to set member MAC address");
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
-		slave->port_id),
-			"Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
-			(uint8_t)(slave - test_params.slave_ports), slave->port_id,
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
+		member->port_id),
+			"Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
+			(uint8_t)(member - test_params.member_ports), member->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 1;
+	member->bonded = 1;
 	if (start) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
-			"Failed to start slave %u", slave->port_id);
+		TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
+			"Failed to start member %u", member->port_id);
 	}
 
-	retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
-	TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+	retval = rte_eth_macaddr_get(member->port_id, &addr_check);
+	TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
 			    strerror(-retval));
 	TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
-			"Slave MAC address is not as expected");
+			"Member MAC address is not as expected");
 
-	RTE_VERIFY(slave->lacp_parnter_state == 0);
+	RTE_VERIFY(member->lacp_parnter_state == 0);
 	return 0;
 }
 
 static int
-remove_slave(struct slave_conf *slave)
+remove_member(struct member_conf *member)
 {
-	ptrdiff_t slave_idx = slave - test_params.slave_ports;
+	ptrdiff_t member_idx = member - test_params.member_ports;
 
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+	RTE_VERIFY(test_params.member_ports <= member &&
+		member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
 
-	RTE_VERIFY(slave->bonded == 1);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(member->bonded == 1);
+	RTE_VERIFY(member->port_id != INVALID_PORT_ID);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+		"Member %u tx queue not empty while removing from bonding.",
+		member->port_id);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+		"Member %u tx queue not empty while removing from bonding.",
+		member->port_id);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
-			slave->port_id), 0,
-			"Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
-			(uint8_t)slave_idx, slave->port_id,
+	TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
+			member->port_id), 0,
+			"Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
+			(uint8_t)member_idx, member->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 0;
-	slave->lacp_parnter_state = 0;
+	member->bonded = 0;
+	member->lacp_parnter_state = 0;
 	return 0;
 }
 
 static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
 	slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
 	RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
 
-	lacpdu_rx_count[slave_id]++;
+	lacpdu_rx_count[member_id]++;
 	rte_pktmbuf_free(lacp_pkt);
 }
 
 static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
 {
 	uint8_t i;
 	int ret;
 
 	RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
 
-	for (i = 0; i < slave_count; i++) {
-		TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+	for (i = 0; i < member_count; i++) {
+		TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
 			"Failed to add port %u to bonded device.\n",
-			test_params.slave_ports[i].port_id);
+			test_params.member_ports[i].port_id);
 	}
 
 	/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	int retval;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	uint16_t i;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params.bonded_port_id);
 
-	FOR_EACH_SLAVE(i, slave)
-		remove_slave(slave);
+	FOR_EACH_MEMBER(i, member)
+		remove_member(member);
 
-	retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
-		RTE_DIM(slaves));
+	retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
+		RTE_DIM(members));
 
 	TEST_ASSERT_EQUAL(retval, 0,
-		"Expected bonded device %u have 0 slaves but returned %d.",
+		"Expected bonded device %u have 0 members but returned %d.",
 			test_params.bonded_port_id, retval);
 
-	FOR_EACH_PORT(i, slave) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+	FOR_EACH_PORT(i, member) {
+		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
 				"Failed to stop bonded port %u",
-				slave->port_id);
+				member->port_id);
 
-		TEST_ASSERT(slave->bonded == 0,
-			"Port id=%u is still marked as enslaved.", slave->port_id);
+		TEST_ASSERT(member->bonded == 0,
+			"Port id=%u is still marked as enmemberd.", member->port_id);
 	}
 
 	return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
 {
 	int retval, nb_mbuf_per_pool;
 	char name[RTE_ETH_NAME_MAX_LEN];
-	struct slave_conf *port;
+	struct member_conf *port;
 	const uint8_t socket_id = rte_socket_id();
 	uint16_t i;
 
@@ -400,10 +400,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(i, port) {
-		port = &test_params.slave_ports[i];
+		port = &test_params.member_ports[i];
 
 		if (port->rx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
 		}
 
 		if (port->tx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
 		}
 
 		if (port->port_id == INVALID_PORT_ID) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
 			TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
 			retval = rte_eth_from_rings(name, &port->rx_queue, 1,
 					&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
  * frame but not LACP
  */
 static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 	/* Change source address to partner address */
 	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
 	slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		member->port_id;
 
 	lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
 	/* Save last received state */
-	slave->lacp_parnter_state = lacp->actor.state;
+	member->lacp_parnter_state = lacp->actor.state;
 	/* Change it into LACP replay by matching parameters. */
 	memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
 		sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 }
 
 /*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from the given member, searches for LACP packets and replies to them.
  *
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives burst of packets from member. Looks for LACP packet. Drops
  * all other packets. Prepares response LACP and sends it back.
  *
  * return number of LACP received and replied, -1 on error.
  */
 static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct member_conf *member)
 {
 	int retval;
 	struct rte_mbuf *rx_buf[MAX_PKT_BURST];
 	struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
 	uint16_t lacp_tx_buf_cnt = 0, i;
 
-	retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
-	TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
-			slave->port_id);
+	retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
+	TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
+			member->port_id);
 
 	for (i = 0; i < (uint16_t)retval; i++) {
-		if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+		if (make_lacp_reply(member, rx_buf[i]) == 0) {
 			/* reply with actor's LACP */
 			lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
 		} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
 	if (lacp_tx_buf_cnt == 0)
 		return 0;
 
-	retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+	retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
 	if (retval <= lacp_tx_buf_cnt) {
 		/* retval might be negative */
 		for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
 	}
 
 	TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
-		"Failed to equeue lacp packets into slave %u tx queue.",
-		slave->port_id);
+		"Failed to equeue lacp packets into member %u tx queue.",
+		member->port_id);
 
 	return lacp_tx_buf_cnt;
 }
 
 /*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if the given member tx queue contains packets that make mode 4
+ * handshake complete. It will drain the member queue.
  * return 0 if handshake not completed, 1 if handshake was complete,
  */
 static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct member_conf *member)
 {
 	const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
 			STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
 
-	return slave->lacp_parnter_state == expected_state;
+	return member->lacp_parnter_state == expected_state;
 }
 
 static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
 static int
 bond_handshake(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	struct rte_mbuf *buf[MAX_PKT_BURST];
 	uint16_t nb_pkts;
-	uint8_t all_slaves_done, i, j;
-	uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+	uint8_t all_members_done, i, j;
+	uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
 	const unsigned delay = bond_get_update_timeout_ms();
 
 	/* Exchange LACP frames */
-	all_slaves_done = 0;
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	all_members_done = 0;
+	for (i = 0; i < 30 && all_members_done == 0; ++i) {
 		rte_delay_ms(delay);
 
-		all_slaves_done = 1;
-		FOR_EACH_SLAVE(j, slave) {
-			/* If response already send, skip slave */
+		all_members_done = 1;
+		FOR_EACH_MEMBER(j, member) {
+			/* If response already sent, skip member */
 			if (status[j] != 0)
 				continue;
 
-			if (bond_handshake_reply(slave) < 0) {
-				all_slaves_done = 0;
+			if (bond_handshake_reply(member) < 0) {
+				all_members_done = 0;
 				break;
 			}
 
-			status[j] = bond_handshake_done(slave);
+			status[j] = bond_handshake_done(member);
 			if (status[j] == 0)
-				all_slaves_done = 0;
+				all_members_done = 0;
 		}
 
 		nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
 		TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 	}
 	/* If response didn't send - report failure */
-	TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+	TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
 
 	/* If flags doesn't match - report failure */
-	return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+	return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
 }
 
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_MEMBER_COUT RTE_DIM(test_params.member_ports)
 static int
 test_mode4_lacp(void)
 {
 	int retval;
 
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	/* Test LACP handshake function */
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
 {
 	int retval;
 	/* Test and verify for Stable mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_STABLE,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 
 	/* test and verify for Bandwidth mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	/* test and verify selection for count mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_COUNT,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
 }
 
 static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct member_conf *member,
 			struct rte_ether_addr *src_mac,
 			struct rte_ether_addr *dst_mac, uint16_t count)
 {
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
 	if (retval != (int)count)
 		return retval;
 
-	retval = slave_put_pkts(slave, pkts, count);
+	retval = member_put_pkts(member, pkts, count);
 	if (retval > 0 && retval != count)
 		free_pkts(&pkts[retval], count - retval);
 
 	TEST_ASSERT_EQUAL(retval, count,
-		"Failed to enqueue packets into slave %u RX queue", slave->port_id);
+		"Failed to enqueue packets into member %u RX queue", member->port_id);
 
 	return TEST_SUCCESS;
 }
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
 static int
 test_mode4_rx(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	uint16_t i, j;
 
 	uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
 	struct rte_ether_addr dst_mac;
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -838,7 +838,7 @@ test_mode4_rx(void)
 	dst_mac.addr_bytes[0] += 2;
 
 	/* First try with promiscuous mode enabled.
-	 * Add 2 packets to each slave. First with bonding MAC address, second with
+	 * Add 2 packets to each member. First with bonding MAC address, second with
 	 * different. Check if we received all of them. */
 	retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
 	TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
 			test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_MEMBER(i, member) {
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		/* Expect 2 packets per slave */
+		/* Expect 2 packets per member */
 		expected_pkts_cnt += 2;
 	}
 
@@ -894,16 +894,16 @@ test_mode4_rx(void)
 		test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_MEMBER(i, member) {
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+			member->port_id);
 
-		/* Expect only one packet per slave */
+		/* Expect only one packet per member */
 		expected_pkts_cnt += 1;
 	}
 
@@ -927,19 +927,19 @@ test_mode4_rx(void)
 	TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
 		"Expected %u packets but received only %d", expected_pkts_cnt, retval);
 
-	/* Link down test: simulate link down for first slave. */
+	/* Link down test: simulate link down for first member. */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t member_down_id = INVALID_PORT_ID;
 
-	/* Find first slave and make link down on it*/
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	/* Find first member and make link down on it*/
+	FOR_EACH_MEMBER(i, member) {
+		rte_eth_dev_set_link_down(member->port_id);
+		member_down_id = member->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(member_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding */
 	for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
 
 	TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
 
-	/* Put packet to each slave */
-	FOR_EACH_SLAVE(i, slave) {
+	/* Put packet to each member */
+	FOR_EACH_MEMBER(i, member) {
 		void *pkt = NULL;
 
-		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+		retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
-		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+		retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
 		retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
 		if (retval > 0)
 			free_pkts(pkts, retval);
 
-		while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+		while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
 			rte_pktmbuf_free(pkt);
 
-		if (slave_down_id == slave->port_id)
+		if (member_down_id == member->port_id)
 			TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
 		else
 			TEST_ASSERT_NOT_EQUAL(retval, 0,
-				"Expected to receive some packets on slave %u.",
-				slave->port_id);
-		rte_eth_dev_start(slave->port_id);
+				"Expected to receive some packets on member %u.",
+				member->port_id);
+		rte_eth_dev_start(member->port_id);
 
 		for (j = 0; j < 5; j++) {
-			TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+			TEST_ASSERT(bond_handshake_reply(member) >= 0,
 				"Handshake after link up");
 
-			if (bond_handshake_done(slave) == 1)
+			if (bond_handshake_done(member) == 1)
 				break;
 		}
 
-		TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+		TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
 	}
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 	return TEST_SUCCESS;
 }
 
 static int
 test_mode4_tx_burst(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	uint16_t i, j;
 
 	uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
 		{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets were transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every member should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(member, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 		TEST_ASSERT_EQUAL(slow_cnt, 0,
-			"slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+			"member %u unexpectedly transmitted %d SLOW packets", member->port_id,
 			slow_cnt);
 
 		TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-			"slave %u did not transmitted any packets", slave->port_id);
+			"member %u did not transmit any packets", member->port_id);
 
 		pkts_cnt += normal_cnt;
 	}
@@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
 	TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
-	/* Link down test:
-	 * simulate link down for first slave. */
+	/*
+	 * Link down test:
+	 * simulate link down for first member.
+	 */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t member_down_id = INVALID_PORT_ID;
 
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	FOR_EACH_MEMBER(i, member) {
+		rte_eth_dev_set_link_down(member->port_id);
+		member_down_id = member->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(member_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding. */
 	for (i = 0; i < 3; i++) {
@@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets was transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every member should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(member, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 
-		if (slave_down_id == slave->port_id) {
+		if (member_down_id == member->port_id) {
 			TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
-				"slave %u enexpectedly transmitted %u packets",
-				normal_cnt + slow_cnt, slave->port_id);
+				"member %u unexpectedly transmitted %u packets",
+				member->port_id, normal_cnt + slow_cnt);
 		} else {
 			TEST_ASSERT_EQUAL(slow_cnt, 0,
-				"slave %u unexpectedly transmitted %d SLOW packets",
-				slave->port_id, slow_cnt);
+				"member %u unexpectedly transmitted %d SLOW packets",
+				member->port_id, slow_cnt);
 
 			TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-				"slave %u did not transmitted any packets", slave->port_id);
+				"member %u did not transmit any packets", member->port_id);
 		}
 
 		pkts_cnt += normal_cnt;
@@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
 	TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_members_and_stop_bonded_device();
 }
 
 static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct member_conf *member)
 {
 	struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
 			struct marker_header *);
@@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 	rte_ether_addr_copy(&parnter_mac_default,
 			&marker_hdr->eth_hdr.src_addr);
 	marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		member->port_id;
 
 	marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
@@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 			offsetof(struct marker, reserved_90) -
 			offsetof(struct marker, requester_port);
 	RTE_VERIFY(marker_hdr->marker.info_length == 16);
-	marker_hdr->marker.requester_port = slave->port_id + 1;
+	marker_hdr->marker.requester_port = member->port_id + 1;
 	marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
 	marker_hdr->marker.terminator_length = 0;
 }
@@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 static int
 test_mode4_marker(void)
 {
-	struct slave_conf *slave;
+	struct member_conf *member;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	struct rte_mbuf *marker_pkt;
 	struct marker_header *marker_hdr;
@@ -1196,7 +1198,7 @@ test_mode4_marker(void)
 	uint8_t i, j;
 	const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+	retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -1205,17 +1207,17 @@ test_mode4_marker(void)
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
 	delay = bond_get_update_timeout_ms();
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
-		init_marker(marker_pkt, slave);
+		init_marker(marker_pkt, member);
 
-		retval = slave_put_pkts(slave, &marker_pkt, 1);
+		retval = member_put_pkts(member, &marker_pkt, 1);
 		if (retval != 1)
 			rte_pktmbuf_free(marker_pkt);
 
 		TEST_ASSERT_EQUAL(retval, 1,
-			"Failed to send marker packet to slave %u", slave->port_id);
+			"Failed to send marker packet to member %u", member->port_id);
 
 		for (j = 0; j < 20; ++j) {
 			rte_delay_ms(delay);
@@ -1233,13 +1235,13 @@ test_mode4_marker(void)
 
 			/* Check if LACP packet was send by state machines
 			   First and only packet must be a maker response */
-			retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+			retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
 			if (retval == 0)
 				continue;
 			if (retval > 1)
 				free_pkts(pkts, retval);
 
-			TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+			TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
 			nb_pkts = retval;
 
 			marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1265,7 @@ test_mode4_marker(void)
 		TEST_ASSERT(j < 20, "Marker response not found");
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval,	"Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1272,7 +1274,7 @@ test_mode4_marker(void)
 static int
 test_mode4_expired(void)
 {
-	struct slave_conf *slave, *exp_slave = NULL;
+	struct member_conf *member, *exp_member = NULL;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	int retval;
 	uint32_t old_delay;
@@ -1282,7 +1284,7 @@ test_mode4_expired(void)
 
 	struct rte_eth_bond_8023ad_conf conf;
 
-	retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
 						      0);
 	/* Set custom timeouts to make test last shorter. */
 	rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1300,8 @@ test_mode4_expired(void)
 
 	/* Wait for new settings to be applied. */
 	for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
-		FOR_EACH_SLAVE(j, slave)
-			bond_handshake_reply(slave);
+		FOR_EACH_MEMBER(j, member)
+			bond_handshake_reply(member);
 
 		rte_delay_ms(conf.update_timeout_ms);
 	}
@@ -1307,13 +1309,13 @@ test_mode4_expired(void)
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	/* Find first slave */
-	FOR_EACH_SLAVE(i, slave) {
-		exp_slave = slave;
+	/* Find first member */
+	FOR_EACH_MEMBER(i, member) {
+		exp_member = member;
 		break;
 	}
 
-	RTE_VERIFY(exp_slave != NULL);
+	RTE_VERIFY(exp_member != NULL);
 
 	/* When one of partners do not send or respond to LACP frame in
 	 * conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1327,16 @@ test_mode4_expired(void)
 		TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
 			retval);
 
-		FOR_EACH_SLAVE(i, slave) {
-			retval = bond_handshake_reply(slave);
+		FOR_EACH_MEMBER(i, member) {
+			retval = bond_handshake_reply(member);
 			TEST_ASSERT(retval >= 0, "Handshake failed");
 
-			/* Remove replay for slave that suppose to be expired. */
-			if (slave == exp_slave) {
-				while (rte_ring_count(slave->rx_queue) > 0) {
+			/* Remove reply for member that is supposed to be expired. */
+			if (member == exp_member) {
+				while (rte_ring_count(member->rx_queue) > 0) {
 					void *pkt = NULL;
 
-					rte_ring_dequeue(slave->rx_queue, &pkt);
+					rte_ring_dequeue(member->rx_queue, &pkt);
 					rte_pktmbuf_free(pkt);
 				}
 			}
@@ -1348,17 +1350,17 @@ test_mode4_expired(void)
 			retval);
 	}
 
-	/* After test only expected slave should be in EXPIRED state */
-	FOR_EACH_SLAVE(i, slave) {
-		if (slave == exp_slave)
-			TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
-				"Slave %u should be in expired.", slave->port_id);
+	/* After test only expected member should be in EXPIRED state */
+	FOR_EACH_MEMBER(i, member) {
+		if (member == exp_member)
+			TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
+				"Member %u should be in the EXPIRED state.", member->port_id);
 		else
-			TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
-				"Slave %u should be operational.", slave->port_id);
+			TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
+				"Member %u should be operational.", member->port_id);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
 	 *   . try to transmit lacpdu (should fail)
 	 *   . try to set collecting and distributing flags (should fail)
 	 * reconfigure w/external sm
-	 *   . transmit one lacpdu on each slave using new api
-	 *   . make sure each slave receives one lacpdu using the callback api
-	 *   . transmit one data pdu on each slave (should fail)
+	 *   . transmit one lacpdu on each member using new api
+	 *   . make sure each member receives one lacpdu using the callback api
+	 *   . transmit one data pdu on each member (should fail)
 	 *   . enable distribution and collection, send one data pdu each again
 	 */
 
 	int retval;
-	struct slave_conf *slave = NULL;
+	struct member_conf *member = NULL;
 	uint8_t i;
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < MEMBER_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]),
-				 "Slave should not allow manual LACP xmit");
+						member->port_id, lacp_tx_buf[i]),
+				 "Member should not allow manual LACP xmit");
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
 						test_params.bonded_port_id,
-						slave->port_id, 1),
-				 "Slave should not allow external state controls");
+						member->port_id, 1),
+				 "Member should not allow external state controls");
 	}
 
 	free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1430,13 +1432,13 @@ static int
 test_mode4_ext_lacp(void)
 {
 	int retval;
-	struct slave_conf *slave = NULL;
-	uint8_t all_slaves_done = 0, i;
+	struct member_conf *member = NULL;
+	uint8_t all_members_done = 0, i;
 	uint16_t nb_pkts;
 	const unsigned int delay = bond_get_update_timeout_ms();
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
-	struct rte_mbuf *buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
+	struct rte_mbuf *buf[MEMBER_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < MEMBER_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+	retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
 	for (i = 0; i < 30; ++i)
 		rte_delay_ms(delay);
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_MEMBER(i, member) {
 		retval = rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]);
+						member->port_id, lacp_tx_buf[i]);
 		TEST_ASSERT_SUCCESS(retval,
-				    "Slave should allow manual LACP xmit");
+				    "Member should allow manual LACP xmit");
 	}
 
 	nb_pkts = bond_tx(NULL, 0);
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
 
-	FOR_EACH_SLAVE(i, slave) {
-		nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
-		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+	FOR_EACH_MEMBER(i, member) {
+		nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
+		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
 				  nb_pkts, i);
-		slave_put_pkts(slave, buf, nb_pkts);
+		member_put_pkts(member, buf, nb_pkts);
 	}
 
 	nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 
 	/* wait for the periodic callback to run */
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	for (i = 0; i < 30 && all_members_done == 0; ++i) {
 		uint8_t s, total = 0;
 
 		rte_delay_ms(delay);
-		FOR_EACH_SLAVE(s, slave) {
-			total += lacpdu_rx_count[slave->port_id];
+		FOR_EACH_MEMBER(s, member) {
+			total += lacpdu_rx_count[member->port_id];
 		}
 
-		if (total >= SLAVE_COUNT)
-			all_slaves_done = 1;
+		if (total >= MEMBER_COUNT)
+			all_members_done = 1;
 	}
 
-	FOR_EACH_SLAVE(i, slave) {
-		TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
-				  "Slave port %u should have received 1 lacpdu (count=%u)",
-				  slave->port_id,
-				  lacpdu_rx_count[slave->port_id]);
+	FOR_EACH_MEMBER(i, member) {
+		TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
+				  "Member port %u should have received 1 lacpdu (count=%u)",
+				  member->port_id,
+				  lacpdu_rx_count[member->port_id]);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_members_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
 static int
 check_environment(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i, env_state;
-	uint16_t slaves[RTE_DIM(test_params.slave_ports)];
-	int slaves_count;
+	uint16_t members[RTE_DIM(test_params.member_ports)];
+	int members_count;
 
 	env_state = 0;
 	FOR_EACH_PORT(i, port) {
@@ -1540,20 +1542,20 @@ check_environment(void)
 			break;
 	}
 
-	slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
-			slaves, RTE_DIM(slaves));
+	members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
+			members, RTE_DIM(members));
 
-	if (slaves_count != 0)
+	if (members_count != 0)
 		env_state |= 0x10;
 
 	TEST_ASSERT_EQUAL(env_state, 0,
 		"Environment not clean (port %u):%s%s%s%s%s",
 		port->port_id,
-		env_state & 0x01 ? " slave rx queue not clean" : "",
-		env_state & 0x02 ? " slave tx queue not clean" : "",
-		env_state & 0x04 ? " port marked as enslaved" : "",
-		env_state & 0x80 ? " slave state is not reset" : "",
-		env_state & 0x10 ? " slave count not equal 0" : ".");
+		env_state & 0x01 ? " member rx queue not clean" : "",
+		env_state & 0x02 ? " member tx queue not clean" : "",
+		env_state & 0x04 ? " port still marked as a member" : "",
+		env_state & 0x80 ? " member state is not reset" : "",
+		env_state & 0x10 ? " member count not equal 0" : ".");
 
 
 	return TEST_SUCCESS;
@@ -1562,7 +1564,7 @@ check_environment(void)
 static int
 test_mode4_executor(int (*test_func)(void))
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	int test_result;
 	uint8_t i;
 	void *pkt;
@@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 
 		FOR_EACH_PORT(i, port) {
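
For reviewers less familiar with the renamed 802.3ad external-control path that test_mode4_ext_lacp() exercises, here is a minimal sketch (not part of the patch; send_manual_lacpdu() and its port/mempool parameters are illustrative) of handing a pre-built LACPDU to one member port once the bond has been created in mode 4 with the external state machine enabled:

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond_8023ad.h>
    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    /* Illustrative helper: push one hand-crafted LACPDU out of a member port. */
    static int
    send_manual_lacpdu(uint16_t bond_port, uint16_t member_port,
                       struct rte_mempool *mp, const struct lacpdu_header *tmpl)
    {
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        int ret;

        if (m == NULL)
            return -ENOMEM;

        /* The template carries ethertype RTE_ETHER_TYPE_SLOW, as in the test. */
        rte_memcpy(rte_pktmbuf_mtod(m, void *), tmpl, sizeof(*tmpl));
        rte_pktmbuf_pkt_len(m) = sizeof(*tmpl);
        rte_pktmbuf_data_len(m) = sizeof(*tmpl);

        ret = rte_eth_bond_8023ad_ext_slowtx(bond_port, member_port, m);
        if (ret != 0)
            rte_pktmbuf_free(m); /* on failure the caller still owns the mbuf */
        return ret;
    }

As in the tests above, the mbuf is only freed by the caller when the call fails.
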
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0..1f888b4771 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
 
 #define RXTX_RING_SIZE			1024
 #define RXTX_QUEUE_COUNT		4
 
 #define BONDED_DEV_NAME         ("net_bonding_rss")
 
-#define SLAVE_DEV_NAME_FMT      ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
+#define MEMBER_DEV_NAME_FMT      ("net_null%d")
+#define MEMBER_RXTX_QUEUE_FMT      ("rssconf_member%d_q%d")
 
 #define NUM_MBUFS 8191
 #define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-struct slave_conf {
+struct member_conf {
 	uint16_t port_id;
 	struct rte_eth_dev_info dev_info;
 
@@ -54,7 +54,7 @@ struct slave_conf {
 	uint8_t rss_key[40];
 	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
-	uint8_t is_slave;
+	uint8_t is_member;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
 };
 
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
 	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct member_conf member_ports[MEMBER_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
 static struct link_bonding_rssconf_unittest_params test_params  = {
 	.bond_port_id = INVALID_PORT_ID,
-	.slave_ports = {
-		[0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+	.member_ports = {
+		[0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
 	},
 	.mbuf_pool = NULL,
 };
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _port pointer to &test_params->member_ports[_i]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.member_ports, \
+		RTE_DIM(test_params.member_ports))
 
 static int
 configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
 }
 
 /**
- * Remove all slaves from bonding
+ * Remove all members from bonding
  */
 static int
-remove_slaves(void)
+remove_members(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct member_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+		port = &test_params.member_ports[n];
+		if (port->is_member) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
 					test_params.bond_port_id, port->port_id),
-					"Cannot remove slave %d from bonding", port->port_id);
-			port->is_slave = 0;
+					"Cannot remove member %d from bonding", port->port_id);
+			port->is_member = 0;
 		}
 	}
 
@@ -173,30 +173,30 @@ remove_slaves(void)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
 {
-	TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+	TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
 			"Failed to stop port %u", test_params.bond_port_id);
 	return TEST_SUCCESS;
 }
 
 /**
- * Add all slaves to bonding
+ * Add all members to bonding
  */
 static int
-bond_slaves(void)
+bond_members(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct member_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (!port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-					port->port_id), "Cannot attach slave %d to the bonding",
+		port = &test_params.member_ports[n];
+		if (!port->is_member) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+					port->port_id), "Cannot attach member %d to the bonding",
 					port->port_id);
-			port->is_slave = 1;
+			port->is_member = 1;
 		}
 	}
 
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
 }
 
 /**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if the member's RETA is synchronized with the bonding port. Returns 1 if member
  * port is synced with bonding port.
  */
 static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct member_conf *port)
 {
 	unsigned i;
 
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
 }
 
 /**
- * Fetch slaves RETA
+ * Fetch the member's RETA
  */
 static int
-slave_reta_fetch(struct slave_conf *port) {
+member_reta_fetch(struct member_conf *port) {
 	unsigned j;
 
 	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
 }
 
 /**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add a member to check if the member's configuration is synced with
+ * the bonding port's values after adding a new member.
  */
 static int
-slave_remove_and_add(void)
+member_remove_and_add(void)
 {
-	struct slave_conf *port = &(test_params.slave_ports[0]);
+	struct member_conf *port = &(test_params.member_ports[0]);
 
-	/* 1. Remove first slave from bonding */
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
-			port->port_id), "Cannot remove slave #d from bonding");
+	/* 1. Remove first member from bonding */
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
+			port->port_id), "Cannot remove member #d from bonding");
 
-	/* 2. Change removed (ex-)slave and bonding configuration to different
+	/* 2. Change removed (ex-)member and bonding configuration to different
 	 *    values
 	 */
 	reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
 	bond_reta_fetch();
 
 	reta_set(port->port_id, 2, port->dev_info.reta_size);
-	slave_reta_fetch(port);
+	member_reta_fetch(port);
 
 	TEST_ASSERT(reta_check_synced(port) == 0,
-			"Removed slave didn't should be synchronized with bonding port");
+			"Removed member should not be synchronized with bonding port");
 
-	/* 3. Add (ex-)slave and check if configuration changed*/
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-			port->port_id), "Cannot add slave");
+	/* 3. Add (ex-)member and check if configuration changed*/
+	TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+			port->port_id), "Cannot add member");
 
 	bond_reta_fetch();
-	slave_reta_fetch(port);
+	member_reta_fetch(port);
 
 	return reta_check_synced(port);
 }
 
 /**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over members.
  */
 static int
 test_propagate(void)
 {
 	unsigned i;
 	uint8_t n;
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t bond_rss_key[40];
 	struct rte_eth_rss_conf bond_rss_conf;
 
@@ -349,18 +349,18 @@ test_propagate(void)
 
 			retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
 					&bond_rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
 
 			FOR_EACH_PORT(n, port) {
-				port = &test_params.slave_ports[n];
+				port = &test_params.member_ports[n];
 
 				retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						&port->rss_conf);
 				TEST_ASSERT_SUCCESS(retval,
-						"Cannot take slaves RSS configuration");
+						"Cannot take members RSS configuration");
 
 				TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
-						"Hash function not propagated for slave %d",
+						"Hash function not propagated for member %d",
 						port->port_id);
 			}
 
@@ -376,11 +376,11 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			memset(port->rss_conf.rss_key, 0, 40);
 			retval = rte_eth_dev_rss_hash_update(port->port_id,
 					&port->rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
 		}
 
 		memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
 		TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 
 			retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 					&(port->rss_conf));
 
 			TEST_ASSERT_SUCCESS(retval,
-					"Cannot take slaves RSS configuration");
+					"Cannot take members RSS configuration");
 
 			/* compare keys */
 			retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
 					sizeof(bond_rss_key));
-			TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+			TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
 					port->port_id);
 		}
 	}
@@ -416,10 +416,10 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					port->dev_info.reta_size);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
 		}
 
 		TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
 		bond_reta_fetch();
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 
-			slave_reta_fetch(port);
+			member_reta_fetch(port);
 			TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
 		}
 	}
@@ -459,29 +459,29 @@ test_rss(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
 
-	TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+	TEST_ASSERT(member_remove_and_add() == 1, "remove and add members failed.");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
 
 
 /**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and members.
  */
 static int
 test_rss_config_lazy(void)
 {
 	struct rte_eth_rss_conf bond_rss_conf = {0};
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t rss_key[40];
 	uint64_t rss_hf;
 	int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
 		TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
 	}
 
-	/* Set all keys to zero for all slaves */
+	/* Set all keys to zero for all members */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.member_ports[n];
 		retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						       &port->rss_conf);
-		TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+		TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
 		memset(port->rss_key, 0, sizeof(port->rss_key));
 		port->rss_conf.rss_key = port->rss_key;
 		port->rss_conf.rss_key_len = sizeof(port->rss_key);
 		retval = rte_eth_dev_rss_hash_update(port->port_id,
 						     &port->rss_conf);
-		TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+		TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
 	}
 
 	/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
 	/*  Test RETA propagation */
 	for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.member_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					  port->dev_info.reta_size);
-			TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+			TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
 		}
 
 		retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_members_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
@@ -579,13 +579,13 @@ test_setup(void)
 	int retval;
 	int port_id;
 	char name[256];
-	struct slave_conf *port;
+	struct member_conf *port;
 	struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
 
 	if (test_params.mbuf_pool == NULL) {
 
 		test_params.mbuf_pool = rte_pktmbuf_pool_create(
-			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+			"RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
 			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
 
 		TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.member_ports[n];
 
 		port_id = rte_eth_dev_count_avail();
-		snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+		snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
 
 		retval = rte_vdev_init(name, "size=64,copy=0");
 		TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct member_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 	}
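
The RSS behaviour this test suite relies on -- program the bonding port and let the bonding PMD push the setting to every member -- can be illustrated with a small standalone sketch (not from the patch; rss_key_synced() and the 40-byte key length are assumptions matching the test above):

    #include <stdint.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Illustrative check: does a member mirror the RSS key set on the bond? */
    static int
    rss_key_synced(uint16_t bond_port, uint16_t member_port)
    {
        uint8_t bond_key[40], member_key[40];
        struct rte_eth_rss_conf conf = {
            .rss_key = bond_key,
            .rss_key_len = sizeof(bond_key),
        };

        /* Read the current bonding RSS config, then rewrite only the key. */
        if (rte_eth_dev_rss_hash_conf_get(bond_port, &conf) != 0)
            return 0;
        memset(bond_key, 0xab, sizeof(bond_key));
        if (rte_eth_dev_rss_hash_update(bond_port, &conf) != 0)
            return 0;

        /* The bonding PMD is expected to propagate the key to each member. */
        conf.rss_key = member_key;
        conf.rss_key_len = sizeof(member_key);
        if (rte_eth_dev_rss_hash_conf_get(member_port, &conf) != 0)
            return 0;

        return memcmp(bond_key, member_key, sizeof(bond_key)) == 0;
    }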
 
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214e..c06d1bc43c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
 ----------
 
 A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMD's are added as members to the bonded device.
+The VF is set as the primary member of the bonded device.
 
 A bridge must be set up on the Host connecting the tap device, which is the
 backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
 
    testpmd> create bonded device 1 0
    Created new bonded device net_bond_testpmd_0 on (port 2).
-   testpmd> add bonding slave 0 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding member 0 2
+   testpmd> add bonding member 1 2
    testpmd> show bonding config 2
 
 The syntax of the ``testpmd`` command is:
 
-set bonding primary (slave id) (port id)
+set bonding primary (member id) (port id)
 
 Set primary to P1 before starting bonding port.
 
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
 
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
 
 Use P2 only for forwarding.
 
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
    testpmd> start
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
 
 .. code-block:: console
 
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
 
    testpmd> clear port stats all
    testpmd> set bonding primary 0 2
-   testpmd> remove bonding slave 1 2
+   testpmd> remove bonding member 1 2
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
 
 .. code-block:: console
 
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
 
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
 
 .. code-block:: console
 
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
    testpmd> show port stats all.
    testpmd> show config fwd
    testpmd> show bonding config 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding member 1 2
    testpmd> set bonding primary 1 2
    testpmd> show bonding config 2
    testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
 
 .. code-block:: console
 
-   testpmd> remove bonding slave 0 2
+   testpmd> remove bonding member 0 2
    testpmd> show bonding config 2
    testpmd> port stop 0
    testpmd> port close 0
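
The testpmd sequence in this guide has a direct C equivalent; a hypothetical sketch (not part of the patch -- the device name and helper are illustrative) using the renamed member API would look like:

    #include <rte_eth_bond.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Illustrative mode-1 (active backup) bond: virtio + VF, VF as primary. */
    static int
    setup_active_backup_bond(uint16_t virtio_port, uint16_t vf_port)
    {
        int bond_port = rte_eth_bond_create("net_bonding0",
                                            BONDING_MODE_ACTIVE_BACKUP,
                                            rte_socket_id());
        if (bond_port < 0)
            return bond_port;

        if (rte_eth_bond_member_add(bond_port, virtio_port) != 0 ||
            rte_eth_bond_member_add(bond_port, vf_port) != 0)
            return -1;

        /* Keep the VF as the primary member, as the guide recommends. */
        if (rte_eth_bond_primary_set(bond_port, vf_port) != 0)
            return -1;

        return rte_eth_dev_start(bond_port);
    }
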
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a..43b2622022 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
 
 .. code-block:: console
 
-    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
-    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -- -i --port-topology=chained
+    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -- -i --port-topology=chained
 
 Vector Processing
 -----------------
diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
index 7c81b856b7..5a9271facf 100644
--- a/doc/guides/prog_guide/img/bond-mode-1.svg
+++ b/doc/guides/prog_guide/img/bond-mode-1.svg
@@ -53,7 +53,7 @@
      v:langID="1033"
      v:metric="true"
      v:viewMarkup="false"><v:userDefs><v:ud
-         v:nameU="msvSubprocessMaster"
+         v:nameU="msvSubprocessMain"
          v:prompt=""
          v:val="VT4(Rectangle)" /><v:ud
          v:nameU="msvNoAutoConnect"
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e35..519a364105 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
 The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
 ``rte_eth_dev`` ports of the same speed and duplex to provide similar
 capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (member) NICs into a single logical interface between a server
 and a switch. The new bonded PMD will then process these interfaces based on
 the mode of operation specified to provide support for features such as
 redundant links, fault tolerance and/or load balancing.
 
 The librte_net_bond library exports a C API which provides an API for the
 creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its member devices.
 
 .. note::
 
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides load balancing and fault tolerance by transmission of
-    packets in sequential order from the first available slave device through
+    packets in sequential order from the first available member device through
     the last. Packets are bulk dequeued from devices then serviced in a
     round-robin manner. This mode does not guarantee in order reception of
     packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Active Backup (Mode 1)
 
 
-    In this mode only one slave in the bond is active at any time, a different
-    slave becomes active if, and only if, the primary active slave fails,
-    thereby providing fault tolerance to slave failure. The single logical
+    In this mode only one member in the bond is active at any time, a different
+    member becomes active if, and only if, the primary active member fails,
+    thereby providing fault tolerance to member failure. The single logical
     bonded interface's MAC address is externally visible on only one NIC (port)
     to avoid confusing the network switch.
 
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
     This mode provides transmit load balancing (based on the selected
     transmission policy) and fault tolerance. The default policy (layer2) uses
     a simple calculation based on the packet flow source and destination MAC
-    addresses as well as the number of active slaves available to the bonded
-    device to classify the packet to a specific slave to transmit on. Alternate
+    addresses as well as the number of active members available to the bonded
+    device to classify the packet to a specific member to transmit on. Alternate
     transmission policies supported are layer 2+3, this takes the IP source and
-    destination addresses into the calculation of the transmit slave port and
+    destination addresses into the calculation of the transmit member port and
     the final supported policy is layer 3+4, this uses IP source and
     destination addresses as well as the TCP/UDP source and destination port.
 
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Broadcast (Mode 3)
 
 
-    This mode provides fault tolerance by transmission of packets on all slave
+    This mode provides fault tolerance by transmission of packets on all member
     ports.
 
 *   **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
        intervals period of less than 100ms.
 
     #. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
-       where N is the number of slaves. This is a space required for LACP
+       where N is the number of members. This is a space required for LACP
        frames. Additionally LACP packets are included in the statistics, but
        they are not returned to the application.
 
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides an adaptive transmit load balancing. It dynamically
-    changes the transmitting slave, according to the computed load. Statistics
+    changes the transmitting member, according to the computed load. Statistics
     are collected in 100ms intervals and scheduled every 10ms.
 
 
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
 startup time during EAL initialization using the ``--vdev`` option as well as
 programmatically via the C API ``rte_eth_bond_create`` function.
 
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamical addition and removal of member devices using
+the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
 
-After a slave device is added to a bonded device slave is stopped using
+After a member device is added to a bonded device, the member is stopped using
 ``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
 the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
 device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+member and configured as well.
 Any flow which was configured to the bond device also is configured to the added
-slave.
+member.
 
 Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all members are synchronized with its configuration. This mode is
+intended to make the RSS configuration of members transparent to the client
 application implementation.
 
 Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its members. This allows defining the meaning
 of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without referring to any individual member. This is required to ensure
 consistency and made it more error-proof.
 
 RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded members. RETA size is the GCD of all members' RETA sizes, so
+it can be easily used as a pattern providing expected behavior, even if member
 RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the members and the default key for the device is used.
 
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with the RSS configuration, flow consistency is maintained in the bonded members for the
 next rte flow operations:
 
 Validate:
-	- Validate flow for each slave, failure at least for one slave causes to
+	- Validate the flow for each member; a failure for at least one member causes
 	  bond validation failure.
 
 Create:
-	- Create the flow in all slaves.
-	- Save all the slaves created flows objects in bonding internal flow
+	- Create the flow in all members.
+	- Save all the members' created flow objects in the bonding internal flow
 	  structure.
-	- Failure in flow creation for existed slave rejects the flow.
-	- Failure in flow creation for new slaves in slave adding time rejects
-	  the slave.
+	- Failure in flow creation for an existing member rejects the flow.
+	- Failure in flow creation for a new member at member add time rejects
+	  the member.
 
 Destroy:
-	- Destroy the flow in all slaves and release the bond internal flow
+	- Destroy the flow in all members and release the bond internal flow
 	  memory.
 
 Flush:
-	- Destroy all the bonding PMD flows in all the slaves.
+	- Destroy all the bonding PMD flows in all the members.
 
 .. note::
 
-    Don't call slaves flush directly, It destroys all the slave flows which
+    Don't call a member's flush directly; it destroys all the member flows which
     may include external flows or the bond internal LACP flow.
 
 Query:
-	- Summarize flow counters from all the slaves, relevant only for
+	- Summarize flow counters from all the members, relevant only for
 	  ``RTE_FLOW_ACTION_TYPE_COUNT``.
 
 Isolate:
-	- Call to flow isolate for all slaves.
-	- Failure in flow isolation for existed slave rejects the isolate mode.
-	- Failure in flow isolation for new slaves in slave adding time rejects
-	  the slave.
+	- Call to flow isolate for all members.
+	- Failure in flow isolation for an existing member rejects the isolate mode.
+	- Failure in flow isolation for a new member at member add time rejects
+	  the member.
 
 All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to members).
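
To make the flow-consistency rules above concrete, here is a minimal sketch (not from the patch; the ETH/DROP rule is purely illustrative) of driving rte_flow through the bonding port so that the PMD validates and replicates the rule on every member:

    #include <stdint.h>
    #include <rte_flow.h>

    /* Illustrative: one drop rule created on the bond, replicated to members. */
    static struct rte_flow *
    add_drop_rule_on_bond(uint16_t bond_port)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        /* Validation fails if at least one member rejects the rule. */
        if (rte_flow_validate(bond_port, &attr, pattern, actions, &err) != 0)
            return NULL;

        return rte_flow_create(bond_port, &attr, pattern, actions, &err);
    }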
 
 Link Status Change Interrupts / Polling
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
 Link bonding devices support the registration of a link status change callback,
 using the ``rte_eth_dev_callback_register`` API, this will be called when the
 status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 members, the link status will change to up when one member
+becomes active or change to down when all members become inactive. There is no
+callback notification when a single member changes state and the previous
+conditions are not met. If a user wishes to monitor individual members then they
+must register callbacks with that member directly.
 
 The link bonding library also supports devices which do not implement link
 status change interrupts, this is achieved by polling the devices link status at
 a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a member to
 a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
 whether the device supports interrupts or whether the link status should be
 monitored by polling it.
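A minimal sketch of the two mechanisms described above, assuming a bonded port id ``bond_port``;
the callback fires only for the bonding device itself, while the polling interval applies to
members without LSC support:

.. code-block:: c

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* Invoked by the ethdev layer when the bonding port link changes state. */
    static int
    bond_lsc_cb(uint16_t port_id, enum rte_eth_event_type event,
                void *cb_arg, void *ret_param)
    {
        struct rte_eth_link link;

        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);

        if (rte_eth_link_get_nowait(port_id, &link) == 0)
            printf("bonding port %u link is %s\n", port_id,
                   link.link_status ? "up" : "down");
        return 0;
    }

    /* Register for LSC events and poll non-LSC members every 100 ms
     * instead of the 10 ms default. */
    static int
    setup_bond_link_monitoring(uint16_t bond_port)
    {
        int ret = rte_eth_dev_callback_register(bond_port,
                RTE_ETH_EVENT_INTR_LSC, bond_lsc_cb, NULL);

        if (ret != 0)
            return ret;
        return rte_eth_bond_link_monitoring_set(bond_port, 100);
    }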
@@ -233,30 +233,30 @@ Requirements / Limitations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as members to the same bonded device. The bonded device
+inherits these attributes from the first active member added to the bonded
+device and then all further members added to the bonded device must support
 these parameters.
 
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one member before the bonding device
 itself can be started.
 
 To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all members be RSS-capable and support at least one
 common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all member devices support the same key size.
 
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how members process packets, once a device is added
 to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the member.
 
 Like all other PMD, all functions exported by a PMD are lock-free functions
 that are assumed not to be invoked in parallel on different logical cores to
 work on the same target object.
 
 It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on member devices after they have been added to a bonded device, since
+packets read directly from the member device will no longer be available to the
 bonded device to read.
 
 Configuration
@@ -265,25 +265,25 @@ Configuration
 Link bonding devices are created using the ``rte_eth_bond_create`` API
 which requires a unique device name, the bonding mode,
 and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its member devices,
+its primary member, a user defined MAC address and transmission policy to use if
 the device is in balance XOR mode.
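A minimal sketch of programmatic creation, assuming balance (XOR) mode on NUMA socket 0; the
returned value is the port id of the new bonded device or a negative error code:

.. code-block:: c

    #include <rte_eth_bond.h>

    static int
    create_balance_bond(void)
    {
        /* The name becomes the vdev name of the bonded device. */
        return rte_eth_bond_create("net_bonding0", BONDING_MODE_BALANCE, 0);
    }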
 
-Slave Devices
-^^^^^^^^^^^^^
+Member Devices
+^^^^^^^^^^^^^^
 
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
+of the same speed and duplex. Ethernet devices can be added as a member to a
+maximum of one bonded device. Member devices are reconfigured with the
 configuration of the bonded device on being added to a bonded device.
 
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the member
+device to its original value on removal of a member from it.
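A minimal sketch using the renamed member API introduced by this series, assuming ``bond_port``
and ``member_port`` are valid port ids; on removal the member is expected to get its original MAC
address back:

.. code-block:: c

    #include <rte_eth_bond.h>

    static int
    attach_and_detach_member(uint16_t bond_port, uint16_t member_port)
    {
        int ret = rte_eth_bond_member_add(bond_port, member_port);

        if (ret != 0)
            return ret;

        /* ... traffic flows through the bonded device here ... */

        return rte_eth_bond_member_remove(bond_port, member_port);
    }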
 
-Primary Slave
-^^^^^^^^^^^^^
+Primary Member
+^^^^^^^^^^^^^^
 
-The primary slave is used to define the default port to use when a bonded
+The primary member is used to define the default port to use when a bonded
 device is in active backup mode. A different port will only be used if, and
 only if, the current primary port goes down. If the user does not specify a
 primary port it will default to being the first port added to the bonded device.
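A minimal sketch of selecting and reading back the primary member for active backup mode, assuming
``member_port`` was already added to ``bond_port``:

.. code-block:: c

    #include <stdio.h>
    #include <rte_eth_bond.h>

    static int
    select_primary(uint16_t bond_port, uint16_t member_port)
    {
        int ret = rte_eth_bond_primary_set(bond_port, member_port);

        if (ret == 0)
            printf("primary member is now port %d\n",
                   rte_eth_bond_primary_get(bond_port));
        return ret;
    }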
@@ -292,14 +292,14 @@ MAC Address
 ^^^^^^^^^^^
 
 The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all of the member devices depending on the
 operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC, all other members will retain their
+original MAC address. In modes 0, 2, 3 and 4, all member devices are configured with
 the bonded devices MAC address.
 
 If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary member's MAC address.
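A minimal sketch of overriding the bonded device MAC and then reverting to the primary member's
address; the address used below is purely illustrative:

.. code-block:: c

    #include <rte_ether.h>
    #include <rte_eth_bond.h>

    static int
    override_bond_mac(uint16_t bond_port)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = {0x00, 0x1e, 0x67, 0x1d, 0xfd, 0x1d},
        };
        int ret = rte_eth_bond_mac_address_set(bond_port, &mac);

        if (ret != 0)
            return ret;
        /* Fall back to inheriting the primary member's MAC again. */
        return rte_eth_bond_mac_address_reset(bond_port);
    }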
 
 Balance XOR Transmit Policies
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
 *   **Layer 2:**   Ethernet MAC address based balancing is the default
     transmission policy for Balance XOR bonding mode. It uses a simple XOR
     calculation on the source MAC address and destination MAC address of the
-    packet and then calculate the modulus of this value to calculate the slave
+    packet and then calculates the modulus of this value to select the member
     device to transmit the packet on.
 
 *   **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
     combination of source/destination MAC addresses and the source/destination
-    IP addresses of the data packet to decide which slave port the packet will
+    IP addresses of the data packet to decide which member port the packet will
     be transmitted on.
 
 *   **Layer 3 + 4:**  IP Address & UDP Port based  balancing uses a combination
     of source/destination IP Address and the source/destination UDP ports of
-    the packet of the data packet to decide which slave port the packet will be
+    the data packet to decide which member port the packet will be
     transmitted on.
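A minimal sketch of selecting the Layer 3 + 4 policy, assuming the bonded device is in balance XOR
mode:

.. code-block:: c

    #include <rte_eth_bond.h>

    static int
    use_l34_policy(uint16_t bond_port)
    {
        return rte_eth_bond_xmit_policy_set(bond_port,
                                            BALANCE_XMIT_POLICY_LAYER34);
    }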
 
 All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
 which will be used must be setup using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup``.
 
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Member devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
+APIs but at least one member device must be added to the link bonding device
 before it can be started using ``rte_eth_dev_start``.
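A minimal sketch of that bring-up order, assuming a single queue pair, a valid mbuf pool ``mp``
and a member port id ``member_port``:

.. code-block:: c

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mempool.h>
    #include <rte_eth_bond.h>

    static int
    start_bond(uint16_t bond_port, uint16_t member_port, struct rte_mempool *mp)
    {
        struct rte_eth_conf conf;
        int ret;

        memset(&conf, 0, sizeof(conf));
        ret = rte_eth_dev_configure(bond_port, 1, 1, &conf);
        if (ret != 0)
            return ret;
        ret = rte_eth_rx_queue_setup(bond_port, 0, 512, rte_socket_id(),
                                     NULL, mp);
        if (ret != 0)
            return ret;
        ret = rte_eth_tx_queue_setup(bond_port, 0, 512, rte_socket_id(), NULL);
        if (ret != 0)
            return ret;
        /* At least one member must be present before starting. */
        ret = rte_eth_bond_member_add(bond_port, member_port);
        if (ret != 0)
            return ret;
        return rte_eth_dev_start(bond_port);
    }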
 
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its members; if all
+member device links are down, or if all members are removed from the link
 bonding device then the link status of the bonding device will go down.
 
 It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
     where X can be any combination of numbers and/or letters,
     and the name is no greater than 32 characters long.
 
-*   A least one slave device is provided with for each bonded device definition.
+*   At least one member device is provided for each bonded device definition.
 
 *   The operation mode of the bonded device being created is provided.
 
@@ -404,20 +404,20 @@ The different options are:
 
         mode=2
 
-*   slave: Defines the PMD device which will be added as slave to the bonded
+*   member: Defines the PMD device which will be added as a member to the bonded
     device. This option can be selected multiple times, for each device to be
-    added as a slave. Physical devices should be specified using their PCI
+    added as a member. Physical devices should be specified using their PCI
     address, in the format domain:bus:devid.function
 
 .. code-block:: console
 
-        slave=0000:0a:00.0,slave=0000:0a:00.1
+        member=0000:0a:00.0,member=0000:0a:00.1
 
-*   primary: Optional parameter which defines the primary slave port,
-    is used in active backup mode to select the primary slave for data TX/RX if
+*   primary: Optional parameter which defines the primary member port,
+    is used in active backup mode to select the primary member for data TX/RX if
     it is available. The primary port also is used to select the MAC address to
-    use when it is not defined by the user. This defaults to the first slave
-    added to the device if it is specified. The primary device must be a slave
+    use when it is not defined by the user. If not specified, the primary defaults
+    to the first member added to the device. The primary device must be a member
     of the bonded device.
 
 .. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
         socket_id=0
 
 *   mac: Optional parameter to select a MAC address for link bonding device,
-    this overrides the value of the primary slave device.
+    this overrides the value of the primary member device.
 
 .. code-block:: console
 
@@ -474,29 +474,29 @@ The different options are:
 Examples of Usage
 ^^^^^^^^^^^^^^^^^
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two members specified by their PCI address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two members specified by their PCI address and an overriding MAC address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
 
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
 
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
 
 .. _bonding_testpmd_commands:
 
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
    testpmd> create bonded device 1 0
    created new bonded device (port X)
 
-add bonding slave
-~~~~~~~~~~~~~~~~~
+add bonding member
+~~~~~~~~~~~~~~~~~~
 
 Adds Ethernet device to a Link Bonding device::
 
-   testpmd> add bonding slave (slave id) (port id)
+   testpmd> add bonding member (member id) (port id)
 
 For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
 
-   testpmd> add bonding slave 6 10
+   testpmd> add bonding member 6 10
 
 
-remove bonding slave
-~~~~~~~~~~~~~~~~~~~~
+remove bonding member
+~~~~~~~~~~~~~~~~~~~~~
 
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet member device from a Link Bonding device::
 
-   testpmd> remove bonding slave (slave id) (port id)
+   testpmd> remove bonding member (member id) (port id)
 
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove Ethernet member device (port 6) from a Link Bonding device (port 10)::
 
-   testpmd> remove bonding slave 6 10
+   testpmd> remove bonding member 6 10
 
 set bonding mode
 ~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
 set bonding primary
 ~~~~~~~~~~~~~~~~~~~
 
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet member device as the primary device on a Link Bonding device::
 
-   testpmd> set bonding primary (slave id) (port id)
+   testpmd> set bonding primary (member id) (port id)
 
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
 
    testpmd> set bonding primary 6 10
 
@@ -590,7 +590,7 @@ set bonding mon_period
 
 Set the link status monitoring polling period in milliseconds for a bonding device.
 
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD member devices which do not support link status interrupts.
 When the mon_period is set to a value greater than 0 then all PMD's which do not support
 link status ISR will be queried every polling interval to check if their link status has changed::
 
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
 set bonding lacp dedicated_queue
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
 when in mode 4 (link-aggregation-802.3ad)::
 
    testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
    testpmd> show bonding config (port id)
 
 For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
 in balance mode with a transmission policy of layer 2+3::
 
    testpmd> show bonding config 9
      - Dev basic:
         Bonding mode: BALANCE(2)
         Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
-        Slaves (3): [1 3 4]
-        Active Slaves (3): [1 3 4]
+        Members (3): [1 3 4]
+        Active Members (3): [1 3 4]
         Primary: [3]
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada0..1fe85839ed 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
 	cmdline_fixed_string_t set;
 	cmdline_fixed_string_t bonding;
 	cmdline_fixed_string_t primary;
-	portid_t slave_id;
+	portid_t member_id;
 	portid_t port_id;
 };
 
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
 	struct cmd_set_bonding_primary_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* Set the primary slave for a bonded device. */
-	if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
-		fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
-			master_port_id);
+	/* Set the primary member for a bonded device. */
+	if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
+		fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
+			main_port_id);
 		return;
 	}
 	init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
 static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
 	TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
 		primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_member =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
-		slave_id, RTE_UINT16);
+		member_id, RTE_UINT16);
 static cmdline_parse_token_num_t cmd_setbonding_primary_port =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
 		port_id, RTE_UINT16);
 
 static cmdline_parse_inst_t cmd_set_bonding_primary = {
 	.f = cmd_set_bonding_primary_parsed,
-	.help_str = "set bonding primary <slave_id> <port_id>: "
-		"Set the primary slave for port_id",
+	.help_str = "set bonding primary <member_id> <port_id>: "
+		"Set the primary member for port_id",
 	.data = NULL,
 	.tokens = {
 		(void *)&cmd_setbonding_primary_set,
 		(void *)&cmd_setbonding_primary_bonding,
 		(void *)&cmd_setbonding_primary_primary,
-		(void *)&cmd_setbonding_primary_slave,
+		(void *)&cmd_setbonding_primary_member,
 		(void *)&cmd_setbonding_primary_port,
 		NULL
 	}
 };
 
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD MEMBER *** */
+struct cmd_add_bonding_member_result {
 	cmdline_fixed_string_t add;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t member;
+	portid_t member_id;
 	portid_t port_id;
 };
 
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_member_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_add_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_add_bonding_member_result *res = parsed_result;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* add the slave for a bonded device. */
-	if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+	/* add the member for a bonded device. */
+	if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to add slave %d to master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to add member %d to main port = %d.\n",
+			member_port_id, main_port_id);
 		return;
 	}
-	ports[master_port_id].update_conf = 1;
+	ports[main_port_id].update_conf = 1;
 	init_port_config();
-	set_port_slave_flag(slave_port_id);
+	set_port_member_flag(member_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_add =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
 		add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_member =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
+		member, "member");
+static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
+		member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_member_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
-	.f = cmd_add_bonding_slave_parsed,
-	.help_str = "add bonding slave <slave_id> <port_id>: "
-		"Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_member = {
+	.f = cmd_add_bonding_member_parsed,
+	.help_str = "add bonding member <member_id> <port_id>: "
+		"Add a member device to a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_addbonding_slave_add,
-		(void *)&cmd_addbonding_slave_bonding,
-		(void *)&cmd_addbonding_slave_slave,
-		(void *)&cmd_addbonding_slave_slaveid,
-		(void *)&cmd_addbonding_slave_port,
+		(void *)&cmd_addbonding_member_add,
+		(void *)&cmd_addbonding_member_bonding,
+		(void *)&cmd_addbonding_member_member,
+		(void *)&cmd_addbonding_member_memberid,
+		(void *)&cmd_addbonding_member_port,
 		NULL
 	}
 };
 
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE MEMBER *** */
+struct cmd_remove_bonding_member_result {
 	cmdline_fixed_string_t remove;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t member;
+	portid_t member_id;
 	portid_t port_id;
 };
 
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_member_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_remove_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_remove_bonding_member_result *res = parsed_result;
+	portid_t main_port_id = res->port_id;
+	portid_t member_port_id = res->member_id;
 
-	/* remove the slave from a bonded device. */
-	if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+	/* remove the member from a bonded device. */
+	if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to remove slave %d from master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to remove member %d from main port = %d.\n",
+			member_port_id, main_port_id);
 		return;
 	}
 	init_port_config();
-	clear_port_slave_flag(slave_port_id);
+	clear_port_member_flag(member_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_remove =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
 		remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_member =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
+		member, "member");
+static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
+		member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_member_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
-	.f = cmd_remove_bonding_slave_parsed,
-	.help_str = "remove bonding slave <slave_id> <port_id>: "
-		"Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_member = {
+	.f = cmd_remove_bonding_member_parsed,
+	.help_str = "remove bonding member <member_id> <port_id>: "
+		"Remove a member device from a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_removebonding_slave_remove,
-		(void *)&cmd_removebonding_slave_bonding,
-		(void *)&cmd_removebonding_slave_slave,
-		(void *)&cmd_removebonding_slave_slaveid,
-		(void *)&cmd_removebonding_slave_port,
+		(void *)&cmd_removebonding_member_remove,
+		(void *)&cmd_removebonding_member_bonding,
+		(void *)&cmd_removebonding_member_member,
+		(void *)&cmd_removebonding_member_memberid,
+		(void *)&cmd_removebonding_member_port,
 		NULL
 	}
 };
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
 	},
 	{
 		&cmd_set_bonding_primary,
-		"set bonding primary (slave_id) (port_id)\n"
-		"	Set the primary slave for a bonded device.\n",
+		"set bonding primary (member_id) (port_id)\n"
+		"	Set the primary member for a bonded device.\n",
 	},
 	{
-		&cmd_add_bonding_slave,
-		"add bonding slave (slave_id) (port_id)\n"
-		"	Add a slave device to a bonded device.\n",
+		&cmd_add_bonding_member,
+		"add bonding member (member_id) (port_id)\n"
+		"	Add a member device to a bonded device.\n",
 	},
 	{
-		&cmd_remove_bonding_slave,
-		"remove bonding slave (slave_id) (port_id)\n"
-		"	Remove a slave device from a bonded device.\n",
+		&cmd_remove_bonding_member,
+		"remove bonding member (member_id) (port_id)\n"
+		"	Remove a member device from a bonded device.\n",
 	},
 	{
 		&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1..9d35d8aa47 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
 #include "rte_eth_bond_8023ad.h"
 
 #define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS  100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS        3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS        1
+/** Maximum number of packets to one member queued in TX ring. */
+#define BOND_MODE_8023AX_Member_RX_PKTS        3
+/** Maximum number of LACP packets from one member queued in TX ring. */
+#define BOND_MODE_8023AX_Member_TX_PKTS        1
 /**
  * Timeouts definitions (5.4.4 in 802.1AX documentation).
  */
@@ -113,7 +113,7 @@ struct port {
 	enum rte_bond_8023ad_selection selected;
 
 	/** Indicates if either allmulti or promisc has been enforced on the
-	 * slave so that we can receive lacp packets
+	 * member so that we can receive lacp packets
 	 */
 #define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
 #define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
 	uint8_t external_sm;
 	struct rte_ether_addr mac_addr;
 
-	struct rte_eth_link slave_link;
-	/***< slave link properties */
+	struct rte_eth_link member_link;
+	/**< member link properties */
 
 	/**
 	 * Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
 /**
  * @internal
  *
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active members on bonded interface.
  *
  * @param dev Bonded interface
  * @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
 /**
  * @internal
  *
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and members.
  *
  * @param dev Bonded interface
  * @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
  *
  * Passes given slow packet to state machines management logic.
  * @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param member_id Member port id.
  * @param slot_pkt Slow packet.
  */
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				 uint16_t slave_id, struct rte_mbuf *pkt);
+				 uint16_t member_id, struct rte_mbuf *pkt);
 
 /**
  * @internal
  *
- * Appends given slave used slave
+ * Activates given member and appends it to the set of used members.
  *
  * @param dev       Bonded interface.
- * @param port_id   Slave port ID to be added
+ * @param port_id   Member port ID to be added
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
 
 /**
  * @internal
  *
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given member from 802.1AX mode.
  *
  * @param dev       Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param member_num Position of member in active_members array
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
 
 /**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its members.
  * @param bond_dev Bonded device
  */
 void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port);
+		uint16_t member_port);
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
 
 int
 bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
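As a side note on the mode 4 internals above, the public knobs that map to them can be driven as
in this minimal sketch, assuming the bonded port is in mode 4 and has not been started yet:

.. code-block:: c

    #include <rte_eth_bond_8023ad.h>

    static int
    tune_mode4(uint16_t bond_port)
    {
        /* Prefer the aggregator with the highest total bandwidth. */
        int ret = rte_eth_bond_8023ad_agg_selection_set(bond_port,
                                                        AGG_BANDWIDTH);

        if (ret != 0)
            return ret;
        /* Steer LACP control traffic to dedicated member queues. */
        return rte_eth_bond_8023ad_dedicated_queues_enable(bond_port);
    }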
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4..93d03b0a79 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,8 +18,8 @@
 #include "eth_bond_8023ad_private.h"
 #include "rte_eth_bond_alb.h"
 
-#define PMD_BOND_SLAVE_PORT_KVARG			("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG		("primary")
+#define PMD_BOND_MEMBER_PORT_KVARG			("member")
+#define PMD_BOND_PRIMARY_MEMBER_KVARG		("primary")
 #define PMD_BOND_MODE_KVARG					("mode")
 #define PMD_BOND_AGG_MODE_KVARG				("agg_mode")
 #define PMD_BOND_XMIT_POLICY_KVARG			("xmit_policy")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
 /** Port Queue Mapping Structure */
 struct bond_rx_queue {
 	uint16_t queue_id;
-	/**< Next active_slave to poll */
-	uint16_t active_slave;
+	/**< Next active_member to poll */
+	uint16_t active_member;
 	/**< Queue Id */
 	struct bond_dev_private *dev_private;
 	/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
 	/**< Copy of TX configuration structure for queue */
 };
 
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
-	uint16_t slaves[RTE_MAX_ETHPORTS];	/**< Slave port id array */
-	uint16_t slave_count;				/**< Number of slaves */
+/** Bonded member devices structure */
+struct bond_ethdev_member_ports {
+	uint16_t members[RTE_MAX_ETHPORTS];	/**< Member port id array */
+	uint16_t member_count;				/**< Number of members */
 };
 
-struct bond_slave_details {
+struct bond_member_details {
 	uint16_t port_id;
 
 	uint8_t link_status_poll_enabled;
 	uint8_t link_status_wait_to_complete;
 	uint8_t last_link_status;
-	/**< Port Id of slave eth_dev */
+	/**< Port Id of member eth_dev */
 	struct rte_ether_addr persisted_mac_addr;
 
 	uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
 
 struct rte_flow {
 	TAILQ_ENTRY(rte_flow) next;
-	/* Slaves flows */
+	/* Members flows */
 	struct rte_flow *flows[RTE_MAX_ETHPORTS];
 	/* Flow description for synchronization */
 	struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
 };
 
 typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 /** Link Bonding PMD device private configuration Structure */
 struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
 	rte_spinlock_t lock;
 	rte_spinlock_t lsc_lock;
 
-	uint16_t primary_port;			/**< Primary Slave Port */
-	uint16_t current_primary_port;		/**< Primary Slave Port */
+	uint16_t primary_port;			/**< Primary Member Port */
+	uint16_t current_primary_port;		/**< Primary Member Port */
 	uint16_t user_defined_primary_port;
 	/**< Flag for whether primary port is user defined or not */
 
@@ -137,16 +137,16 @@ struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/
 
-	uint16_t active_slave_count;		/**< Number of active slaves */
-	uint16_t active_slaves[RTE_MAX_ETHPORTS];    /**< Active slave list */
+	uint16_t active_member_count;		/**< Number of active members */
+	uint16_t active_members[RTE_MAX_ETHPORTS];    /**< Active member list */
 
-	uint16_t slave_count;			/**< Number of bonded slaves */
-	struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
-	/**< Array of bonded slaves details */
+	uint16_t member_count;			/**< Number of bonded members */
+	struct bond_member_details members[RTE_MAX_ETHPORTS];
+	/**< Array of bonded members details */
 
 	struct mode8023ad_private mode4;
-	uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
-	/**< TLB active slaves send order */
+	uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
+	/**< TLB active members send order */
 	struct mode_alb_private mode6;
 
 	uint64_t rx_offload_capa;       /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
 
 	struct rte_kvargs *kvlist;
-	uint8_t slave_update_idx;
+	uint8_t member_update_idx;
 
 	bool kvargs_processing_is_done;
 
@@ -191,19 +191,21 @@ struct bond_dev_private {
 extern const struct eth_dev_ops default_dev_ops;
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
 int
 check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/*
+ * Search given member array to find position of given id.
+ * Return member pos or members_count if not found.
+ */
 static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
 
 	uint16_t pos;
-	for (pos = 0; pos < slaves_count; pos++) {
-		if (slave_id == slaves[pos])
+	for (pos = 0; pos < members_count; pos++) {
+		if (member_id == members[pos])
 			break;
 	}
 
@@ -217,13 +219,13 @@ int
 valid_bonded_port_id(uint16_t port_id);
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 int
 mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
 		struct rte_ether_addr *dst_mac_addr);
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
 
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id);
 
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id);
 
 int
 bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev);
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+member_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev);
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+member_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+member_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev);
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t member_count, uint16_t *members);
 
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id);
+		uint16_t member_port_id);
 
 int
 bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		void *param, void *ret_param);
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_member_mode_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args);
 
 int
@@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
@@ -323,7 +325,7 @@ void
 bond_tlb_enable(struct bond_dev_private *internals);
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_member(struct bond_dev_private *internals);
 
 int
 bond_ethdev_stop(struct rte_eth_dev *eth_dev);
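Since the ``member`` and ``primary`` kvargs above replace the old ``slave``-based ones, a minimal
sketch of probing the PMD from code with the new keys (assuming ``rte_vdev_init`` from the vdev
bus and illustrative PCI addresses):

.. code-block:: c

    #include <rte_bus_vdev.h>

    static int
    create_bond_vdev(void)
    {
        /* Same key/value format as the EAL --vdev option. */
        return rte_vdev_init("net_bonding0",
                             "mode=1,member=0000:0a:00.01,"
                             "member=0000:04:00.00,primary=0000:0a:00.01");
    }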
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..b90242264d 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
  *
  * RTE Link Bonding Ethernet Device
  * Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * (member) NICs into a single logical interface. The bonded device processes
  * these interfaces based on the mode of operation specified and supported.
  * This implementation supports 4 modes of operation round robin, active backup
  * balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,28 @@ extern "C" {
 #define BONDING_MODE_ROUND_ROBIN		(0)
 /**< Round Robin (Mode 0).
  * In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active members of the bonded device in a round robin fashion.
+ */
 #define BONDING_MODE_ACTIVE_BACKUP		(1)
 /**< Active Backup (Mode 1).
  * In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
- * available if not specified. */
+ * member until such point as the primary member is no longer available and then
+ * transmitted packets will be sent on the next available members. The primary
+ * member can be defined by the user but defaults to the first active member
+ * available if not specified.
+ */
 #define BONDING_MODE_BALANCE			(2)
 /**< Balance (Mode 2).
  * In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * members using one of three available transmit policies - l2, l2+3 or l3+4.
  * See BALANCE_XMIT_POLICY macros definitions for further details on transmit
- * policies. */
+ * policies.
+ */
 #define BONDING_MODE_BROADCAST			(3)
 /**< Broadcast (Mode 3).
  * In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active members of the bonded device.
+ */
 #define BONDING_MODE_8023AD				(4)
 /**< 802.3AD (Mode 4).
  *
@@ -62,22 +66,22 @@ extern "C" {
  * be handled with the expected latency and this may cause the link status to be
  * incorrectly marked as down or failure to correctly negotiate with peers.
  * - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
- *
+ * to rx_burst should be at least 2 times the member count.
  */
 #define BONDING_MODE_TLB	(5)
 /**< Adaptive TLB (Mode 5)
  * This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
- * are collected in 100ms intervals and scheduled every 10ms */
+ * changes the transmitting member, according to the computed load. Statistics
+ * are collected in 100ms intervals and scheduled every 10ms.
+ */
 #define BONDING_MODE_ALB	(6)
 /**< Adaptive Load Balancing (Mode 6)
  * This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
  * bonding driver intercepts ARP replies send by local system and overwrites its
  * source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different member interfaces. When local system sends ARP request, it saves IP
  * information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of the member MACs is assigned, and an ARP reply is sent to that peer.
  */
 
 /* Balance Mode Transmit Policies */
@@ -113,28 +117,44 @@ int
 rte_eth_bond_free(const char *name);
 
 /**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a member to the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+	return rte_eth_bond_member_add(bonded_port_id, member_port_id);
+}
 
 /**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a member rte_eth_dev device from the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t member_port_id)
+{
+	return rte_eth_bond_member_remove(bonded_port_id, member_port_id);
+}
 
 /**
  * Set link bonding mode of bonded device
@@ -160,65 +180,83 @@ int
 rte_eth_bond_mode_get(uint16_t bonded_port_id);
 
 /**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set member rte_eth_dev as primary member of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param member_port_id		Port ID of member device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
 
 /**
- * Get primary slave of bonded device
+ * Get primary member of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
  * @return
- *	Port Id of primary slave on success, -1 on failure
+ *	Port Id of primary member on success, -1 on failure
  */
 int
 rte_eth_bond_primary_get(uint16_t bonded_port_id);
 
 /**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with the list of member port IDs of the bonded device
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param members			Array to be populated with the current members
+ * @param len				Length of members array
  *
  * @return
- *	Number of slaves associated with bonded device on success,
+ *	Number of members associated with bonded device on success,
  *	negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
-			uint16_t len);
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len)
+{
+	return rte_eth_bond_members_get(bonded_port_id, members, len);
+}
 
 /**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with the list of active member port IDs of the bonded
  * device.
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param members			Array to be populated with the current active members
+ * @param len				Length of members array
  *
  * @return
- *	Number of active slaves associated with bonded device on success,
+ *	Number of active members associated with bonded device on success,
  *	negative value otherwise
  */
+__rte_experimental
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
-				uint16_t len);
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t members[],
+		uint16_t len)
+{
+	return rte_eth_bond_active_members_get(bonded_port_id, members, len);
+}
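A minimal usage sketch for the two getters declared above, assuming a valid bonded port id:

.. code-block:: c

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    static void
    dump_member_state(uint16_t bond_port)
    {
        uint16_t members[RTE_MAX_ETHPORTS];
        int all, active;

        all = rte_eth_bond_members_get(bond_port, members, RTE_MAX_ETHPORTS);
        active = rte_eth_bond_active_members_get(bond_port, members,
                                                 RTE_MAX_ETHPORTS);
        if (all < 0 || active < 0)
            return;
        printf("bond %u: %d active of %d members\n", bond_port, active, all);
    }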
 
 /**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its members.
  *
  * @param bonded_port_id	Port ID of bonded device.
  * @param mac_addr			MAC Address to use on bonded device overriding
- *							slaves MAC addresses
+ *							members' MAC addresses
  *
  * @return
  *	0 on success, negative value otherwise
@@ -228,8 +266,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 		struct rte_ether_addr *mac_addr);
 
 /**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary member on bonded device and its
+ * members.
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
@@ -266,7 +304,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
 
 /**
  * Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * member devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  * @param internal_ms		Monitoring interval in milliseconds
@@ -280,7 +318,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
 
 /**
  * Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of member devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2ca..7cf44d0595 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
 #define MODE4_DEBUG(fmt, ...)				\
 	rte_log(RTE_LOG_DEBUG, bond_logtype,		\
 		"%6u [Port %u: %s] " fmt,		\
-		bond_dbg_get_time_diff_ms(), slave_id,	\
+		bond_dbg_get_time_diff_ms(), member_id,	\
 		__func__, ##__VA_ARGS__)
 
 static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
 }
 
 static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	uint8_t warnings;
 
 	do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
 
 	if (warnings & WRN_RX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+			     "Member %u: failed to enqueue LACP packet into RX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will notwork correctly",
-			     slave_id);
+			     member_id);
 	}
 
 	if (warnings & WRN_TX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+			     "Member %u: failed to enqueue LACP packet into TX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will not work correctly",
-			     slave_id);
+			     member_id);
 	}
 
 	if (warnings & WRN_RX_MARKER_TO_FAST)
-		RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Member %u: marker too early - ignoring.",
+			     member_id);
 
 	if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
 		RTE_BOND_LOG(INFO,
-			"Slave %u: ignoring unknown slow protocol frame type",
-			     slave_id);
+			"Member %u: ignoring unknown slow protocol frame type",
+			     member_id);
 	}
 
 	if (warnings & WRN_UNKNOWN_MARKER_TYPE)
-		RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
+			     member_id);
 
 	if (warnings & WRN_NOT_LACP_CAPABLE)
-		MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+		MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
 }
 
 static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
  * @param port			Port on which LACPDU was received.
  */
 static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t member_id,
 		struct lacpdu *lacp)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
 	uint64_t timeout;
 
 	if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
  * @param port			Port to handle state machine.
  */
 static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	/* Calculate if either site is LACP enabled */
 	uint64_t timeout;
 	uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port			Port to handle state machine.
  */
 static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 
 	/* Save current state for later use */
 	const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing started.",
-					internals->port_id, slave_id);
+					"Bond %u: member id %u distributing started.",
+					internals->port_id, member_id);
 			}
 		} else {
 			if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing stopped.",
-					internals->port_id, slave_id);
+					"Bond %u: member id %u distributing stopped.",
+					internals->port_id, member_id);
 			}
 		}
 	}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port
  */
 static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t member_id)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
 
 	struct rte_mbuf *lacp_pkt = NULL;
 	struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 
 	/* Source and destination MAC */
 	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
-	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+	rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
 	hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
 	lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 			return;
 		}
 	} else {
-		uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+		uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, 1);
-		pkts_sent = rte_eth_tx_burst(slave_id,
+		pkts_sent = rte_eth_tx_burst(member_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, pkts_sent);
 		if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
  * @param port_pos			Port to assign.
  */
 static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t member_id)
 {
 	struct port *agg, *port;
-	uint16_t slaves_count, new_agg_id, i, j = 0;
-	uint16_t *slaves;
+	uint16_t members_count, new_agg_id, i, j = 0;
+	uint16_t *members;
 	uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
 	uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
-	uint16_t default_slave = 0;
+	uint16_t default_member = 0;
 	struct rte_eth_link link_info;
 	uint16_t agg_new_idx = 0;
 	int ret;
 
-	slaves = internals->active_slaves;
-	slaves_count = internals->active_slave_count;
-	port = &bond_mode_8023ad_ports[slave_id];
+	members = internals->active_members;
+	members_count = internals->active_member_count;
+	port = &bond_mode_8023ad_ports[member_id];
 
 	/* Search for aggregator suitable for this port */
-	for (i = 0; i < slaves_count; ++i) {
-		agg = &bond_mode_8023ad_ports[slaves[i]];
+	for (i = 0; i < members_count; ++i) {
+		agg = &bond_mode_8023ad_ports[members[i]];
 		/* Skip ports that are not aggregators */
-		if (agg->aggregator_port_id != slaves[i])
+		if (agg->aggregator_port_id != members[i])
 			continue;
 
-		ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+		ret = rte_eth_link_get_nowait(members[i], &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slaves[i], rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				members[i], rte_strerror(-ret));
 			continue;
 		}
 		agg_count[i] += 1;
 		agg_bandwidth[i] += link_info.link_speed;
 
-		/* Actors system ID is not checked since all slave device have the same
+		/* Actors system ID is not checked since all member devices have the same
 		 * ID (MAC address). */
 		if ((agg->actor.key == port->actor.key &&
 			agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
 
 			if (j == 0)
-				default_slave = i;
+				default_member = i;
 			j++;
 		}
 	}
 
 	switch (internals->mode4.agg_selection) {
 	case AGG_COUNT:
-		agg_new_idx = max_index(agg_count, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_count, members_count);
+		new_agg_id = members[agg_new_idx];
 		break;
 	case AGG_BANDWIDTH:
-		agg_new_idx = max_index(agg_bandwidth, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_bandwidth, members_count);
+		new_agg_id = members[agg_new_idx];
 		break;
 	case AGG_STABLE:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_member == members_count)
+			new_agg_id = members[member_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = members[default_member];
 		break;
 	default:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_member == members_count)
+			new_agg_id = members[member_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = members[default_member];
 		break;
 	}
 
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 		MODE4_DEBUG("-> SELECTED: ID=%3u\n"
 			"\t%s aggregator ID=%3u\n",
 			port->aggregator_port_id,
-			port->aggregator_port_id == slave_id ?
+			port->aggregator_port_id == member_id ?
 				"aggregator not found, using default" : "aggregator found",
 			port->aggregator_port_id);
 	}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
 }
 
 static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt) {
 	struct lacpdu_header *lacp;
 	struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
 
 		partner = &lacp->lacpdu.partner;
-		port = &bond_mode_8023ad_ports[slave_id];
+		port = &bond_mode_8023ad_ports[member_id];
 		agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
 
 		if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 			/* This LACP frame is sending to the bonding port
 			 * so pass it to rx_machine.
 			 */
-			rx_machine(internals, slave_id, &lacp->lacpdu);
+			rx_machine(internals, member_id, &lacp->lacpdu);
 		} else {
 			char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
 			char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		}
 		rte_pktmbuf_free(lacp_pkt);
 	} else
-		rx_machine(internals, slave_id, NULL);
+		rx_machine(internals, member_id, NULL);
 }
 
 static void
 bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
-			uint16_t slave_id)
+			uint16_t member_id)
 {
 #define DEDICATED_QUEUE_BURST_SIZE 32
 	struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
-	uint16_t rx_count = rte_eth_rx_burst(slave_id,
+	uint16_t rx_count = rte_eth_rx_burst(member_id,
 				internals->mode4.dedicated_queues.rx_qid,
 				lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
 
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
 		uint16_t i;
 
 		for (i = 0; i < rx_count; i++)
-			bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+			bond_mode_8023ad_handle_slow_pkt(internals, member_id,
 					lacp_pkt[i]);
 	} else {
-		rx_machine_update(internals, slave_id, NULL);
+		rx_machine_update(internals, member_id, NULL);
 	}
 }
 
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	struct port *port;
 	struct rte_eth_link link_info;
-	struct rte_ether_addr slave_addr;
+	struct rte_ether_addr member_addr;
 	struct rte_mbuf *lacp_pkt = NULL;
-	uint16_t slave_id;
+	uint16_t member_id;
 	uint16_t i;
 
 
 	/* Update link status on each port */
-	for (i = 0; i < internals->active_slave_count; i++) {
+	for (i = 0; i < internals->active_member_count; i++) {
 		uint16_t key;
 		int ret;
 
-		slave_id = internals->active_slaves[i];
-		ret = rte_eth_link_get_nowait(slave_id, &link_info);
+		member_id = internals->active_members[i];
+		ret = rte_eth_link_get_nowait(member_id, &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_id, rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				member_id, rte_strerror(-ret));
 		}
 
 		if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			key = 0;
 		}
 
-		rte_eth_macaddr_get(slave_id, &slave_addr);
-		port = &bond_mode_8023ad_ports[slave_id];
+		rte_eth_macaddr_get(member_id, &member_addr);
+		port = &bond_mode_8023ad_ports[member_id];
 
 		key = rte_cpu_to_be_16(key);
 		if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			SM_FLAG_SET(port, NTT);
 		}
 
-		if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
-			rte_ether_addr_copy(&slave_addr, &port->actor.system);
-			if (port->aggregator_port_id == slave_id)
+		if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
+			rte_ether_addr_copy(&member_addr, &port->actor.system);
+			if (port->aggregator_port_id == member_id)
 				SM_FLAG_SET(port, NTT);
 		}
 	}
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		port = &bond_mode_8023ad_ports[member_id];
 
 		if ((port->actor.key &
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			if (retval != 0)
 				lacp_pkt = NULL;
 
-			rx_machine_update(internals, slave_id, lacp_pkt);
+			rx_machine_update(internals, member_id, lacp_pkt);
 		} else {
 			bond_mode_8023ad_dedicated_rxq_process(internals,
-					slave_id);
+					member_id);
 		}
 
-		periodic_machine(internals, slave_id);
-		mux_machine(internals, slave_id);
-		tx_machine(internals, slave_id);
-		selection_logic(internals, slave_id);
+		periodic_machine(internals, member_id);
+		mux_machine(internals, member_id);
+		tx_machine(internals, member_id);
+		selection_logic(internals, member_id);
 
 		SM_FLAG_CLR(port, BEGIN);
-		show_warnings(slave_id);
+		show_warnings(member_id);
 	}
 
 	rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
 }
 
 static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
 {
 	int ret;
 
-	ret = rte_eth_allmulticast_enable(slave_id);
+	ret = rte_eth_allmulticast_enable(member_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable allmulti mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			member_id, rte_strerror(-ret));
 	}
-	if (rte_eth_allmulticast_get(slave_id)) {
+	if (rte_eth_allmulticast_get(member_id)) {
 		RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     member_id);
+		bond_mode_8023ad_ports[member_id].forced_rx_flags =
 				BOND_8023AD_FORCED_ALLMULTI;
 		return 0;
 	}
 
-	ret = rte_eth_promiscuous_enable(slave_id);
+	ret = rte_eth_promiscuous_enable(member_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable promiscuous mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			member_id, rte_strerror(-ret));
 	}
-	if (rte_eth_promiscuous_get(slave_id)) {
+	if (rte_eth_promiscuous_get(member_id)) {
 		RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     member_id);
+		bond_mode_8023ad_ports[member_id].forced_rx_flags =
 				BOND_8023AD_FORCED_PROMISC;
 		return 0;
 	}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
 }
 
 static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
 {
 	int ret;
 
-	switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+	switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
 	case BOND_8023AD_FORCED_ALLMULTI:
-		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
-		ret = rte_eth_allmulticast_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
+		ret = rte_eth_allmulticast_disable(member_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable allmulti mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				member_id, rte_strerror(-ret));
 		break;
 
 	case BOND_8023AD_FORCED_PROMISC:
-		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
-		ret = rte_eth_promiscuous_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
+		ret = rte_eth_promiscuous_disable(member_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable promiscuous mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				member_id, rte_strerror(-ret));
 		break;
 
 	default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
 }
 
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
-				uint16_t slave_id)
+bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
+				uint16_t member_id)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	struct port_params initial = {
 			.system = { { 0 } },
 			.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	struct bond_tx_queue *bd_tx_q;
 	uint16_t q_id;
 
-	/* Given slave mus not be in active list */
-	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
-	internals->active_slave_count, slave_id) == internals->active_slave_count);
+	/* Given member must not be in active list */
+	RTE_ASSERT(find_member_by_id(internals->active_members,
+	internals->active_member_count, member_id) == internals->active_member_count);
 	RTE_SET_USED(internals); /* used only for assert when enabled */
 
 	memcpy(&port->actor, &initial, sizeof(struct port_params));
 	/* Standard requires that port ID must be grater than 0.
 	 * Add 1 do get corresponding port_number */
-	port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+	port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
 
 	memcpy(&port->partner, &initial, sizeof(struct port_params));
 	memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	port->sm_flags = SM_FLAGS_BEGIN;
 
 	/* use this port as aggregator */
-	port->aggregator_port_id = slave_id;
+	port->aggregator_port_id = member_id;
 
-	if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
-		RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
-			     slave_id);
+	if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
+		RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
+			     member_id);
 	}
 
 	timer_cancel(&port->warning_timer);
@@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	RTE_ASSERT(port->rx_ring == NULL);
 	RTE_ASSERT(port->tx_ring == NULL);
 
-	socket_id = rte_eth_dev_socket_id(slave_id);
+	socket_id = rte_eth_dev_socket_id(member_id);
 	if (socket_id == -1)
 		socket_id = rte_socket_id();
 
 	element_size = sizeof(struct slow_protocol_frame) +
 				RTE_PKTMBUF_HEADROOM;
 
-	/* The size of the mempool should be at least:
-	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
-	total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+	/*
+	 * The size of the mempool should be at least:
+	 * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
+	 */
+	total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
 	for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
 		total_tx_desc += bd_tx_q->nb_tx_desc;
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
 	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
 		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
 			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	/* Any memory allocation failure in initialization is critical because
 	 * resources can't be free, so reinitialization is impossible. */
 	if (port->mbuf_pool == NULL) {
-		rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-			slave_id, mem_name, rte_strerror(rte_errno));
+		rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+			member_id, mem_name, rte_strerror(rte_errno));
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
 	port->rx_ring = rte_ring_create(mem_name,
-			rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_MEMBER_RX_PKTS), socket_id, 0);
 
 	if (port->rx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+		rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 
 	/* TX ring is at least one pkt longer to make room for marker packet. */
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
 	port->tx_ring = rte_ring_create(mem_name,
+			rte_align32pow2(BOND_MODE_8023AX_MEMBER_TX_PKTS + 1), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_Member_TX_PKTS + 1), socket_id, 0);
 
 	if (port->tx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+		rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 }
 
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
-		uint16_t slave_id)
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
+		uint16_t member_id)
 {
 	void *pkt = NULL;
 	struct port *port = NULL;
 	uint8_t old_partner_state;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	ACTOR_STATE_CLR(port, AGGREGATION);
 	port->selected = UNSELECTED;
@@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
 	old_partner_state = port->partner_state;
 	record_default(port);
 
-	bond_mode_8023ad_unregister_lacp_mac(slave_id);
+	bond_mode_8023ad_unregister_lacp_mac(member_id);
 
 	/* If partner timeout state changes then disable timer */
 	if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1176,30 @@ void
 bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
-	struct rte_ether_addr slave_addr;
-	struct port *slave, *agg_slave;
-	uint16_t slave_id, i, j;
+	struct rte_ether_addr member_addr;
+	struct port *member, *agg_member;
+	uint16_t member_id, i, j;
 
 	bond_mode_8023ad_stop(bond_dev);
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		slave = &bond_mode_8023ad_ports[slave_id];
-		rte_eth_macaddr_get(slave_id, &slave_addr);
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		member = &bond_mode_8023ad_ports[member_id];
+		rte_eth_macaddr_get(member_id, &member_addr);
 
-		if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+		if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
 			continue;
 
-		rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+		rte_ether_addr_copy(&member_addr, &member->actor.system);
 		/* Do nothing if this port is not an aggregator. In other case
 		 * Set NTT flag on every port that use this aggregator. */
-		if (slave->aggregator_port_id != slave_id)
+		if (member->aggregator_port_id != member_id)
 			continue;
 
-		for (j = 0; j < internals->active_slave_count; j++) {
-			agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
-			if (agg_slave->aggregator_port_id == slave_id)
-				SM_FLAG_SET(agg_slave, NTT);
+		for (j = 0; j < internals->active_member_count; j++) {
+			agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
+			if (agg_member->aggregator_port_id == member_id)
+				SM_FLAG_SET(agg_member, NTT);
 		}
 	}
 
@@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	uint16_t i;
 
-	for (i = 0; i < internals->active_slave_count; i++)
-		bond_mode_8023ad_activate_slave(bond_dev,
-				internals->active_slaves[i]);
+	for (i = 0; i < internals->active_member_count; i++)
+		bond_mode_8023ad_activate_member(bond_dev,
+				internals->active_members[i]);
 
 	return 0;
 }
@@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
 
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				  uint16_t slave_id, struct rte_mbuf *pkt)
+				  uint16_t member_id, struct rte_mbuf *pkt)
 {
 	struct mode8023ad_private *mode4 = &internals->mode4;
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[member_id];
 	struct marker_header *m_hdr;
 	uint64_t marker_timer, old_marker_timer;
 	int retval;
@@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 		} while (unlikely(retval == 0));
 
 		m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
-		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+		rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
 
 		if (internals->mode4.dedicated_queues.enabled == 0) {
 			if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 			}
 		} else {
 			/* Send packet directly to the slow queue */
-			uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+			uint16_t tx_count = rte_eth_tx_prepare(member_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, 1);
-			tx_count = rte_eth_tx_burst(slave_id,
+			tx_count = rte_eth_tx_burst(member_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, tx_count);
 			if (tx_count != 1) {
@@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 				goto free_out;
 			}
 		} else
-			rx_machine_update(internals, slave_id, pkt);
+			rx_machine_update(internals, member_id, pkt);
 	} else {
 		wrn = WRN_UNKNOWN_SLOW_TYPE;
 		goto free_out;
@@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 
 
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *info)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 	bond_dev = &rte_eth_devices[port_id];
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_member_by_id(internals->active_members,
+			internals->active_member_count, member_id) ==
+				internals->active_member_count)
 		return -EINVAL;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	info->selected = port->selected;
 
 	info->actor_state = port->actor_state;
@@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 }
 
 static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 		return -EINVAL;
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_member_by_id(internals->active_members,
+			internals->active_member_count, member_id) ==
+				internals->active_member_count)
 		return -EINVAL;
 
 	mode4 = &internals->mode4;
@@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 }
 
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, member_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	return ACTOR_STATE(port, DISTRIBUTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, member_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 	return ACTOR_STATE(port, COLLECTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, member_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[member_id];
 
 	if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
 		return -EINVAL;
@@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 	struct mode8023ad_private *mode4 = &internals->mode4;
 	struct port *port;
 	void *pkt = NULL;
-	uint16_t i, slave_id;
+	uint16_t i, member_id;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		port = &bond_mode_8023ad_ports[member_id];
 
 		if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
 			struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 			/* This is LACP frame so pass it to rx callback.
 			 * Callback is responsible for freeing mbuf.
 			 */
-			mode4->slowrx_cb(slave_id, lacp_pkt);
+			mode4->slowrx_cb(member_id, lacp_pkt);
 		}
 	}
 
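[Not part of the diff: a minimal application-side sketch for reviewers, assuming the rte_eth_bond_8023ad_ext_* entry points from this series; bond_port_id is a hypothetical application variable. It only illustrates that the external-mode slow-Rx callback now receives the member (formerly slave) port id and owns the mbuf.]

	#include <rte_mbuf.h>
	#include <rte_eth_bond_8023ad.h>

	/* Matches rte_eth_bond_8023ad_ext_slowrx_fn; invoked once per dequeued LACPDU.
	 * The callback owns the mbuf and must free it when done. */
	static void
	app_slowrx_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
	{
		/* An external LACP implementation would parse the frame here and
		 * later transmit its own LACPDU on the same member port with
		 * rte_eth_bond_8023ad_ext_slowtx(bond_port_id, member_id, pkt). */
		(void)member_id;
		rte_pktmbuf_free(lacp_pkt);
	}
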
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00b..3144ee378a 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
 #define MARKER_TLV_TYPE_INFO                0x01
 #define MARKER_TLV_TYPE_RESP                0x02
 
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
 						  struct rte_mbuf *lacp_pkt);
 
 enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
 	uint16_t system_priority;
 	/**< System priority (unused in current implementation) */
 	struct rte_ether_addr system;
-	/**< System ID - Slave MAC address, same as bonding MAC address */
+	/**< System ID - Member MAC address, same as bonding MAC address */
 	uint16_t key;
 	/**< Speed information (implementation dependent) and duplex. */
 	uint16_t port_priority;
 	/**< Priority of this (unused in current implementation) */
 	uint16_t port_number;
-	/**< Port number. It corresponds to slave port id. */
+	/**< Port number. It corresponds to member port id. */
 } __rte_packed __rte_aligned(2);
 
 struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
 	enum rte_bond_8023ad_agg_selection agg_selection;
 };
 
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_member_info {
 	enum rte_bond_8023ad_selection selected;
 	uint8_t actor_state;
 	struct port_params actor;
@@ -184,104 +184,113 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 /**
  * @internal
  *
- * Function returns current state of given slave device.
+ * Function returns current state of given member device.
  *
- * @param slave_id  Port id of valid slave.
+ * @param member_id  Port id of valid member.
  * @param conf		buffer for configuration
  * @return
  *   0 - if ok
- *   -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ *   -EINVAL if conf is NULL or member id is invalid (not a member of given
  *       bonded device or is not inactive).
  */
+__rte_experimental
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *conf);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t member_id,
+		struct rte_eth_bond_8023ad_member_info *conf)
+{
+	return rte_eth_bond_8023ad_member_info(port_id, member_id, conf);
+}
 
 #ifdef __cplusplus
 }
 #endif
 
 /**
- * Configure a slave port to start collecting.
+ * Configure a member port to start collecting.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @param enabled	Non-zero when collection enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
 				int enabled);
 
 /**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from member port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
 
 /**
- * Configure a slave port to start distributing.
+ * Configure a member port to start distributing.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @param enabled	Non-zero when distribution enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
 				int enabled);
 
 /**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from member port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param member_id	Port id of valid member.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if member is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
 
 /**
  * LACPDU transmit path for external 802.3ad state machine.  Caller retains
  * ownership of the packet on failure.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port ID of valid slave device.
+ * @param member_id	Port ID of valid member device.
  * @param lacp_pkt	mbuf containing LACPDU.
  *
  * @return
  *   0 on success, negative value otherwise.
  */
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
 		struct rte_mbuf *lacp_pkt);
 
 /**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on members
  *
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each member for
  * dedicated 802.3ad control plane traffic . A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each member to redirect all LACP slow packets to that rx queue
  * for processing in the LACP state machine, this removes the need to filter
  * these packets in the bonded devices data path. The additional tx queue is
  * used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * member hw independently of the bonded devices data path.
  *
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all members must support the programming of the flow
  * filter rule required for rx and have enough queues that one rx and tx queue
  * can be reserved for the LACP state machines control packets.
  *
@@ -296,7 +305,7 @@ int
 rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
 
 /**
- * Disable slow queue on slaves
+ * Disable slow queue on members
  *
  * This function disables hardware slow packet filter.
  *
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a7971..56945e2349 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
 }
 
 static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_member(struct bond_dev_private *internals)
 {
 	uint16_t idx;
 
-	idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
-	internals->mode6.last_slave = idx;
-	return internals->active_slaves[idx];
+	idx = (internals->mode6.last_member + 1) % internals->active_member_count;
+	internals->mode6.last_member = idx;
+	return internals->active_members[idx];
 }
 
 int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
 	/* Fill hash table with initial values */
 	memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
 	rte_spinlock_init(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_member = ALB_NULL_INDEX;
 	internals->mode6.ntt = 0;
 
 	/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 	/*
 	 * We got reply for ARP Request send by the application. We need to
 	 * update client table when received data differ from what is stored
-	 * in ALB table and issue sending update packet to that slave.
+	 * in ALB table and issue sending update packet to that member.
 	 */
 	rte_spinlock_lock(&internals->mode6.lock);
 	if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		client_info->cli_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_sha,
 				&client_info->cli_mac);
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->member_idx = calculate_member(internals);
+		rte_eth_macaddr_get(client_info->member_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
 						&arp->arp_data.arp_tha,
 						&client_info->cli_mac);
 				}
-				rte_eth_macaddr_get(client_info->slave_idx,
+				rte_eth_macaddr_get(client_info->member_idx,
 						&client_info->app_mac);
 				rte_ether_addr_copy(&client_info->app_mac,
 						&arp->arp_data.arp_sha);
 				memcpy(client_info->vlan, eth_h + 1, offset);
 				client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 				rte_spinlock_unlock(&internals->mode6.lock);
-				return client_info->slave_idx;
+				return client_info->member_idx;
 			}
 		}
 
-		/* Assign new slave to this client and update src mac in ARP */
+		/* Assign new member to this client and update src mac in ARP */
 		client_info->in_use = 1;
 		client_info->ntt = 0;
 		client_info->app_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_tha,
 				&client_info->cli_mac);
 		client_info->cli_ip = arp->arp_data.arp_tip;
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->member_idx = calculate_member(internals);
+		rte_eth_macaddr_get(client_info->member_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_sha);
 		memcpy(client_info->vlan, eth_h + 1, offset);
 		client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 		rte_spinlock_unlock(&internals->mode6.lock);
-		return client_info->slave_idx;
+		return client_info->member_idx;
 	}
 
 	/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 {
 	struct rte_ether_hdr *eth_h;
 	struct rte_arp_hdr *arp_h;
-	uint16_t slave_idx;
+	uint16_t member_idx;
 
 	rte_spinlock_lock(&internals->mode6.lock);
 	eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 	arp_h->arp_plen = sizeof(uint32_t);
 	arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 
-	slave_idx = client_info->slave_idx;
+	member_idx = client_info->member_idx;
 	rte_spinlock_unlock(&internals->mode6.lock);
 
-	return slave_idx;
+	return member_idx;
 }
 
 void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
 
 	int i;
 
-	/* If active slave count is 0, it's pointless to refresh alb table */
-	if (internals->active_slave_count <= 0)
+	/* If active member count is 0, it's pointless to refresh alb table */
+	if (internals->active_member_count <= 0)
 		return;
 
 	rte_spinlock_lock(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_member = ALB_NULL_INDEX;
 
 	for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
 		client_info = &internals->mode6.client_table[i];
 		if (client_info->in_use) {
-			client_info->slave_idx = calculate_slave(internals);
-			rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+			client_info->member_idx = calculate_member(internals);
+			rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
 			internals->mode6.ntt = 1;
 		}
 	}
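[Not part of the diff: the ALB changes keep the same client-to-member assignment policy, now expressed in member terms: calculate_member() walks the active member list round-robin. A standalone sketch of that selection step with hypothetical names; as in bond_mode_alb_client_list_upd(), the caller must guarantee active_member_count > 0.]

	#include <stdint.h>

	/* Return the next active member in round-robin order, advancing *last_idx. */
	static uint16_t
	next_member(uint32_t *last_idx, const uint16_t *active_members,
		    uint16_t active_member_count)
	{
		*last_idx = (*last_idx + 1) % active_member_count;
		return active_members[*last_idx];
	}
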
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc..beb2e619f9 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
 	uint32_t cli_ip;
 	/**< Client IP address */
 
-	uint16_t slave_idx;
-	/**< Index of slave on which we connect with that client */
+	uint16_t member_idx;
+	/**< Index of member on which we connect with that client */
 	uint8_t in_use;
 	/**< Flag indicating if entry in client table is currently used */
 	uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
 	/**< Mempool for creating ARP update packets */
 	uint8_t ntt;
 	/**< Flag indicating if we need to send update to any client on next tx */
-	uint32_t last_slave;
-	/**< Index of last used slave in client table */
+	uint32_t last_member;
+	/**< Index of last used member in client table */
 	rte_spinlock_t lock;
 };
 
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		struct bond_dev_private *internals);
 
 /**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which member
+ * to send that packet. If the packet is an ARP Request, it is sent on the primary member.
+ * If it is an ARP Reply, it is sent on the member stored in the client table for that
  * connection. On Reply function also updates data in client table.
  *
  * @param eth_h			ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_upd(struct client_data *client_info,
 		struct rte_mbuf *pkt, struct bond_dev_private *internals);
 
 /**
- * Function updates slave indexes of active connections.
+ * Function updates member indexes of active connections.
  *
  * @param bond_dev		Pointer to bonded device struct.
  */
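[Not part of the diff: a hypothetical application-side sketch of attaching a member, assuming this series also renames the public helpers to rte_eth_bond_member_add()/rte_eth_bond_member_remove() to match the internal __eth_bond_member_add_lock_free() rename below; those declarations are not visible in this hunk.]

	#include <rte_ethdev.h>
	#include <rte_eth_bond.h>

	static int
	app_attach_member(uint16_t bond_port_id, uint16_t member_port_id)
	{
		int ret = rte_eth_bond_member_add(bond_port_id, member_port_id);
		if (ret < 0)
			return ret;

		/* The member inherits the bonding device configuration when the
		 * bonding port is (re)started. */
		return rte_eth_dev_start(bond_port_id);
	}
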
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b4..b6512a098a 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
 }
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 {
 	int i;
 	struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	/* Check if any of slave devices is a bonded device */
-	for (i = 0; i < internals->slave_count; i++)
-		if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+	/* Check if any of member devices is a bonded device */
+	for (i = 0; i < internals->member_count; i++)
+		if (valid_bonded_port_id(internals->members[i].port_id) == 0)
 			return 1;
 
 	return 0;
 }
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
 {
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
 
-	/* Verify that slave_port_id refers to a non bonded port */
-	if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+	/* Verify that member_port_id refers to a non bonded port */
+	if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
 			internals->mode == BONDING_MODE_8023AD) {
-		RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
-				" mode as slave is also a bonded device, only "
+		RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
+				" mode as member is also a bonded device, only "
 				"physical devices can be support in this mode.");
 		return -1;
 	}
 
-	if (internals->port_id == slave_port_id) {
+	if (internals->port_id == member_port_id) {
 		RTE_BOND_LOG(ERR,
-			"Cannot add the bonded device itself as its slave.");
+			"Cannot add the bonded device itself as its member.");
 		return -1;
 	}
 
@@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
 }
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_member_count;
 
 	if (internals->mode == BONDING_MODE_8023AD)
-		bond_mode_8023ad_activate_slave(eth_dev, port_id);
+		bond_mode_8023ad_activate_member(eth_dev, port_id);
 
 	if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB) {
 
-		internals->tlb_slaves_order[active_count] = port_id;
+		internals->tlb_members_order[active_count] = port_id;
 	}
 
-	RTE_ASSERT(internals->active_slave_count <
-			(RTE_DIM(internals->active_slaves) - 1));
+	RTE_ASSERT(internals->active_member_count <
+			(RTE_DIM(internals->active_members) - 1));
 
-	internals->active_slaves[internals->active_slave_count] = port_id;
-	internals->active_slave_count++;
+	internals->active_members[internals->active_member_count] = port_id;
+	internals->active_member_count++;
 
 	if (internals->mode == BONDING_MODE_TLB)
-		bond_tlb_activate_slave(internals);
+		bond_tlb_activate_member(internals);
 	if (internals->mode == BONDING_MODE_ALB)
 		bond_mode_alb_client_list_upd(eth_dev);
 }
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
-	uint16_t slave_pos;
+	uint16_t member_pos;
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_member_count;
 
 	if (internals->mode == BONDING_MODE_8023AD) {
 		bond_mode_8023ad_stop(eth_dev);
-		bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+		bond_mode_8023ad_deactivate_member(eth_dev, port_id);
 	} else if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB)
 		bond_tlb_disable(internals);
 
-	slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+	member_pos = find_member_by_id(internals->active_members, active_count,
 			port_id);
 
-	/* If slave was not at the end of the list
-	 * shift active slaves up active array list */
-	if (slave_pos < active_count) {
+	/*
+	 * If the member was not at the end of the list,
+	 * shift the remaining active members up the active array.
+	 */
+	if (member_pos < active_count) {
 		active_count--;
-		memmove(internals->active_slaves + slave_pos,
-				internals->active_slaves + slave_pos + 1,
-				(active_count - slave_pos) *
-					sizeof(internals->active_slaves[0]));
+		memmove(internals->active_members + member_pos,
+				internals->active_members + member_pos + 1,
+				(active_count - member_pos) *
+					sizeof(internals->active_members[0]));
 	}
 
-	RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
-	internals->active_slave_count = active_count;
+	RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
+	internals->active_member_count = active_count;
 
 	if (eth_dev->data->dev_started) {
 		if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
 }
 
 static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 			if (unlikely(slab & mask)) {
 				uint16_t vlan_id = pos + i;
 
-				res = rte_eth_dev_vlan_filter(slave_port_id,
+				res = rte_eth_dev_vlan_filter(member_port_id,
 							      vlan_id, 1);
 			}
 		}
@@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
 
 static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
 {
 	struct rte_flow *flow;
 	struct rte_flow_error ferror;
-	uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+	uint16_t member_port_id = internals->members[member_id].port_id;
 
 	if (internals->flow_isolated_valid != 0) {
-		if (rte_eth_dev_stop(slave_port_id) != 0) {
+		if (rte_eth_dev_stop(member_port_id) != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_port_id);
+				     member_port_id);
 			return -1;
 		}
 
-		if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+		if (rte_flow_isolate(member_port_id, internals->flow_isolated,
 		    &ferror)) {
-			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
-				     " %d: %s", slave_id, ferror.message ?
+			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
+				     " %d: %s", member_id, ferror.message ?
 				     ferror.message : "(no stated reason)");
 			return -1;
 		}
 	}
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		flow->flows[slave_id] = rte_flow_create(slave_port_id,
+		flow->flows[member_id] = rte_flow_create(member_port_id,
 							flow->rule.attr,
 							flow->rule.pattern,
 							flow->rule.actions,
 							&ferror);
-		if (flow->flows[slave_id] == NULL) {
-			RTE_BOND_LOG(ERR, "Cannot create flow for slave"
-				     " %d: %s", slave_id,
+		if (flow->flows[member_id] == NULL) {
+			RTE_BOND_LOG(ERR, "Cannot create flow for member"
+				     " %d: %s", member_id,
 				     ferror.message ? ferror.message :
 				     "(no stated reason)");
-			/* Destroy successful bond flows from the slave */
+			/* Destroy successful bond flows from the member */
 			TAILQ_FOREACH(flow, &internals->flow_list, next) {
-				if (flow->flows[slave_id] != NULL) {
-					rte_flow_destroy(slave_port_id,
-							 flow->flows[slave_id],
+				if (flow->flows[member_id] != NULL) {
+					rte_flow_destroy(member_port_id,
+							 flow->flows[member_id],
 							 &ferror);
-					flow->flows[slave_id] = NULL;
+					flow->flows[member_id] = NULL;
 				}
 			}
 			return -1;
@@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	internals->reta_size = di->reta_size;
 	internals->rss_key_len = di->hash_key_size;
 
-	/* Inherit Rx offload capabilities from the first slave device */
+	/* Inherit Rx offload capabilities from the first member device */
 	internals->rx_offload_capa = di->rx_offload_capa;
 	internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
 	internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
 
-	/* Inherit maximum Rx packet size from the first slave device */
+	/* Inherit maximum Rx packet size from the first member device */
 	internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
 
-	/* Inherit default Rx queue settings from the first slave device */
+	/* Inherit default Rx queue settings from the first member device */
 	memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * member devices. Applications may tweak this setting if need be.
 	 */
 	rxconf_i->rx_thresh.pthresh = 0;
 	rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	/* Setting this to zero should effectively enable default values */
 	rxconf_i->rx_free_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all member devices */
 	rxconf_i->rx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
 
-	/* Inherit Tx offload capabilities from the first slave device */
+	/* Inherit Tx offload capabilities from the first member device */
 	internals->tx_offload_capa = di->tx_offload_capa;
 	internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
 
-	/* Inherit default Tx queue settings from the first slave device */
+	/* Inherit default Tx queue settings from the first member device */
 	memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * member devices. Applications may tweak this setting if need be.
 	 */
 	txconf_i->tx_thresh.pthresh = 0;
 	txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 
 	/*
 	 * Setting these parameters to zero assumes that default
-	 * values will be configured implicitly by slave devices.
+	 * values will be configured implicitly by member devices.
 	 */
 	txconf_i->tx_free_thresh = 0;
 	txconf_i->tx_rs_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all member devices */
 	txconf_i->tx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 	internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
 
 	/*
-	 * If at least one slave device suggests enabling this
-	 * setting by default, enable it for all slave devices
+	 * If at least one member device suggests enabling this
+	 * setting by default, enable it for all member devices
 	 * since disabling it may not be necessarily supported.
 	 */
 	if (rxconf->rx_drop_en == 1)
 		rxconf_i->rx_drop_en = 1;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new member device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal rx_queue_offload_capa
 	 * value. Thus, the new internal value of default Rx queue offloads
 	 * has to be masked by rx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new member device.
 	 */
 	rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
 			     internals->rx_queue_offload_capa;
 
 	/*
-	 * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+	 * RETA size is GCD of all members RETA sizes, so, if all sizes will be
 	 * the power of 2, the lower one is GCD
 	 */
 	if (internals->reta_size > di->reta_size)
 		internals->reta_size = di->reta_size;
 	if (internals->rss_key_len > di->hash_key_size) {
-		RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+		RTE_BOND_LOG(WARNING, "member has different rss key size, "
 				"configuring rss may fail");
 		internals->rss_key_len = di->hash_key_size;
 	}
@@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 	internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new member device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal tx_queue_offload_capa
 	 * value. Thus, the new internal value of default Tx queue offloads
 	 * has to be masked by tx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new member device.
 	 */
 	txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
 			     internals->tx_queue_offload_capa;
 }
 
 static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *member_desc_lim)
 {
-	memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+	memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
 }
 
 static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *member_desc_lim)
 {
 	bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
-					slave_desc_lim->nb_max);
+					member_desc_lim->nb_max);
 	bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
-					slave_desc_lim->nb_min);
+					member_desc_lim->nb_min);
 	bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
-					  slave_desc_lim->nb_align);
+					  member_desc_lim->nb_align);
 
 	if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
 	    bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
 	}
 
 	/* Treat maximum number of segments equal to 0 as unspecified */
-	if (slave_desc_lim->nb_seg_max != 0 &&
+	if (member_desc_lim->nb_seg_max != 0 &&
 	    (bond_desc_lim->nb_seg_max == 0 ||
-	     slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
-		bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
-	if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+	     member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+		bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
+	if (member_desc_lim->nb_mtu_seg_max != 0 &&
 	    (bond_desc_lim->nb_mtu_seg_max == 0 ||
-	     slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
-		bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+	     member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+		bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
 
 	return 0;
 }
 
 static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
 {
-	struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+	struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
 	struct bond_dev_private *internals;
 	struct rte_eth_link link_props;
 	struct rte_eth_dev_info dev_info;
@@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
-		RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+	member_eth_dev = &rte_eth_devices[member_port_id];
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_MEMBER) {
+		RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+	ret = rte_eth_dev_info_get(member_port_id, &dev_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port_id, strerror(-ret));
+			__func__, member_port_id, strerror(-ret));
 
 		return ret;
 	}
 	if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
-			     slave_port_id);
+		RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
+			     member_port_id);
 		return -1;
 	}
 
-	slave_add(internals, slave_eth_dev);
+	member_add(internals, member_eth_dev);
 
-	/* We need to store slaves reta_size to be able to synchronize RETA for all
-	 * slave devices even if its sizes are different.
+	/* We need to store the members' reta_size to be able to synchronize RETA for all
+	 * member devices even if their sizes are different.
 	 */
-	internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+	internals->members[internals->member_count].reta_size = dev_info.reta_size;
 
-	if (internals->slave_count < 1) {
-		/* if MAC is not user defined then use MAC of first slave add to
+	if (internals->member_count < 1) {
+		/* if MAC is not user defined then use MAC of first member added to
 		 * bonded device */
 		if (!internals->user_defined_mac) {
 			if (mac_address_set(bonded_eth_dev,
-					    slave_eth_dev->data->mac_addrs)) {
+					    member_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to set MAC address");
 				return -1;
 			}
 		}
 
-		/* Make primary slave */
-		internals->primary_port = slave_port_id;
-		internals->current_primary_port = slave_port_id;
+		/* Make primary member */
+		internals->primary_port = member_port_id;
+		internals->current_primary_port = member_port_id;
 
 		internals->speed_capa = dev_info.speed_capa;
 
-		/* Inherit queues settings from first slave */
-		internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
-		internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+		/* Inherit queues settings from first member */
+		internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
+		internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
 
-		eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
 
-		eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+		eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
 						      &dev_info.rx_desc_lim);
-		eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+		eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
 						      &dev_info.tx_desc_lim);
 	} else {
 		int ret;
 
 		internals->speed_capa &= dev_info.speed_capa;
-		eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
+		eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
-				&internals->rx_desc_lim, &dev_info.rx_desc_lim);
+		ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
+							&dev_info.rx_desc_lim);
 		if (ret != 0)
 			return ret;
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
-				&internals->tx_desc_lim, &dev_info.tx_desc_lim);
+		ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
+							&dev_info.tx_desc_lim);
 		if (ret != 0)
 			return ret;
 	}
@@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
 			internals->flow_type_rss_offloads;
 
-	if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
-			     slave_port_id);
+	if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
+			     member_port_id);
 		return -1;
 	}
 
-	/* Add additional MAC addresses to the slave */
-	if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
-				slave_port_id);
+	/* Add additional MAC addresses to the member */
+	if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
+				member_port_id);
 		return -1;
 	}
 
-	internals->slave_count++;
+	internals->member_count++;
 
 	if (bonded_eth_dev->data->dev_started) {
-		if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
-					slave_port_id);
+		if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
+			internals->member_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
+					member_port_id);
 			return -1;
 		}
-		if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
-					slave_port_id);
+		if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
+			internals->member_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
+					member_port_id);
 			return -1;
 		}
 	}
 
-	/* Update all slave devices MACs */
-	mac_address_slaves_update(bonded_eth_dev);
+	/* Update all member devices MACs */
+	mac_address_members_update(bonded_eth_dev);
 
 	/* Register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
 
-	/* If bonded device is started then we can add the slave to our active
-	 * slave array */
+	/*
+	 * If bonded device is started then we can add the member to our active
+	 * member array.
+	 */
 	if (bonded_eth_dev->data->dev_started) {
-		ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+		ret = rte_eth_link_get_nowait(member_port_id, &link_props);
 		if (ret < 0) {
-			rte_eth_dev_callback_unregister(slave_port_id,
+			rte_eth_dev_callback_unregister(member_port_id,
 					RTE_ETH_EVENT_INTR_LSC,
 					bond_ethdev_lsc_event_callback,
 					&bonded_eth_dev->data->port_id);
-			internals->slave_count--;
+			internals->member_count--;
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_port_id, rte_strerror(-ret));
+				"Member (port %u) link get failed: %s\n",
+				member_port_id, rte_strerror(-ret));
 			return -1;
 		}
 
 		if (link_props.link_status == RTE_ETH_LINK_UP) {
-			if (internals->active_slave_count == 0 &&
+			if (internals->active_member_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
-							slave_port_id);
+							member_port_id);
 		}
 	}
 
-	/* Add slave details to bonded device */
-	slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+	/* Add member details to bonded device */
+	member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_MEMBER;
 
-	slave_vlan_filter_set(bonded_port_id, slave_port_id);
+	member_vlan_filter_set(bonded_port_id, member_port_id);
 
 	return 0;
 
 }
 
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -650,93 +654,95 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
 
 static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
-				   uint16_t slave_port_id)
+__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
+				   uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct rte_flow_error flow_error;
 	struct rte_flow *flow;
-	int i, slave_idx;
+	int i, member_idx;
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) < 0)
+	if (valid_member_port_id(internals, member_port_id) < 0)
 		return -1;
 
-	/* first remove from active slave list */
-	slave_idx = find_slave_by_id(internals->active_slaves,
-		internals->active_slave_count, slave_port_id);
+	/* first remove from active member list */
+	member_idx = find_member_by_id(internals->active_members,
+		internals->active_member_count, member_port_id);
 
-	if (slave_idx < internals->active_slave_count)
-		deactivate_slave(bonded_eth_dev, slave_port_id);
+	if (member_idx < internals->active_member_count)
+		deactivate_member(bonded_eth_dev, member_port_id);
 
-	slave_idx = -1;
-	/* now find in slave list */
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id == slave_port_id) {
-			slave_idx = i;
+	member_idx = -1;
+	/* now find in member list */
+	for (i = 0; i < internals->member_count; i++)
+		if (internals->members[i].port_id == member_port_id) {
+			member_idx = i;
 			break;
 		}
 
-	if (slave_idx < 0) {
-		RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
-				internals->slave_count);
+	if (member_idx < 0) {
+		RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
+				internals->member_count);
 		return -1;
 	}
 
 	/* Un-register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback,
 			&rte_eth_devices[bonded_port_id].data->port_id);
 
-	/* Restore original MAC address of slave device */
-	rte_eth_dev_default_mac_addr_set(slave_port_id,
-			&(internals->slaves[slave_idx].persisted_mac_addr));
+	/* Restore original MAC address of member device */
+	rte_eth_dev_default_mac_addr_set(member_port_id,
+			&internals->members[member_idx].persisted_mac_addr);
 
-	/* remove additional MAC addresses from the slave */
-	slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+	/* remove additional MAC addresses from the member */
+	member_remove_mac_addresses(bonded_eth_dev, member_port_id);
 
 	/*
-	 * Remove bond device flows from slave device.
+	 * Remove bond device flows from member device.
 	 * Note: don't restore flow isolate mode.
 	 */
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		if (flow->flows[slave_idx] != NULL) {
-			rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+		if (flow->flows[member_idx] != NULL) {
+			rte_flow_destroy(member_port_id, flow->flows[member_idx],
 					 &flow_error);
-			flow->flows[slave_idx] = NULL;
+			flow->flows[member_idx] = NULL;
 		}
 	}
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	slave_remove(internals, slave_eth_dev);
-	slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+	member_eth_dev = &rte_eth_devices[member_port_id];
+	member_remove(internals, member_eth_dev);
+	member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_MEMBER);
 
-	/*  first slave in the active list will be the primary by default,
+	/*  first member in the active list will be the primary by default,
 	 *  otherwise use first device in list */
-	if (internals->current_primary_port == slave_port_id) {
-		if (internals->active_slave_count > 0)
-			internals->current_primary_port = internals->active_slaves[0];
-		else if (internals->slave_count > 0)
-			internals->current_primary_port = internals->slaves[0].port_id;
+	if (internals->current_primary_port == member_port_id) {
+		if (internals->active_member_count > 0)
+			internals->current_primary_port = internals->active_members[0];
+		else if (internals->member_count > 0)
+			internals->current_primary_port = internals->members[0].port_id;
 		else
 			internals->primary_port = 0;
-		mac_address_slaves_update(bonded_eth_dev);
+		mac_address_members_update(bonded_eth_dev);
 	}
 
-	if (internals->active_slave_count < 1) {
-		/* if no slaves are any longer attached to bonded device and MAC is not
+	if (internals->active_member_count < 1) {
+		/*
+		 * if no members remain attached to the bonded device and MAC is not
 		 * user defined then clear MAC of bonded device as it will be reset
-		 * when a new slave is added */
-		if (internals->slave_count < 1 && !internals->user_defined_mac)
+		 * when a new member is added.
+		 */
+		if (internals->member_count < 1 && !internals->user_defined_mac)
 			memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
 					sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
 	}
-	if (internals->slave_count == 0) {
+	if (internals->member_count == 0) {
 		internals->rx_offload_capa = 0;
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
@@ -750,7 +756,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 }
 
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -764,7 +770,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -781,7 +787,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 
-	if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+	if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
 			mode == BONDING_MODE_8023AD)
 		return -1;
 
@@ -802,7 +808,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
 }
 
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
 {
 	struct bond_dev_private *internals;
 
@@ -811,13 +817,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_member_port_id(internals, member_port_id) != 0)
 		return -1;
 
 	internals->user_defined_primary_port = 1;
-	internals->primary_port = slave_port_id;
+	internals->primary_port = member_port_id;
 
-	bond_ethdev_primary_set(internals, slave_port_id);
+	bond_ethdev_primary_set(internals, member_port_id);
 
 	return 0;
 }
@@ -832,14 +838,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count < 1)
+	if (internals->member_count < 1)
 		return -1;
 
 	return internals->current_primary_port;
 }
 
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
 			uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -848,22 +854,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (members == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count > len)
+	if (internals->member_count > len)
 		return -1;
 
-	for (i = 0; i < internals->slave_count; i++)
-		slaves[i] = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++)
+		members[i] = internals->members[i].port_id;
 
-	return internals->slave_count;
+	return internals->member_count;
 }
 
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
 		uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -871,18 +877,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (members == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->active_slave_count > len)
+	if (internals->active_member_count > len)
 		return -1;
 
-	memcpy(slaves, internals->active_slaves,
-	internals->active_slave_count * sizeof(internals->active_slaves[0]));
+	memcpy(members, internals->active_members,
+	internals->active_member_count * sizeof(internals->active_members[0]));
 
-	return internals->active_slave_count;
+	return internals->active_member_count;
 }
 
 int
@@ -904,9 +910,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 
 	internals->user_defined_mac = 1;
 
-	/* Update all slave devices MACs*/
-	if (internals->slave_count > 0)
-		return mac_address_slaves_update(bonded_eth_dev);
+	/* Update all member devices MACs*/
+	if (internals->member_count > 0)
+		return mac_address_members_update(bonded_eth_dev);
 
 	return 0;
 }
@@ -925,30 +931,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
 
 	internals->user_defined_mac = 0;
 
-	if (internals->slave_count > 0) {
-		int slave_port;
-		/* Get the primary slave location based on the primary port
-		 * number as, while slave_add(), we will keep the primary
-		 * slave based on slave_count,but not based on the primary port.
+	if (internals->member_count > 0) {
+		int member_port;
+		/* Get the primary member location based on the primary port
+		 * number because, in member_add(), the primary is tracked by
+		 * member index (member_count) and not by the primary port.
 		 */
-		for (slave_port = 0; slave_port < internals->slave_count;
-		     slave_port++) {
-			if (internals->slaves[slave_port].port_id ==
+		for (member_port = 0; member_port < internals->member_count;
+		     member_port++) {
+			if (internals->members[member_port].port_id ==
 			    internals->primary_port)
 				break;
 		}
 
 		/* Set MAC Address of Bonded Device */
 		if (mac_address_set(bonded_eth_dev,
-			&internals->slaves[slave_port].persisted_mac_addr)
+			&internals->members[member_port].persisted_mac_addr)
 				!= 0) {
 			RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
 			return -1;
 		}
-		/* Update all slave devices MAC addresses */
-		return mac_address_slaves_update(bonded_eth_dev);
+		/* Update all member devices MAC addresses */
+		return mac_address_members_update(bonded_eth_dev);
 	}
-	/* No need to update anything as no slaves present */
+	/* No need to update anything as no members present */
 	return 0;
 }
 
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5c..cbc905f700 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
 #include "eth_bond_private.h"
 
 const char *pmd_bond_init_valid_arguments[] = {
-	PMD_BOND_SLAVE_PORT_KVARG,
-	PMD_BOND_PRIMARY_SLAVE_KVARG,
+	PMD_BOND_MEMBER_PORT_KVARG,
+	PMD_BOND_PRIMARY_MEMBER_KVARG,
 	PMD_BOND_MODE_KVARG,
 	PMD_BOND_XMIT_POLICY_KVARG,
 	PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
 }
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
 		const char *value, void *extra_args)
 {
-	struct bond_ethdev_slave_ports *slave_ports;
+	struct bond_ethdev_member_ports *member_ports;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	slave_ports = extra_args;
+	member_ports = extra_args;
 
-	if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+	if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
 		int port_id = parse_port_id(value);
 		if (port_id < 0) {
-			RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+			RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
 				     value);
 			return -1;
 		} else
-			slave_ports->slaves[slave_ports->slave_count++] =
+			member_ports->members[member_ports->member_count++] =
 					port_id;
 	}
 	return 0;
 }
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
 	case BONDING_MODE_ALB:
 		return 0;
 	default:
-		RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+		RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
 		return -1;
 	}
 }
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
 }
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
-	int primary_slave_port_id;
+	int primary_member_port_id;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	primary_slave_port_id = parse_port_id(value);
-	if (primary_slave_port_id < 0)
+	primary_member_port_id = parse_port_id(value);
+	if (primary_member_port_id < 0)
 		return -1;
 
-	*(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+	*(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
 
 	return 0;
 }
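
For context, these parse callbacks are normally run over the device arguments with the
rte_kvargs API; a rough sketch of that wiring, reusing the names from the hunks above
(driver-internal header and key macro assumed, duplicate-key handling simplified):

#include <stdint.h>
#include <rte_kvargs.h>

#include "eth_bond_private.h"	/* kvarg key macros and handler prototypes */

/* Sketch only: fill *primary_port from the primary-member devargs key, if any. */
static int
parse_primary_member(struct rte_kvargs *kvlist, uint16_t *primary_port)
{
	if (rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG) != 1)
		return 0;	/* absent (or repeated): keep the default */

	return rte_kvargs_process(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG,
			bond_ethdev_parse_primary_member_port_id_kvarg,
			primary_port);
}
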
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae7..71a91675f7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_validate(internals->members[i].port_id, attr,
 					patterns, actions, err);
 		if (ret) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for member %d with error %d", i, ret);
 			return ret;
 		}
 	}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				   NULL, rte_strerror(ENOMEM));
 		return NULL;
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		flow->flows[i] = rte_flow_create(internals->members[i].port_id,
 						 attr, patterns, actions, err);
 		if (unlikely(flow->flows[i] == NULL)) {
-			RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+			RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
 				     i);
 			goto err;
 		}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
 	return flow;
 err:
-	/* Destroy all slaves flows. */
-	for (i = 0; i < internals->slave_count; i++) {
+	/* Destroy all member flows. */
+	for (i = 0; i < internals->member_count; i++) {
 		if (flow->flows[i] != NULL)
-			rte_flow_destroy(internals->slaves[i].port_id,
+			rte_flow_destroy(internals->members[i].port_id,
 					 flow->flows[i], err);
 	}
 	bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int i;
 	int ret = 0;
 
-	for (i = 0; i < internals->slave_count; i++) {
+	for (i = 0; i < internals->member_count; i++) {
 		int lret;
 
 		if (unlikely(flow->flows[i] == NULL))
 			continue;
-		lret = rte_flow_destroy(internals->slaves[i].port_id,
+		lret = rte_flow_destroy(internals->members[i].port_id,
 					flow->flows[i], err);
 		if (unlikely(lret != 0)) {
-			RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+			RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
 				     " %d", i, lret);
 			ret = lret;
 		}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	int ret = 0;
 	int lret;
 
-	/* Destroy all bond flows from its slaves instead of flushing them to
+	/* Destroy all bond flows from its members instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
 	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 			ret = lret;
 	}
 	if (unlikely(ret != 0))
-		RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+		RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
 	return ret;
 }
 
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *err)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_flow_query_count slave_count;
+	struct rte_flow_query_count member_count;
 	int i;
 	int ret;
 
 	count->bytes = 0;
 	count->hits = 0;
-	rte_memcpy(&slave_count, count, sizeof(slave_count));
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_query(internals->slaves[i].port_id,
+	rte_memcpy(&member_count, count, sizeof(member_count));
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_query(internals->members[i].port_id,
 				     flow->flows[i], action,
-				     &slave_count, err);
+				     &member_count, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Failed to query flow on"
-				     " slave %d: %d", i, ret);
+				     " member %d: %d", i, ret);
 			return ret;
 		}
-		count->bytes += slave_count.bytes;
-		count->hits += slave_count.hits;
-		slave_count.bytes = 0;
-		slave_count.hits = 0;
+		count->bytes += member_count.bytes;
+		count->hits += member_count.hits;
+		member_count.bytes = 0;
+		member_count.hits = 0;
 	}
 	return 0;
 }
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_flow_isolate(internals->members[i].port_id, set, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for member %d with error %d", i, ret);
 			internals->flow_isolated_valid = 0;
 			return ret;
 		}
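
As the query path above shows, a COUNT query on the bonded port already returns the sum
of the per-member counters, so applications only need to query once. A minimal sketch,
assuming the flow and its COUNT action were created earlier:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <rte_flow.h>

/* Sketch: read the aggregate hit/byte counters of a flow on the bonded port. */
static int
query_bonded_counters(uint16_t bonded_port, struct rte_flow *flow,
		      const struct rte_flow_action *count_action)
{
	struct rte_flow_query_count qc = { .reset = 0 };
	struct rte_flow_error err;

	if (rte_flow_query(bonded_port, flow, count_action, &qc, &err) != 0)
		return -1;

	printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n", qc.hits, qc.bytes);
	return 0;
}
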
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b..0e17febcf6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct bond_dev_private *internals;
 
 	uint16_t num_rx_total = 0;
-	uint16_t slave_count;
-	uint16_t active_slave;
+	uint16_t member_count;
+	uint16_t active_member;
 	int i;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 	internals = bd_rx_q->dev_private;
-	slave_count = internals->active_slave_count;
-	active_slave = bd_rx_q->active_slave;
+	member_count = internals->active_member_count;
+	active_member = bd_rx_q->active_member;
 
-	for (i = 0; i < slave_count && nb_pkts; i++) {
-		uint16_t num_rx_slave;
+	for (i = 0; i < member_count && nb_pkts; i++) {
+		uint16_t num_rx_member;
 
-		/* Offset of pointer to *bufs increases as packets are received
-		 * from other slaves */
-		num_rx_slave =
-			rte_eth_rx_burst(internals->active_slaves[active_slave],
+		/*
+		 * Offset of pointer to *bufs increases as packets are received
+		 * from other members.
+		 */
+		num_rx_member =
+			rte_eth_rx_burst(internals->active_members[active_member],
 					 bd_rx_q->queue_id,
 					 bufs + num_rx_total, nb_pkts);
-		num_rx_total += num_rx_slave;
-		nb_pkts -= num_rx_slave;
-		if (++active_slave >= slave_count)
-			active_slave = 0;
+		num_rx_total += num_rx_member;
+		nb_pkts -= num_rx_member;
+		if (++active_member >= member_count)
+			active_member = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_member >= member_count)
+		bd_rx_q->active_member = 0;
 	return num_rx_total;
 }
 
@@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port) {
-	struct rte_eth_dev_info slave_info;
+		uint16_t member_port) {
+	struct rte_eth_dev_info member_info;
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
@@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
 		}
 	};
 
-	int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+	int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
 			flow_item_8023ad, actions, &error);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
-				__func__, error.message, slave_port,
+		RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
+				__func__, error.message, member_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port, &slave_info);
+	ret = rte_eth_dev_info_get(member_port, &member_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port, strerror(-ret));
+			__func__, member_port, strerror(-ret));
 
 		return ret;
 	}
 
-	if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
-			slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+	if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+			member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
 		RTE_BOND_LOG(ERR,
-			"%s: Slave %d capabilities doesn't allow allocating additional queues",
-			__func__, slave_port);
+			"%s: Member %d capabilities don't allow allocating additional queues",
+			__func__, member_port);
 		return -1;
 	}
 
@@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 	uint16_t idx;
 	int ret;
 
-	/* Verify if all slaves in bonding supports flow director and */
-	if (internals->slave_count > 0) {
+	/* Verify if all members in bonding support flow director */
+	if (internals->member_count > 0) {
 		ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR,
@@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 		internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
 		internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
+		for (idx = 0; idx < internals->member_count; idx++) {
 			if (bond_ethdev_8023ad_flow_verify(bond_dev,
-					internals->slaves[idx].port_id) != 0)
+					internals->members[idx].port_id) != 0)
 				return -1;
 		}
 	}
@@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 }
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
 
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
 		}
 	};
 
-	internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+	internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
 			&flow_attr_8023ad, flow_item_8023ad, actions, &error);
-	if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+	if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
 		RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
-				"(slave_port=%d queue_id=%d)",
-				error.message, slave_port,
+				"(member_port=%d queue_id=%d)",
+				error.message, member_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
@@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	uint16_t slave_count, idx;
+	uint16_t members[RTE_MAX_ETHPORTS];
+	uint16_t member_count, idx;
 
-	uint8_t collecting;  /* current slave collecting status */
+	uint8_t collecting;  /* current member collecting status */
 	const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
 	const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
 	uint8_t subtype;
@@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	uint16_t j;
 	uint16_t k;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during rx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * slave_count);
+	member_count = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * member_count);
 
-	idx = bd_rx_q->active_slave;
-	if (idx >= slave_count) {
-		bd_rx_q->active_slave = 0;
+	idx = bd_rx_q->active_member;
+	if (idx >= member_count) {
+		bd_rx_q->active_member = 0;
 		idx = 0;
 	}
-	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+	for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
 					 COLLECTING);
 
-		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+		/* Read packets from this member */
+		num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 
 			/* Remove packet from array if:
 			 * - it is slow packet but no dedicated rxq is present,
-			 * - slave is not in collecting state,
+			 * - member is not in collecting state,
 			 * - bonding interface is not in promiscuous mode and
 			 *   packet address isn't in mac_addrs array:
 			 *   - packet is unicast,
@@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 				  !allmulti)))) {
 				if (hdr->ether_type == ether_type_slow_be) {
 					bond_mode_8023ad_handle_slow_pkt(
-					    internals, slaves[idx], bufs[j]);
+					    internals, members[idx], bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
 
@@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 			} else
 				j++;
 		}
-		if (unlikely(++idx == slave_count))
+		if (unlikely(++idx == member_count))
 			idx = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_member >= member_count)
+		bd_rx_q->active_member = 0;
 
 	return num_rx_total;
 }
@@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 uint32_t burstnumberRX;
-uint32_t burstnumberTX;
+uint32_t burst_number_TX;
 
 #ifdef RTE_LIBRTE_BOND_DEBUG_ALB
 
@@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
-	uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+	uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
 
-	uint16_t num_of_slaves;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_members;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
-	uint16_t num_tx_total = 0, num_tx_slave;
+	uint16_t num_tx_total = 0, num_tx_member;
 
-	static int slave_idx = 0;
-	int i, cslave_idx = 0, tx_fail_total = 0;
+	static int member_idx;
+	int i, cmember_idx = 0, tx_fail_total = 0;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_members = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * num_of_members);
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return num_tx_total;
 
-	/* Populate slaves mbuf with which packets are to be sent on it  */
+	/* Populate each member's mbuf array with the packets to be sent on it */
 	for (i = 0; i < nb_pkts; i++) {
-		cslave_idx = (slave_idx + i) % num_of_slaves;
-		slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+		cmember_idx = (member_idx + i) % num_of_members;
+		member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
 	}
 
-	/* increment current slave index so the next call to tx burst starts on the
-	 * next slave */
-	slave_idx = ++cslave_idx;
+	/*
+	 * increment current member index so the next call to tx burst starts on the
+	 * next member.
+	 */
+	member_idx = ++cmember_idx;
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < num_of_slaves; i++) {
-		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_prepare(slaves[i],
-					bd_tx_q->queue_id, slave_bufs[i],
-					slave_nb_pkts[i]);
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					slave_bufs[i], num_tx_slave);
+	/* Send packet burst on each member device */
+	for (i = 0; i < num_of_members; i++) {
+		if (member_nb_pkts[i] > 0) {
+			num_tx_member = rte_eth_tx_prepare(members[i],
+					bd_tx_q->queue_id, member_bufs[i],
+					member_nb_pkts[i]);
+			num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
+					member_bufs[i], num_tx_member);
 
 			/* if tx burst fails move packets to end of bufs */
-			if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
-				int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+			if (unlikely(num_tx_member < member_nb_pkts[i])) {
+				int tx_fail_member = member_nb_pkts[i] - num_tx_member;
 
-				tx_fail_total += tx_fail_slave;
+				tx_fail_total += tx_fail_member;
 
 				memcpy(&bufs[nb_pkts - tx_fail_total],
-				       &slave_bufs[i][num_tx_slave],
-				       tx_fail_slave * sizeof(bufs[0]));
+				       &member_bufs[i][num_tx_member],
+				       tx_fail_member * sizeof(bufs[0]));
 			}
-			num_tx_total += num_tx_slave;
+			num_tx_total += num_tx_member;
 		}
 	}
 
@@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	if (internals->active_slave_count < 1)
+	if (internals->active_member_count < 1)
 		return 0;
 
 	nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint32_t hash;
@@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 
 		hash = ether_hash(eth_hdr);
 
-		slaves[i] = (hash ^= hash >> 8) % slave_count;
+		members[i] = (hash ^= hash >> 8) % member_count;
 	}
 }
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	uint16_t i;
 	struct rte_ether_hdr *eth_hdr;
@@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		members[i] = hash % member_count;
 	}
 }
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t member_count, uint16_t *members)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint16_t proto;
@@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		members[i] = hash % member_count;
 	}
 }
 
-struct bwg_slave {
+struct bwg_member {
 	uint64_t bwg_left_int;
 	uint64_t bwg_left_remainder;
-	uint16_t slave;
+	uint16_t member;
 };
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_member(struct bond_dev_private *internals) {
 	int i;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		tlb_last_obytets[internals->active_slaves[i]] = 0;
-	}
+	for (i = 0; i < internals->active_member_count; i++)
+		tlb_last_obytets[internals->active_members[i]] = 0;
 }
 
 static int
 bandwidth_cmp(const void *a, const void *b)
 {
-	const struct bwg_slave *bwg_a = a;
-	const struct bwg_slave *bwg_b = b;
+	const struct bwg_member *bwg_a = a;
+	const struct bwg_member *bwg_b = b;
 	int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
 	int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
 			(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
 
 static void
 bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
-		struct bwg_slave *bwg_slave)
+		struct bwg_member *bwg_member)
 {
 	struct rte_eth_link link_status;
 	int ret;
 
 	ret = rte_eth_link_get_nowait(port_id, &link_status);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+		RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
 			     port_id, rte_strerror(-ret));
 		return;
 	}
@@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
 	if (link_bwg == 0)
 		return;
 	link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
-	bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
-	bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+	bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
+	bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
 }
 
 static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_member_cb(void *arg)
 {
 	struct bond_dev_private *internals = arg;
-	struct rte_eth_stats slave_stats;
-	struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	struct rte_eth_stats member_stats;
+	struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 	uint64_t tx_bytes;
 
 	uint8_t update_stats = 0;
-	uint16_t slave_id;
+	uint16_t member_id;
 	uint16_t i;
 
-	internals->slave_update_idx++;
+	internals->member_update_idx++;
 
 
-	if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+	if (internals->member_update_idx >= REORDER_PERIOD_MS)
 		update_stats = 1;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		rte_eth_stats_get(slave_id, &slave_stats);
-		tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
-		bandwidth_left(slave_id, tx_bytes,
-				internals->slave_update_idx, &bwg_array[i]);
-		bwg_array[i].slave = slave_id;
+	for (i = 0; i < internals->active_member_count; i++) {
+		member_id = internals->active_members[i];
+		rte_eth_stats_get(member_id, &member_stats);
+		tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
+		bandwidth_left(member_id, tx_bytes,
+				internals->member_update_idx, &bwg_array[i]);
+		bwg_array[i].member = member_id;
 
 		if (update_stats) {
-			tlb_last_obytets[slave_id] = slave_stats.obytes;
+			tlb_last_obytets[member_id] = member_stats.obytes;
 		}
 	}
 
 	if (update_stats == 1)
-		internals->slave_update_idx = 0;
+		internals->member_update_idx = 0;
 
-	slave_count = i;
-	qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
-	for (i = 0; i < slave_count; i++)
-		internals->tlb_slaves_order[i] = bwg_array[i].slave;
+	member_count = i;
+	qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
+	for (i = 0; i < member_count; i++)
+		internals->tlb_members_order[i] = bwg_array[i].member;
 
-	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
 			(struct bond_dev_private *)internals);
 }
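
The TLB member reordering above is driven by an EAL alarm that re-arms itself at the end
of every run and is cancelled by bond_tlb_disable(). Stripped of the bonding specifics,
the pattern is roughly the following (names and the period value are illustrative only):

#include <rte_alarm.h>

#define PERIOD_MS 10	/* illustrative; the driver uses REORDER_PERIOD_MS */

/* Sketch of a self-rearming EAL alarm callback. */
static void
periodic_cb(void *arg)
{
	/* ... recompute the per-member transmit order here ... */

	/* Re-arm; rte_eal_alarm_set() takes the delay in microseconds. */
	rte_eal_alarm_set(PERIOD_MS * 1000, periodic_cb, arg);
}

static void
periodic_stop(void *arg)
{
	rte_eal_alarm_cancel(periodic_cb, arg);
}
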
 
@@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint16_t num_tx_total = 0, num_tx_prep;
 	uint16_t i, j;
 
-	uint16_t num_of_slaves = internals->active_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_members = internals->active_member_count;
+	uint16_t members[RTE_MAX_ETHPORTS];
 
 	struct rte_ether_hdr *ether_hdr;
-	struct rte_ether_addr primary_slave_addr;
-	struct rte_ether_addr active_slave_addr;
+	struct rte_ether_addr primary_member_addr;
+	struct rte_ether_addr active_member_addr;
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return num_tx_total;
 
-	memcpy(slaves, internals->tlb_slaves_order,
-				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+	memcpy(members, internals->tlb_members_order,
+				sizeof(internals->tlb_members_order[0]) * num_of_members);
 
 
-	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
 
 	if (nb_pkts > 3) {
 		for (i = 0; i < 3; i++)
 			rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
 	}
 
-	for (i = 0; i < num_of_slaves; i++) {
-		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+	for (i = 0; i < num_of_members; i++) {
+		rte_eth_macaddr_get(members[i], &active_member_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
 				rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			ether_hdr = rte_pktmbuf_mtod(bufs[j],
 						struct rte_ether_hdr *);
 			if (rte_is_same_ether_addr(&ether_hdr->src_addr,
-							&primary_slave_addr))
-				rte_ether_addr_copy(&active_slave_addr,
+							&primary_member_addr))
+				rte_ether_addr_copy(&active_member_addr,
 						&ether_hdr->src_addr);
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
-					mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+					mode6_debug("TX IPv4:", ether_hdr, members[i],
+						&burst_number_TX);
 #endif
 		}
 
-		num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+		num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, nb_pkts - num_tx_total);
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, num_tx_prep);
 
 		if (num_tx_total == nb_pkts)
@@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 void
 bond_tlb_disable(struct bond_dev_private *internals)
 {
-	rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+	rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
 }
 
 void
 bond_tlb_enable(struct bond_dev_private *internals)
 {
-	bond_ethdev_update_tlb_slave_cb(internals);
+	bond_ethdev_update_tlb_member_cb(internals);
 }
 
 static uint16_t
@@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct client_data *client_info;
 
 	/*
-	 * We create transmit buffers for every slave and one additional to send
+	 * We create transmit buffers for every member and one additional to send
 	 * through tlb. In worst case every packet will be send on one port.
 	 */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
-	uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+	uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
 
 	/*
 	 * We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	uint16_t num_send, num_not_send = 0;
 	uint16_t num_tx_total = 0;
-	uint16_t slave_idx;
+	uint16_t member_idx;
 
 	int i, j;
 
@@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		offset = get_vlan_offset(eth_h, &ether_type);
 
 		if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
-			slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+			member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
 
 			/* Change src mac in eth header */
-			rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+			rte_eth_macaddr_get(member_idx, &eth_h->src_addr);
 
-			/* Add packet to slave tx buffer */
-			slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
-			slave_bufs_pkts[slave_idx]++;
+			/* Add packet to member tx buffer */
+			member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
+			member_bufs_pkts[member_idx]++;
 		} else {
 			/* If packet is not ARP, send it with TLB policy */
-			slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+			member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
 					bufs[i];
-			slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+			member_bufs_pkts[RTE_MAX_ETHPORTS]++;
 		}
 	}
 
@@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			client_info = &internals->mode6.client_table[i];
 
 			if (client_info->in_use) {
-				/* Allocate new packet to send ARP update on current slave */
+				/* Allocate new packet to send ARP update on current member */
 				upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
 				if (upd_pkt == NULL) {
 					RTE_BOND_LOG(ERR,
@@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				upd_pkt->data_len = pkt_size;
 				upd_pkt->pkt_len = pkt_size;
 
-				slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+				member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
 						internals);
 
 				/* Add packet to update tx buffer */
-				update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
-				update_bufs_pkts[slave_idx]++;
+				update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
+				update_bufs_pkts[member_idx]++;
 			}
 		}
 		internals->mode6.ntt = 0;
 	}
 
-	/* Send ARP packets on proper slaves */
+	/* Send ARP packets on proper members */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		if (slave_bufs_pkts[i] > 0) {
+		if (member_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
-					slave_bufs[i], slave_bufs_pkts[i]);
+					member_bufs[i], member_bufs_pkts[i]);
 			num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
-					slave_bufs[i], num_send);
-			for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+					member_bufs[i], num_send);
+			for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
 				bufs[nb_pkts - 1 - num_not_send - j] =
-						slave_bufs[i][nb_pkts - 1 - j];
+						member_bufs[i][nb_pkts - 1 - j];
 			}
 
 			num_tx_total += num_send;
-			num_not_send += slave_bufs_pkts[i] - num_send;
+			num_not_send += member_bufs_pkts[i] - num_send;
 
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 	/* Print TX stats including update packets */
-			for (j = 0; j < slave_bufs_pkts[i]; j++) {
-				eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+			for (j = 0; j < member_bufs_pkts[i]; j++) {
+				eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
 							struct rte_ether_hdr *);
-				mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
+				mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
 			}
 #endif
 		}
 	}
 
-	/* Send update packets on proper slaves */
+	/* Send update packets on proper members */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
 		if (update_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			for (j = 0; j < update_bufs_pkts[i]; j++) {
 				eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
 							struct rte_ether_hdr *);
-				mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
+				mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
 			}
 #endif
 		}
 	}
 
 	/* Send non-ARP packets using tlb policy */
-	if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+	if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
 		num_send = bond_ethdev_tx_burst_tlb(queue,
-				slave_bufs[RTE_MAX_ETHPORTS],
-				slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+				member_bufs[RTE_MAX_ETHPORTS],
+				member_bufs_pkts[RTE_MAX_ETHPORTS]);
 
-		for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+		for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
 			bufs[nb_pkts - 1 - num_not_send - j] =
-					slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+					member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
 		}
 
 		num_tx_total += num_send;
@@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 static inline uint16_t
 tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
-		 uint16_t *slave_port_ids, uint16_t slave_count)
+		 uint16_t *member_port_ids, uint16_t member_count)
 {
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	/* Array to sort mbufs for transmission on each slave into */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
-	/* Number of mbufs for transmission on each slave */
-	uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
-	/* Mapping array generated by hash function to map mbufs to slaves */
-	uint16_t bufs_slave_port_idxs[nb_bufs];
+	/* Array to sort mbufs for transmission on each member into */
+	struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+	/* Number of mbufs for transmission on each member */
+	uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+	/* Mapping array generated by hash function to map mbufs to members */
+	uint16_t bufs_member_port_idxs[nb_bufs];
 
-	uint16_t slave_tx_count;
+	uint16_t member_tx_count;
 	uint16_t total_tx_count = 0, total_tx_fail_count = 0;
 
 	uint16_t i;
 
 	/*
-	 * Populate slaves mbuf with the packets which are to be sent on it
-	 * selecting output slave using hash based on xmit policy
+	 * Populate member mbuf arrays with the packets to be sent on each
+	 * member, selecting the output member using a hash based on the xmit policy
 	 */
-	internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
-			bufs_slave_port_idxs);
+	internals->burst_xmit_hash(bufs, nb_bufs, member_count,
+			bufs_member_port_idxs);
 
 	for (i = 0; i < nb_bufs; i++) {
-		/* Populate slave mbuf arrays with mbufs for that slave. */
-		uint16_t slave_idx = bufs_slave_port_idxs[i];
+		/* Populate member mbuf arrays with mbufs for that member. */
+		uint16_t member_idx = bufs_member_port_idxs[i];
 
-		slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+		member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
 	}
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < slave_count; i++) {
-		if (slave_nb_bufs[i] == 0)
+	/* Send packet burst on each member device */
+	for (i = 0; i < member_count; i++) {
+		if (member_nb_bufs[i] == 0)
 			continue;
 
-		slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_nb_bufs[i]);
-		slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_tx_count);
+		member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
+				bd_tx_q->queue_id, member_bufs[i],
+				member_nb_bufs[i]);
+		member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+				bd_tx_q->queue_id, member_bufs[i],
+				member_tx_count);
 
-		total_tx_count += slave_tx_count;
+		total_tx_count += member_tx_count;
 
 		/* If tx burst fails move packets to end of bufs */
-		if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
-			int slave_tx_fail_count = slave_nb_bufs[i] -
-					slave_tx_count;
-			total_tx_fail_count += slave_tx_fail_count;
+		if (unlikely(member_tx_count < member_nb_bufs[i])) {
+			int member_tx_fail_count = member_nb_bufs[i] -
+					member_tx_count;
+			total_tx_fail_count += member_tx_fail_count;
 			memcpy(&bufs[nb_bufs - total_tx_fail_count],
-			       &slave_bufs[i][slave_tx_count],
-			       slave_tx_fail_count * sizeof(bufs[0]));
+			       &member_bufs[i][member_tx_count],
+			       member_tx_fail_count * sizeof(bufs[0]));
 		}
 	}
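
The hunk above shows the core of the balance TX path: each mbuf is hashed to a member, per-member bursts are sent, and whatever a member could not transmit is copied to the tail of the caller's bufs[] so the unsent packets stay contiguous. A standalone sketch of that tail compaction (illustrative only; pkt_t stands in for struct rte_mbuf *):

#include <stdint.h>
#include <string.h>

typedef void *pkt_t;	/* stand-in for struct rte_mbuf * */

/* After a member sent tx_count of its nb queued packets, append the unsent
 * remainder to the end of the original bufs[] array, mirroring the memcpy
 * in tx_burst_balance(). Returns the updated failure total. */
static uint16_t
compact_unsent(pkt_t *bufs, uint16_t nb_bufs, uint16_t total_fail,
	       const pkt_t *member_bufs, uint16_t nb, uint16_t tx_count)
{
	uint16_t fail = nb - tx_count;

	total_fail += fail;
	memcpy(&bufs[nb_bufs - total_fail], &member_bufs[tx_count],
	       fail * sizeof(bufs[0]));
	return total_fail;
}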
 
@@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting
 	 */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	member_count = internals->active_member_count;
+	if (unlikely(member_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
-	return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
-				slave_count);
+	memcpy(member_port_ids, internals->active_members,
+			sizeof(member_port_ids[0]) * member_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
+				member_count);
 }
 
 static inline uint16_t
@@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t member_count;
 
-	uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t dist_slave_count;
+	uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t dist_member_count;
 
-	uint16_t slave_tx_count;
+	uint16_t member_tx_count;
 
 	uint16_t i;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	member_count = internals->active_member_count;
+	if (unlikely(member_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
+	memcpy(member_port_ids, internals->active_members,
+			sizeof(member_port_ids[0]) * member_count);
 
 	if (dedicated_txq)
 		goto skip_tx_ring;
 
 	/* Check for LACP control packets and send if available */
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	for (i = 0; i < member_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
 		struct rte_mbuf *ctrl_pkt = NULL;
 
 		if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 
 		if (rte_ring_dequeue(port->tx_ring,
 				     (void **)&ctrl_pkt) != -ENOENT) {
-			slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+			member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
 					bd_tx_q->queue_id, &ctrl_pkt, 1);
-			slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-					bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+			member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+					bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
 			/*
 			 * re-enqueue LAG control plane packets to buffering
 			 * ring if transmission fails so the packet isn't lost.
 			 */
-			if (slave_tx_count != 1)
+			if (member_tx_count != 1)
 				rte_ring_enqueue(port->tx_ring,	ctrl_pkt);
 		}
 	}
@@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	dist_slave_count = 0;
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	dist_member_count = 0;
+	for (i = 0; i < member_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
 
 		if (ACTOR_STATE(port, DISTRIBUTING))
-			dist_slave_port_ids[dist_slave_count++] =
-					slave_port_ids[i];
+			dist_member_port_ids[dist_member_count++] =
+					member_port_ids[i];
 	}
 
-	if (unlikely(dist_slave_count < 1))
+	if (unlikely(dist_member_count < 1))
 		return 0;
 
-	return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
-				dist_slave_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
+				dist_member_count);
 }
 
 static uint16_t
@@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t members[RTE_MAX_ETHPORTS];
 	uint8_t tx_failed_flag = 0;
-	uint16_t num_of_slaves;
+	uint16_t num_of_members;
 
 	uint16_t max_nb_of_tx_pkts = 0;
 
-	int slave_tx_total[RTE_MAX_ETHPORTS];
-	int i, most_successful_tx_slave = -1;
+	int member_tx_total[RTE_MAX_ETHPORTS];
+	int i, most_successful_tx_member = -1;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy member list to protect against member up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_members = internals->active_member_count;
+	memcpy(members, internals->active_members,
+			sizeof(internals->active_members[0]) * num_of_members);
 
-	if (num_of_slaves < 1)
+	if (num_of_members < 1)
 		return 0;
 
 	/* It is rare to bond different PMDs together, so just call tx-prepare once */
-	nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+	nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
 
 	/* Increment reference count on mbufs */
 	for (i = 0; i < nb_pkts; i++)
-		rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+		rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
 
-	/* Transmit burst on each active slave */
-	for (i = 0; i < num_of_slaves; i++) {
-		slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+	/* Transmit burst on each active member */
+	for (i = 0; i < num_of_members; i++) {
+		member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
 					bufs, nb_pkts);
 
-		if (unlikely(slave_tx_total[i] < nb_pkts))
+		if (unlikely(member_tx_total[i] < nb_pkts))
 			tx_failed_flag = 1;
 
-		/* record the value and slave index for the slave which transmits the
+		/* record the value and member index for the member which transmits the
 		 * maximum number of packets */
-		if (slave_tx_total[i] > max_nb_of_tx_pkts) {
-			max_nb_of_tx_pkts = slave_tx_total[i];
-			most_successful_tx_slave = i;
+		if (member_tx_total[i] > max_nb_of_tx_pkts) {
+			max_nb_of_tx_pkts = member_tx_total[i];
+			most_successful_tx_member = i;
 		}
 	}
 
-	/* if slaves fail to transmit packets from burst, the calling application
+	/* if members fail to transmit packets from burst, the calling application
 	 * is not expected to know about multiple references to packets so we must
-	 * handle failures of all packets except those of the most successful slave
+	 * handle failures of all packets except those of the most successful member
 	 */
 	if (unlikely(tx_failed_flag))
-		for (i = 0; i < num_of_slaves; i++)
-			if (i != most_successful_tx_slave)
-				while (slave_tx_total[i] < nb_pkts)
-					rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+		for (i = 0; i < num_of_members; i++)
+			if (i != most_successful_tx_member)
+				while (member_tx_total[i] < nb_pkts)
+					rte_pktmbuf_free(bufs[member_tx_total[i]++]);
 
 	return max_nb_of_tx_pkts;
 }
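
For reference, the broadcast path above bumps every mbuf's refcount by (num_of_members - 1), transmits the same burst on each member, and then has every member except the most successful one free its own unsent references. A simplified sketch of that clean-up, with the free routine passed in so the snippet stays self-contained:

#include <stdint.h>

/* tx_total[i] is how many of the nb_pkts packets member i sent; "best" is
 * the index of the member that sent the most. Every other member drops its
 * own unsent references, as in the loop above. */
static void
drop_unsent_refs(void **bufs, uint16_t nb_pkts, int *tx_total,
		 uint16_t num_members, int best, void (*free_ref)(void *))
{
	for (uint16_t i = 0; i < num_members; i++) {
		if (i == (uint16_t)best)
			continue;
		while (tx_total[i] < nb_pkts)
			free_ref(bufs[tx_total[i]++]);
	}
}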
 
 static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
 		/**
 		 * If in mode 4 then save the link properties of the first
-		 * slave, all subsequent slaves must match these properties
+		 * member, all subsequent members must match these properties
 		 */
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
 
-		bond_link->link_autoneg = slave_link->link_autoneg;
-		bond_link->link_duplex = slave_link->link_duplex;
-		bond_link->link_speed = slave_link->link_speed;
+		bond_link->link_autoneg = member_link->link_autoneg;
+		bond_link->link_duplex = member_link->link_duplex;
+		bond_link->link_speed = member_link->link_speed;
 	} else {
 		/**
 		 * In any other mode the link properties are set to default
@@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 
 static int
 link_properties_valid(struct rte_eth_dev *ethdev,
-		struct rte_eth_link *slave_link)
+		struct rte_eth_link *member_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
 
-		if (bond_link->link_duplex != slave_link->link_duplex ||
-			bond_link->link_autoneg != slave_link->link_autoneg ||
-			bond_link->link_speed != slave_link->link_speed)
+		if (bond_link->link_duplex != member_link->link_duplex ||
+			bond_link->link_autoneg != member_link->link_autoneg ||
+			bond_link->link_speed != member_link->link_speed)
 			return -1;
 	}
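
In mode 4 the link properties recorded from the first member (mode4.member_link) act as the reference, and a member whose duplex, autoneg or speed differs is rejected. The check reduces to a three-field comparison, sketched here with a local struct standing in for the used rte_eth_link fields:

#include <stdbool.h>
#include <stdint.h>

struct link_props {	/* stand-in for the rte_eth_link fields compared above */
	uint32_t speed;
	uint16_t duplex;
	uint16_t autoneg;
};

static bool
link_props_match(const struct link_props *bond, const struct link_props *member)
{
	return bond->link_props_matching_helper_unused = 0, /* no-op placeholder removed below */
	       bond->duplex == member->duplex &&
	       bond->autoneg == member->autoneg &&
	       bond->speed == member->speed;
}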
 
@@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
 static const struct rte_ether_addr null_mac_addr;
 
 /*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the member
  */
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id)
 {
 	int i, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+		ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i > 0; i--)
-				rte_eth_dev_mac_addr_remove(slave_port_id,
+				rte_eth_dev_mac_addr_remove(member_port_id,
 					&bonded_eth_dev->data->mac_addrs[i]);
 			return ret;
 		}
@@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 /*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the member
  */
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t member_port_id)
 {
 	int i, rc, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+		ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
 		/* save only the first error */
 		if (ret < 0 && rc == 0)
 			rc = ret;
@@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
 {
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 	bool set;
 	int i;
 
-	/* Update slave devices MAC addresses */
-	if (internals->slave_count < 1)
+	/* Update member devices MAC addresses */
+	if (internals->member_count < 1)
 		return -1;
 
 	switch (internals->mode) {
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
-		for (i = 0; i < internals->slave_count; i++) {
+		for (i = 0; i < internals->member_count; i++) {
 			if (rte_eth_dev_default_mac_addr_set(
-					internals->slaves[i].port_id,
+					internals->members[i].port_id,
 					bonded_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-						internals->slaves[i].port_id);
+						internals->members[i].port_id);
 				return -1;
 			}
 		}
@@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 	case BONDING_MODE_ALB:
 	default:
 		set = true;
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id ==
+		for (i = 0; i < internals->member_count; i++) {
+			if (internals->members[i].port_id ==
 					internals->current_primary_port) {
 				if (rte_eth_dev_default_mac_addr_set(
 						internals->current_primary_port,
@@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 				}
 			} else {
 				if (rte_eth_dev_default_mac_addr_set(
-						internals->slaves[i].port_id,
-						&internals->slaves[i].persisted_mac_addr)) {
+						internals->members[i].port_id,
+						&internals->members[i].persisted_mac_addr)) {
 					RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-							internals->slaves[i].port_id);
+							internals->members[i].port_id);
 				}
 			}
 		}
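
What the switch above implements: round robin, balance and broadcast push the bonded MAC to every member, while the remaining modes program it only on the current primary and leave the other members with the MAC persisted when they were added. A compact sketch of that decision (the enum is local to the example, not the driver's):

#include <stdbool.h>

enum bond_mode_ex {	/* simplified local copy of the relevant modes */
	EX_ROUND_ROBIN, EX_ACTIVE_BACKUP, EX_BALANCE, EX_BROADCAST, EX_OTHER
};

/* Returns true when the bonded MAC must be pushed to every member,
 * false when only the primary should carry it. */
static bool
bond_mac_goes_to_all(enum bond_mode_ex mode)
{
	switch (mode) {
	case EX_ROUND_ROBIN:
	case EX_BALANCE:
	case EX_BROADCAST:
		return true;
	default:
		return false;
	}
}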
@@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
 
 
 static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	int errval = 0;
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
-	struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+	struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
 
 	if (port->slow_pool == NULL) {
 		char mem_name[256];
-		int slave_id = slave_eth_dev->data->port_id;
+		int member_id = member_eth_dev->data->port_id;
 
-		snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
-				slave_id);
+		snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
+				member_id);
 		port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
 			250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-			slave_eth_dev->data->numa_node);
+			member_eth_dev->data->numa_node);
 
 		/* Any memory allocation failure in initialization is critical because
 		 * resources can't be freed, so reinitialization is impossible. */
 		if (port->slow_pool == NULL) {
-			rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-				slave_id, mem_name, rte_strerror(rte_errno));
+			rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+				member_id, mem_name, rte_strerror(rte_errno));
 		}
 	}
 
 	if (internals->mode4.dedicated_queues.enabled == 1) {
 		/* Configure slow Rx queue */
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.rx_qid, 128,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_eth_dev->data->port_id),
 				NULL, port->slow_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id,
+					member_eth_dev->data->port_id,
 					internals->mode4.dedicated_queues.rx_qid,
 					errval);
 			return errval;
 		}
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid, 512,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_eth_dev->data->port_id),
 				NULL);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id,
+				member_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				errval);
 			return errval;
@@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
@@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 
-	/* Stop slave */
-	errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+	/* Stop member */
+	errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
 	if (errval != 0)
 		RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
-			     slave_eth_dev->data->port_id, errval);
+			     member_eth_dev->data->port_id, errval);
 
-	/* Enable interrupts on slave device if supported */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+	/* Enable interrupts on member device if supported */
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+		member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
-	/* If RSS is enabled for bonding, try to enable it for slaves  */
+	/* If RSS is enabled for bonding, try to enable it for members */
 	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
 					internals->rss_key;
 
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
 				bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		member_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	} else {
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+		member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+		member_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	}
 
-	slave_eth_dev->data->dev_conf.rxmode.mtu =
+	member_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
-	slave_eth_dev->data->dev_conf.link_speeds =
+	member_eth_dev->data->dev_conf.link_speeds =
 			bonded_eth_dev->data->dev_conf.link_speeds;
 
-	slave_eth_dev->data->dev_conf.txmode.offloads =
+	member_eth_dev->data->dev_conf.txmode.offloads =
 			bonded_eth_dev->data->dev_conf.txmode.offloads;
 
-	slave_eth_dev->data->dev_conf.rxmode.offloads =
+	member_eth_dev->data->dev_conf.rxmode.offloads =
 			bonded_eth_dev->data->dev_conf.rxmode.offloads;
 
 	nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* Configure device */
-	errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
 			nb_rx_queues, nb_tx_queues,
-			&(slave_eth_dev->data->dev_conf));
+			&member_eth_dev->data->dev_conf);
 	if (errval != 0) {
-		RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+		RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
+				member_eth_dev->data->port_id, errval);
 		return errval;
 	}
 
-	errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
 				     bonded_eth_dev->data->mtu);
 	if (errval != 0 && errval != -ENOTSUP) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_eth_dev->data->port_id, errval);
 		return errval;
 	}
 	return 0;
 }
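
member_configure() above makes the member mirror the bonded device before reconfiguration: RSS key/hash and mq_mode, MTU, link_speeds and the rx/tx offloads are all copied from the bonded port, which is then used to size the member's queues. A reduced sketch of the inheritance step over a trimmed-down config struct (fields named for the example only):

#include <stdint.h>

struct cfg_ex {		/* only the fields copied in member_configure() */
	uint32_t mtu;
	uint32_t link_speeds;
	uint64_t rx_offloads;
	uint64_t tx_offloads;
	uint64_t rss_hf;
	uint32_t mq_mode;
};

static void
inherit_bond_config(struct cfg_ex *member, const struct cfg_ex *bonded,
		    int rss_enabled)
{
	member->mtu = bonded->mtu;
	member->link_speeds = bonded->link_speeds;
	member->rx_offloads = bonded->rx_offloads;
	member->tx_offloads = bonded->tx_offloads;
	member->mq_mode = bonded->mq_mode;
	/* RSS hash functions are only propagated when RSS is on for the bond */
	member->rss_hf = rss_enabled ? bonded->rss_hf : 0;
}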
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+member_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *member_eth_dev)
 {
 	int errval = 0;
 	struct bond_rx_queue *bd_rx_q;
@@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	uint16_t q_id;
 	struct rte_flow_error flow_error;
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+	uint16_t member_port_id = member_eth_dev->data->port_id;
 
 	/* Setup Rx Queues */
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
 		bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_rx_queue_setup(member_port_id, q_id,
 				bd_rx_q->nb_rx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_port_id),
 				&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id, q_id, errval);
+					member_port_id, q_id, errval);
 			return errval;
 		}
 	}
@@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_tx_queue_setup(member_port_id, q_id,
 				bd_tx_q->nb_tx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(member_port_id),
 				&bd_tx_q->tx_conf);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id, q_id, errval);
+				member_port_id, q_id, errval);
 			return errval;
 		}
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
-		if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+		if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
 				!= 0)
 			return errval;
 
 		errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				member_port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 			return errval;
 		}
 
-		if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
-			errval = rte_flow_destroy(slave_eth_dev->data->port_id,
-					internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+		if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+			errval = rte_flow_destroy(member_port_id,
+					internals->mode4.dedicated_queues.flow[member_port_id],
 					&flow_error);
 			RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 		}
 	}
 
 	/* Start device */
-	errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+	errval = rte_eth_dev_start(member_port_id);
 	if (errval != 0) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 		return -1;
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
 		errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				member_port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				member_port_id, errval);
 			return errval;
 		}
 	}
@@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 
 		internals = bonded_eth_dev->data->dev_private;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+		for (i = 0; i < internals->member_count; i++) {
+			if (internals->members[i].port_id == member_port_id) {
 				errval = rte_eth_dev_rss_reta_update(
-						slave_eth_dev->data->port_id,
+						member_port_id,
 						&internals->reta_conf[0],
-						internals->slaves[i].reta_size);
+						internals->members[i].reta_size);
 				if (errval != 0) {
 					RTE_BOND_LOG(WARNING,
-						     "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+						     "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
 						     " RSS Configuration for bonding may be inconsistent.",
-						     slave_eth_dev->data->port_id, errval);
+						     member_port_id, errval);
 				}
 				break;
 			}
 		}
 	}
 
-	/* If lsc interrupt is set, check initial slave's link status */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
-		slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
-		bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+	/* If lsc interrupt is set, check initial member's link status */
+	if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+		member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
+		bond_ethdev_lsc_event_callback(member_port_id,
 			RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
 			NULL);
 	}
@@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 }
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+member_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev)
 {
 	uint16_t i;
 
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id ==
-				slave_eth_dev->data->port_id)
+	for (i = 0; i < internals->member_count; i++)
+		if (internals->members[i].port_id ==
+				member_eth_dev->data->port_id)
 			break;
 
-	if (i < (internals->slave_count - 1)) {
+	if (i < (internals->member_count - 1)) {
 		struct rte_flow *flow;
 
-		memmove(&internals->slaves[i], &internals->slaves[i + 1],
-				sizeof(internals->slaves[0]) *
-				(internals->slave_count - i - 1));
+		memmove(&internals->members[i], &internals->members[i + 1],
+				sizeof(internals->members[0]) *
+				(internals->member_count - i - 1));
 		TAILQ_FOREACH(flow, &internals->flow_list, next) {
 			memmove(&flow->flows[i], &flow->flows[i + 1],
 				sizeof(flow->flows[0]) *
-				(internals->slave_count - i - 1));
-			flow->flows[internals->slave_count - 1] = NULL;
+				(internals->member_count - i - 1));
+			flow->flows[internals->member_count - 1] = NULL;
 		}
 	}
 
-	internals->slave_count--;
+	internals->member_count--;
 
-	/* force reconfiguration of slave interfaces */
-	rte_eth_dev_internal_reset(slave_eth_dev);
+	/* force reconfiguration of member interfaces */
+	rte_eth_dev_internal_reset(member_eth_dev);
 }
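
member_remove() above keeps internals->members dense: unless the departing port is the last entry, the tail of the array (and of each rte_flow's per-member table) is shifted down one slot before member_count is decremented. The array part in isolation (member_ex is a trimmed stand-in for struct bond_member_details):

#include <stdint.h>
#include <string.h>

typedef struct { uint16_t port_id; } member_ex;

/* Precondition: idx refers to an existing entry, so *count >= 1. */
static void
remove_member_at(member_ex *members, uint16_t *count, uint16_t idx)
{
	if (idx < (uint16_t)(*count - 1))
		memmove(&members[idx], &members[idx + 1],
			sizeof(members[0]) * (*count - idx - 1));
	(*count)--;
}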
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_member_link_status_change_monitor(void *cb_arg);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+member_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *member_eth_dev)
 {
-	struct bond_slave_details *slave_details =
-			&internals->slaves[internals->slave_count];
+	struct bond_member_details *member_details =
+			&internals->members[internals->member_count];
 
-	slave_details->port_id = slave_eth_dev->data->port_id;
-	slave_details->last_link_status = 0;
+	member_details->port_id = member_eth_dev->data->port_id;
+	member_details->last_link_status = 0;
 
-	/* Mark slave devices that don't support interrupts so we can
+	/* Mark member devices that don't support interrupts so we can
 	 * compensate when we start the bond
 	 */
-	if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
-		slave_details->link_status_poll_enabled = 1;
-	}
+	if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
+		member_details->link_status_poll_enabled = 1;
 
-	slave_details->link_status_wait_to_complete = 0;
+	member_details->link_status_wait_to_complete = 0;
 	/* clean tlb_last_obytes when adding port for bonding device */
-	memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+	memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
 			sizeof(struct rte_ether_addr));
 }
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id)
+		uint16_t member_port_id)
 {
 	int i;
 
-	if (internals->active_slave_count < 1)
-		internals->current_primary_port = slave_port_id;
+	if (internals->active_member_count < 1)
+		internals->current_primary_port = member_port_id;
 	else
-		/* Search bonded device slave ports for new proposed primary port */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			if (internals->active_slaves[i] == slave_port_id)
-				internals->current_primary_port = slave_port_id;
+		/* Search bonded device member ports for new proposed primary port */
+		for (i = 0; i < internals->active_member_count; i++) {
+			if (internals->active_members[i] == member_port_id)
+				internals->current_primary_port = member_port_id;
 		}
 }
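
bond_ethdev_primary_set() above accepts the requested port unconditionally while no member is active, and otherwise only when the port appears in the active member list. An equivalent standalone helper:

#include <stdint.h>

static uint16_t
pick_primary(const uint16_t *active, uint16_t active_count,
	     uint16_t current_primary, uint16_t requested)
{
	if (active_count < 1)
		return requested;
	for (uint16_t i = 0; i < active_count; i++)
		if (active[i] == requested)
			return requested;
	return current_primary;	/* requested port not active: keep primary */
}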
 
@@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	struct bond_dev_private *internals;
 	int i;
 
-	/* slave eth dev will be started by bonded device */
+	/* member eth dev will be started by bonded device */
 	if (check_for_bonded_ethdev(eth_dev)) {
-		RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+		RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
 				eth_dev->data->port_id);
 		return -1;
 	}
@@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	if (internals->slave_count == 0) {
-		RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+	if (internals->member_count == 0) {
+		RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
 		goto out_err;
 	}
 
 	if (internals->user_defined_mac == 0) {
 		struct rte_ether_addr *new_mac_addr = NULL;
 
-		for (i = 0; i < internals->slave_count; i++)
-			if (internals->slaves[i].port_id == internals->primary_port)
-				new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+		for (i = 0; i < internals->member_count; i++)
+			if (internals->members[i].port_id == internals->primary_port)
+				new_mac_addr = &internals->members[i].persisted_mac_addr;
 
 		if (new_mac_addr == NULL)
 			goto out_err;
@@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	}
 
 
-	/* Reconfigure each slave device if starting bonded device */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(eth_dev, slave_ethdev) != 0) {
+	/* Reconfigure each member device if starting bonded device */
+	for (i = 0; i < internals->member_count; i++) {
+		struct rte_eth_dev *member_ethdev =
+				&(rte_eth_devices[internals->members[i].port_id]);
+		if (member_configure(eth_dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to reconfigure slave device (%d)",
+				"bonded port (%d) failed to reconfigure member device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			goto out_err;
 		}
-		if (slave_start(eth_dev, slave_ethdev) != 0) {
+		if (member_start(eth_dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to start slave device (%d)",
+				"bonded port (%d) failed to start member device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			goto out_err;
 		}
-		/* We will need to poll for link status if any slave doesn't
+		/* We will need to poll for link status if any member doesn't
 		 * support interrupts
 		 */
-		if (internals->slaves[i].link_status_poll_enabled)
+		if (internals->members[i].link_status_poll_enabled)
 			internals->link_status_polling_enabled = 1;
 	}
 
@@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	if (internals->link_status_polling_enabled) {
 		rte_eal_alarm_set(
 			internals->link_status_polling_interval_ms * 1000,
-			bond_ethdev_slave_link_status_change_monitor,
+			bond_ethdev_member_link_status_change_monitor,
 			(void *)&rte_eth_devices[internals->port_id]);
 	}
 
-	/* Update all slave devices MACs*/
-	if (mac_address_slaves_update(eth_dev) != 0)
+	/* Update all member devices' MAC addresses */
+	if (mac_address_members_update(eth_dev) != 0)
 		goto out_err;
 
 	if (internals->user_defined_primary_port)
@@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 		bond_mode_8023ad_stop(eth_dev);
 
 		/* Discard all messages to/from mode 4 state machines */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+		for (i = 0; i < internals->active_member_count; i++) {
+			port = &bond_mode_8023ad_ports[internals->active_members[i]];
 
 			RTE_ASSERT(port->rx_ring != NULL);
 			while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 	if (internals->mode == BONDING_MODE_TLB ||
 			internals->mode == BONDING_MODE_ALB) {
 		bond_tlb_disable(internals);
-		for (i = 0; i < internals->active_slave_count; i++)
-			tlb_last_obytets[internals->active_slaves[i]] = 0;
+		for (i = 0; i < internals->active_member_count; i++)
+			tlb_last_obytets[internals->active_members[i]] = 0;
 	}
 
 	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t slave_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++) {
+		uint16_t member_id = internals->members[i].port_id;
 
-		internals->slaves[i].last_link_status = 0;
-		ret = rte_eth_dev_stop(slave_id);
+		internals->members[i].last_link_status = 0;
+		ret = rte_eth_dev_stop(member_id);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_id);
+				     member_id);
 			return ret;
 		}
 
-		/* active slaves need to be deactivated. */
-		if (find_slave_by_id(internals->active_slaves,
-				internals->active_slave_count, slave_id) !=
-					internals->active_slave_count)
-			deactivate_slave(eth_dev, slave_id);
+		/* active members need to be deactivated. */
+		if (find_member_by_id(internals->active_members,
+				internals->active_member_count, member_id) !=
+					internals->active_member_count)
+			deactivate_member(eth_dev, member_id);
 	}
 
 	return 0;
@@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 	/* Flush flows in all back-end devices before removing them */
 	bond_flow_ops.flush(dev, &ferror);
 
-	while (internals->slave_count != skipped) {
-		uint16_t port_id = internals->slaves[skipped].port_id;
+	while (internals->member_count != skipped) {
+		uint16_t port_id = internals->members[skipped].port_id;
 		int ret;
 
 		ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 			continue;
 		}
 
-		if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+		if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to remove port %d from bonded device %s",
 				     port_id, dev->device->name);
@@ -2246,7 +2250,7 @@ static int
 bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct bond_slave_details slave;
+	struct bond_member_details member;
 	int ret;
 
 	uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			RTE_ETHER_MAX_JUMBO_FRAME_LEN;
 
 	/* Max number of tx/rx queues that the bonded device can support is the
-	 * minimum values of the bonded slaves, as all slaves must be capable
+	 * minimum values of the bonded members, as all members must be capable
 	 * of supporting the same number of tx/rx queues.
 	 */
-	if (internals->slave_count > 0) {
-		struct rte_eth_dev_info slave_info;
+	if (internals->member_count > 0) {
+		struct rte_eth_dev_info member_info;
 		uint16_t idx;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
-			slave = internals->slaves[idx];
-			ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+		for (idx = 0; idx < internals->member_count; idx++) {
+			member = internals->members[idx];
+			ret = rte_eth_dev_info_get(member.port_id, &member_info);
 			if (ret != 0) {
 				RTE_BOND_LOG(ERR,
 					"%s: Error during getting device (port %u) info: %s\n",
 					__func__,
-					slave.port_id,
+					member.port_id,
 					strerror(-ret));
 
 				return ret;
 			}
 
-			if (slave_info.max_rx_queues < max_nb_rx_queues)
-				max_nb_rx_queues = slave_info.max_rx_queues;
+			if (member_info.max_rx_queues < max_nb_rx_queues)
+				max_nb_rx_queues = member_info.max_rx_queues;
 
-			if (slave_info.max_tx_queues < max_nb_tx_queues)
-				max_nb_tx_queues = slave_info.max_tx_queues;
+			if (member_info.max_tx_queues < max_nb_tx_queues)
+				max_nb_tx_queues = member_info.max_tx_queues;
 		}
 	}
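
bond_ethdev_info() above advertises the smallest max_rx_queues/max_tx_queues found across the members, since every member must be able to back every bonded queue. The reduction on its own:

#include <stdint.h>

static uint16_t
min_queue_count(const uint16_t *per_member_max, uint16_t member_count)
{
	uint16_t m = UINT16_MAX;	/* same starting value as above */

	for (uint16_t i = 0; i < member_count; i++)
		if (per_member_max[i] < m)
			m = per_member_max[i];
	return m;
}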
 
@@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	uint16_t i;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
-	/* don't do this while a slave is being added */
+	/* don't do this while a member is being added */
 	rte_spinlock_lock(&internals->lock);
 
 	if (on)
@@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	else
 		rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t port_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->member_count; i++) {
+		uint16_t port_id = internals->members[i].port_id;
 
 		res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
 		if (res == ENOTSUP)
 			RTE_BOND_LOG(WARNING,
-				     "Setting VLAN filter on slave port %u not supported.",
+				     "Setting VLAN filter on member port %u not supported.",
 				     port_id);
 	}
 
@@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
 }
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_member_link_status_change_monitor(void *cb_arg)
 {
-	struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+	struct rte_eth_dev *bonded_ethdev, *member_ethdev;
 	struct bond_dev_private *internals;
 
-	/* Default value for polling slave found is true as we don't want to
+	/* Default value for polling member found is true as we don't want to
 	 * disable the polling thread if we cannot get the lock */
-	int i, polling_slave_found = 1;
+	int i, polling_member_found = 1;
 
 	if (cb_arg == NULL)
 		return;
@@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		!internals->link_status_polling_enabled)
 		return;
 
-	/* If device is currently being configured then don't check slaves link
+	/* If device is currently being configured then don't check members link
 	 * status, wait until next period */
 	if (rte_spinlock_trylock(&internals->lock)) {
-		if (internals->slave_count > 0)
-			polling_slave_found = 0;
+		if (internals->member_count > 0)
+			polling_member_found = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (!internals->slaves[i].link_status_poll_enabled)
+		for (i = 0; i < internals->member_count; i++) {
+			if (!internals->members[i].link_status_poll_enabled)
 				continue;
 
-			slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
-			polling_slave_found = 1;
+			member_ethdev = &rte_eth_devices[internals->members[i].port_id];
+			polling_member_found = 1;
 
-			/* Update slave link status */
-			(*slave_ethdev->dev_ops->link_update)(slave_ethdev,
-					internals->slaves[i].link_status_wait_to_complete);
+			/* Update member link status */
+			(*member_ethdev->dev_ops->link_update)(member_ethdev,
+					internals->members[i].link_status_wait_to_complete);
 
 			/* if link status has changed since last checked then call lsc
 			 * event callback */
-			if (slave_ethdev->data->dev_link.link_status !=
-					internals->slaves[i].last_link_status) {
-				bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+			if (member_ethdev->data->dev_link.link_status !=
+					internals->members[i].last_link_status) {
+				bond_ethdev_lsc_event_callback(internals->members[i].port_id,
 						RTE_ETH_EVENT_INTR_LSC,
 						&bonded_ethdev->data->port_id,
 						NULL);
@@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		rte_spinlock_unlock(&internals->lock);
 	}
 
-	if (polling_slave_found)
-		/* Set alarm to continue monitoring link status of slave ethdev's */
+	if (polling_member_found)
+		/* Set alarm to continue monitoring link status of member ethdev's */
 		rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
-				bond_ethdev_slave_link_status_change_monitor, cb_arg);
+				bond_ethdev_member_link_status_change_monitor, cb_arg);
 }
 
 static int
@@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
 
 	struct bond_dev_private *bond_ctx;
-	struct rte_eth_link slave_link;
+	struct rte_eth_link member_link;
 
 	bool one_link_update_succeeded;
 	uint32_t idx;
@@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
-			bond_ctx->active_slave_count == 0) {
+			bond_ctx->active_member_count == 0) {
 		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
@@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	case BONDING_MODE_BROADCAST:
 		/**
 		 * Setting link speed to UINT32_MAX to ensure we pick up the
-		 * value of the first active slave
+		 * value of the first active member
 		 */
 		ethdev->data->dev_link.link_speed = UINT32_MAX;
 
 		/**
-		 * link speed is minimum value of all the slaves link speed as
-		 * packet loss will occur on this slave if transmission at rates
+		 * link speed is the minimum of all the members' link speeds, as
+		 * packet loss will occur on this member if transmission at rates
 		 * greater than this are attempted
 		 */
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					  &slave_link);
+		for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+			ret = link_update(bond_ctx->active_members[idx],
+					  &member_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
 					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Member (port %u) link get failed: %s",
+					bond_ctx->active_members[idx],
 					rte_strerror(-ret));
 				return 0;
 			}
 
-			if (slave_link.link_speed <
+			if (member_link.link_speed <
 					ethdev->data->dev_link.link_speed)
 				ethdev->data->dev_link.link_speed =
-						slave_link.link_speed;
+						member_link.link_speed;
 		}
 		break;
 	case BONDING_MODE_ACTIVE_BACKUP:
-		/* Current primary slave */
-		ret = link_update(bond_ctx->current_primary_port, &slave_link);
+		/* Current primary member */
+		ret = link_update(bond_ctx->current_primary_port, &member_link);
 		if (ret < 0) {
-			RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+			RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
 				bond_ctx->current_primary_port,
 				rte_strerror(-ret));
 			return 0;
 		}
 
-		ethdev->data->dev_link.link_speed = slave_link.link_speed;
+		ethdev->data->dev_link.link_speed = member_link.link_speed;
 		break;
 	case BONDING_MODE_8023AD:
 		ethdev->data->dev_link.link_autoneg =
-				bond_ctx->mode4.slave_link.link_autoneg;
+				bond_ctx->mode4.member_link.link_autoneg;
 		ethdev->data->dev_link.link_duplex =
-				bond_ctx->mode4.slave_link.link_duplex;
+				bond_ctx->mode4.member_link.link_duplex;
 		/* fall through */
 		/* to update link speed */
 	case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	default:
 		/**
 		 * In these modes the maximum theoretical link speed is the sum
-		 * of all the slaves
+		 * of all the members
 		 */
 		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					&slave_link);
+		for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+			ret = link_update(bond_ctx->active_members[idx],
+					&member_link);
 			if (ret < 0) {
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Member (port %u) link get failed: %s",
+					bond_ctx->active_members[idx],
 					rte_strerror(-ret));
 				continue;
 			}
 
 			one_link_update_succeeded = true;
 			ethdev->data->dev_link.link_speed +=
-					slave_link.link_speed;
+					member_link.link_speed;
 		}
 
 		if (!one_link_update_succeeded) {
-			RTE_BOND_LOG(ERR, "All slaves link get failed");
+			RTE_BOND_LOG(ERR, "All members link get failed");
 			return 0;
 		}
 	}
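
bond_ethdev_link_update() above derives the bonded link speed per mode: broadcast reports the slowest active member, active-backup reports the primary, and the load-sharing modes report the sum of the active members' speeds. The two reductions, sketched standalone:

#include <stdint.h>

static uint32_t
bond_speed_min(const uint32_t *speed, uint16_t n)
{
	uint32_t m = UINT32_MAX;

	for (uint16_t i = 0; i < n; i++)
		if (speed[i] < m)
			m = speed[i];
	return n ? m : 0;
}

static uint32_t
bond_speed_sum(const uint32_t *speed, uint16_t n)
{
	uint32_t s = 0;

	for (uint16_t i = 0; i < n; i++)
		s += speed[i];
	return s;
}
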
@@ -2602,27 +2606,27 @@ static int
 bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_eth_stats slave_stats;
+	struct rte_eth_stats member_stats;
 	int i, j;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+	for (i = 0; i < internals->member_count; i++) {
+		rte_eth_stats_get(internals->members[i].port_id, &member_stats);
 
-		stats->ipackets += slave_stats.ipackets;
-		stats->opackets += slave_stats.opackets;
-		stats->ibytes += slave_stats.ibytes;
-		stats->obytes += slave_stats.obytes;
-		stats->imissed += slave_stats.imissed;
-		stats->ierrors += slave_stats.ierrors;
-		stats->oerrors += slave_stats.oerrors;
-		stats->rx_nombuf += slave_stats.rx_nombuf;
+		stats->ipackets += member_stats.ipackets;
+		stats->opackets += member_stats.opackets;
+		stats->ibytes += member_stats.ibytes;
+		stats->obytes += member_stats.obytes;
+		stats->imissed += member_stats.imissed;
+		stats->ierrors += member_stats.ierrors;
+		stats->oerrors += member_stats.oerrors;
+		stats->rx_nombuf += member_stats.rx_nombuf;
 
 		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-			stats->q_ipackets[j] += slave_stats.q_ipackets[j];
-			stats->q_opackets[j] += slave_stats.q_opackets[j];
-			stats->q_ibytes[j] += slave_stats.q_ibytes[j];
-			stats->q_obytes[j] += slave_stats.q_obytes[j];
-			stats->q_errors[j] += slave_stats.q_errors[j];
+			stats->q_ipackets[j] += member_stats.q_ipackets[j];
+			stats->q_opackets[j] += member_stats.q_opackets[j];
+			stats->q_ibytes[j] += member_stats.q_ibytes[j];
+			stats->q_obytes[j] += member_stats.q_obytes[j];
+			stats->q_errors[j] += member_stats.q_errors[j];
 		}
 
 	}
@@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
 	int err;
 	int ret;
 
-	for (i = 0, err = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+	for (i = 0, err = 0; i < internals->member_count; i++) {
+		ret = rte_eth_stats_reset(internals->members[i].port_id);
 		if (ret != 0)
 			err = ret;
 	}
@@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			ret = rte_eth_promiscuous_enable(port_id);
 			if (ret != 0)
@@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
 					BOND_8023AD_FORCED_PROMISC) {
-				slave_ok++;
+				member_ok++;
 				continue;
 			}
 			ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 					"Failed to disable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As promiscuous mode is propagated to all slaves for these
+		/* As promiscuous mode is propagated to all members for these
 		 * mode, no need to update for bonding device.
 		 */
 		break;
@@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As promiscuous mode is propagated only to primary slave
+		/* As promiscuous mode is propagated only to primary member
 		 * for these mode. When active/standby switchover, promiscuous
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary member according to bonding
 		 * device.
 		 */
 		if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			port_id = internals->members[i].port_id;
 
 			ret = rte_eth_allmulticast_enable(port_id);
 			if (ret != 0)
@@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all members */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int member_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			uint16_t port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->member_count; i++) {
+			uint16_t port_id = internals->members[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 					"Failed to disable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				member_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * one member. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (member_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary member */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->member_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As allmulticast mode is propagated to all slaves for these
+		/* As allmulticast mode is propagated to all members for these
 		 * mode, no need to update for bonding device.
 		 */
 		break;
@@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As allmulticast mode is propagated only to primary slave
+		/* As allmulticast mode is propagated only to primary member
 		 * for these mode. When active/standby switchover, allmulticast
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary member according to bonding
 		 * device.
 		 */
 		if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	int ret;
 
 	uint8_t lsc_flag = 0;
-	int valid_slave = 0;
-	uint16_t active_pos, slave_idx;
+	int valid_member = 0;
+	uint16_t active_pos, member_idx;
 	uint16_t i;
 
 	if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	if (!bonded_eth_dev->data->dev_started)
 		return rc;
 
-	/* verify that port_id is a valid slave of bonded port */
-	for (i = 0; i < internals->slave_count; i++) {
-		if (internals->slaves[i].port_id == port_id) {
-			valid_slave = 1;
-			slave_idx = i;
+	/* verify that port_id is a valid member of bonded port */
+	for (i = 0; i < internals->member_count; i++) {
+		if (internals->members[i].port_id == port_id) {
+			valid_member = 1;
+			member_idx = i;
 			break;
 		}
 	}
 
-	if (!valid_slave)
+	if (!valid_member)
 		return rc;
 
 	/* Synchronize lsc callback parallel calls either by real link event
-	 * from the slaves PMDs or by the bonding PMD itself.
+	 * from the member PMDs or by the bonding PMD itself.
 	 */
 	rte_spinlock_lock(&internals->lsc_lock);
 
 	/* Search for port in active port list */
-	active_pos = find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, port_id);
+	active_pos = find_member_by_id(internals->active_members,
+			internals->active_member_count, port_id);
 
 	ret = rte_eth_link_get_nowait(port_id, &link);
 	if (ret < 0)
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+		RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
 
 	if (ret == 0 && link.link_status) {
-		if (active_pos < internals->active_slave_count)
+		if (active_pos < internals->active_member_count)
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
 		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
-					     "for slave %d in bonding mode %d",
+					     "for member %d in bonding mode %d",
 					     port_id, internals->mode);
 		} else {
-			/* inherit slave link properties */
+			/* inherit member link properties */
 			link_properties_set(bonded_eth_dev, &link);
 		}
 
-		/* If no active slave ports then set this port to be
+		/* If no active member ports then set this port to be
 		 * the primary port.
 		 */
-		if (internals->active_slave_count < 1) {
-			/* If first active slave, then change link status */
+		if (internals->active_member_count < 1) {
+			/* If first active member, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
 								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_members_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
 
-		activate_slave(bonded_eth_dev, port_id);
+		activate_member(bonded_eth_dev, port_id);
 
 		/* If the user has defined the primary port then default to
 		 * using it.
@@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 				internals->primary_port == port_id)
 			bond_ethdev_primary_set(internals, port_id);
 	} else {
-		if (active_pos == internals->active_slave_count)
+		if (active_pos == internals->active_member_count)
 			goto link_update;
 
-		/* Remove from active slave list */
-		deactivate_slave(bonded_eth_dev, port_id);
+		/* Remove from active member list */
+		deactivate_member(bonded_eth_dev, port_id);
 
-		if (internals->active_slave_count < 1)
+		if (internals->active_member_count < 1)
 			lsc_flag = 1;
 
-		/* Update primary id, take first active slave from list or if none
+		/* Update primary id, take first active member from list or if none
 		 * available set to -1 */
 		if (port_id == internals->current_primary_port) {
-			if (internals->active_slave_count > 0)
+			if (internals->active_member_count > 0)
 				bond_ethdev_primary_set(internals,
-						internals->active_slaves[0]);
+						internals->active_members[0]);
 			else
 				internals->current_primary_port = internals->primary_port;
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_members_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
@@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 link_update:
 	/**
 	 * Update bonded device link properties after any change to active
-	 * slaves
+	 * members
 	 */
 	bond_ethdev_link_update(bonded_eth_dev, 0);
-	internals->slaves[slave_idx].last_link_status = link.link_status;
+	internals->members[member_idx].last_link_status = link.link_status;
 
 	if (lsc_flag) {
 		/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 {
 	unsigned i, j;
 	int result = 0;
-	int slave_reta_size;
+	int member_reta_size;
 	unsigned reta_count;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
@@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
 				sizeof(internals->reta_conf[0]) * reta_count);
 
-	/* Propagate RETA over slaves */
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_reta_size = internals->slaves[i].reta_size;
-		result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
-				&internals->reta_conf[0], slave_reta_size);
+	/* Propagate RETA over members */
+	for (i = 0; i < internals->member_count; i++) {
+		member_reta_size = internals->members[i].reta_size;
+		result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
+				&internals->reta_conf[0], member_reta_size);
 		if (result < 0)
 			return result;
 	}
@@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
 		bond_rss_conf.rss_key_len = internals->rss_key_len;
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
 				&bond_rss_conf);
 		if (result < 0)
 			return result;
@@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int
 bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mtu_set == NULL) {
 			rte_spinlock_unlock(&internals->lock);
 			return -ENOTSUP;
 		}
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
 		if (ret < 0) {
 			rte_spinlock_unlock(&internals->lock);
 			return ret;
@@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 			struct rte_ether_addr *mac_addr,
 			__rte_unused uint32_t index, uint32_t vmdq)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
-			 *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
+			 *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
 			ret = -ENOTSUP;
 			goto end;
 		}
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++) {
+		ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
 				mac_addr, vmdq);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i >= 0; i--)
 				rte_eth_dev_mac_addr_remove(
-					internals->slaves[i].port_id, mac_addr);
+					internals->members[i].port_id, mac_addr);
 			goto end;
 		}
 	}
@@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 static void
 bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *member_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+	for (i = 0; i < internals->member_count; i++) {
+		member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+		if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
 			goto end;
 	}
 
 	struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
 
-	for (i = 0; i < internals->slave_count; i++)
-		rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+	for (i = 0; i < internals->member_count; i++)
+		rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
 				mac_addr);
 
 end:
@@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
 		fprintf(f, "\n");
 	}
 
-	if (internals->slave_count > 0) {
-		fprintf(f, "\tSlaves (%u): [", internals->slave_count);
-		for (i = 0; i < internals->slave_count - 1; i++)
-			fprintf(f, "%u ", internals->slaves[i].port_id);
+	if (internals->member_count > 0) {
+		fprintf(f, "\tMembers (%u): [", internals->member_count);
+		for (i = 0; i < internals->member_count - 1; i++)
+			fprintf(f, "%u ", internals->members[i].port_id);
 
-		fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+		fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
 	} else {
-		fprintf(f, "\tSlaves: []\n");
+		fprintf(f, "\tMembers: []\n");
 	}
 
-	if (internals->active_slave_count > 0) {
-		fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
-		for (i = 0; i < internals->active_slave_count - 1; i++)
-			fprintf(f, "%u ", internals->active_slaves[i]);
+	if (internals->active_member_count > 0) {
+		fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
+		for (i = 0; i < internals->active_member_count - 1; i++)
+			fprintf(f, "%u ", internals->active_members[i]);
 
-		fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+		fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
 
 	} else {
-		fprintf(f, "\tActive Slaves: []\n");
+		fprintf(f, "\tActive Members: []\n");
 	}
 
 	if (internals->user_defined_primary_port)
 		fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
-	if (internals->slave_count > 0)
+	if (internals->member_count > 0)
 		fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
 }
 
@@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
 }
 
 static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
 {
 	char a_state[256] = { 0 };
 	char p_state[256] = { 0 };
@@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
 static void
 dump_lacp(uint16_t port_id, FILE *f)
 {
-	struct rte_eth_bond_8023ad_slave_info slave_info;
+	struct rte_eth_bond_8023ad_member_info member_info;
 	struct rte_eth_bond_8023ad_conf port_conf;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	int num_active_slaves;
+	uint16_t members[RTE_MAX_ETHPORTS];
+	int num_active_members;
 	int i, ret;
 
 	fprintf(f, "  - Lacp info:\n");
 
-	num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+	num_active_members = rte_eth_bond_active_members_get(port_id, members,
 			RTE_MAX_ETHPORTS);
-	if (num_active_slaves < 0) {
-		fprintf(f, "\tFailed to get active slave list for port %u\n",
+	if (num_active_members < 0) {
+		fprintf(f, "\tFailed to get active member list for port %u\n",
 				port_id);
 		return;
 	}
@@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
 	}
 	dump_lacp_conf(&port_conf, f);
 
-	for (i = 0; i < num_active_slaves; i++) {
-		ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
-				&slave_info);
+	for (i = 0; i < num_active_members; i++) {
+		ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
+				&member_info);
 		if (ret) {
-			fprintf(f, "\tGet slave device %u 8023ad info failed\n",
-				slaves[i]);
+			fprintf(f, "\tGet member device %u 8023ad info failed\n",
+				members[i]);
 			return;
 		}
-		fprintf(f, "\tSlave Port: %u\n", slaves[i]);
-		dump_lacp_slave(&slave_info, f);
+		fprintf(f, "\tMember Port: %u\n", members[i]);
+		dump_lacp_member(&member_info, f);
 	}
 }
 
@@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->link_down_delay_ms = 0;
 	internals->link_up_delay_ms = 0;
 
-	internals->slave_count = 0;
-	internals->active_slave_count = 0;
+	internals->member_count = 0;
+	internals->active_member_count = 0;
 	internals->rx_offload_capa = 0;
 	internals->tx_offload_capa = 0;
 	internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->rx_desc_lim.nb_align = 1;
 	internals->tx_desc_lim.nb_align = 1;
 
-	memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
-	memset(internals->slaves, 0, sizeof(internals->slaves));
+	memset(internals->active_members, 0, sizeof(internals->active_members));
+	memset(internals->members, 0, sizeof(internals->members));
 
 	TAILQ_INIT(&internals->flow_list);
 	internals->flow_isolated_valid = 0;
@@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
 	/* Parse link bonding mode */
 	if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
-				&bond_ethdev_parse_slave_mode_kvarg,
+				&bond_ethdev_parse_member_mode_kvarg,
 				&bonding_mode) != 0) {
 			RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
 					name);
@@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				PMD_BOND_AGG_MODE_KVARG,
-				&bond_ethdev_parse_slave_agg_mode_kvarg,
+				&bond_ethdev_parse_member_agg_mode_kvarg,
 				&agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 					"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
 	RTE_ASSERT(eth_dev->device == &dev->device);
 
 	internals = eth_dev->data->dev_private;
-	if (internals->slave_count != 0)
+	if (internals->member_count != 0)
 		return -EBUSY;
 
 	if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
 	return ret;
 }
 
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the member portids after all the other pdev and vdev
  * have been allocated */
 static int
 bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		if ((link_speeds &
 		    (internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
-			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
 			return -EINVAL;
 		}
 		/*
@@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				       PMD_BOND_AGG_MODE_KVARG,
-				       &bond_ethdev_parse_slave_agg_mode_kvarg,
+				       &bond_ethdev_parse_member_agg_mode_kvarg,
 				       &agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	/* Parse/add slave ports to bonded device */
-	if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
-		struct bond_ethdev_slave_ports slave_ports;
+	/* Parse/add member ports to bonded device */
+	if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
+		struct bond_ethdev_member_ports member_ports;
 		unsigned i;
 
-		memset(&slave_ports, 0, sizeof(slave_ports));
+		memset(&member_ports, 0, sizeof(member_ports));
 
-		if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
-				       &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+		if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
+				       &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to parse slave ports for bonded device %s",
+				     "Failed to parse member ports for bonded device %s",
 				     name);
 			return -1;
 		}
 
-		for (i = 0; i < slave_ports.slave_count; i++) {
-			if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+		for (i = 0; i < member_ports.member_count; i++) {
+			if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
 				RTE_BOND_LOG(ERR,
-					     "Failed to add port %d as slave to bonded device %s",
-					     slave_ports.slaves[i], name);
+					     "Failed to add port %d as member to bonded device %s",
+					     member_ports.members[i], name);
 			}
 		}
 
 	} else {
-		RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+		RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
 		return -1;
 	}
 
-	/* Parse/set primary slave port id*/
-	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+	/* Parse/set primary member port id*/
+	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
 	if (arg_count == 1) {
-		uint16_t primary_slave_port_id;
+		uint16_t primary_member_port_id;
 
 		if (rte_kvargs_process(kvlist,
-				       PMD_BOND_PRIMARY_SLAVE_KVARG,
-				       &bond_ethdev_parse_primary_slave_port_id_kvarg,
-				       &primary_slave_port_id) < 0) {
+				       PMD_BOND_PRIMARY_MEMBER_KVARG,
+				       &bond_ethdev_parse_primary_member_port_id_kvarg,
+				       &primary_member_port_id) < 0) {
 			RTE_BOND_LOG(INFO,
-				     "Invalid primary slave port id specified for bonded device %s",
+				     "Invalid primary member port id specified for bonded device %s",
 				     name);
 			return -1;
 		}
 
 		/* Set balance mode transmit policy*/
-		if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+		if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
 		    != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to set primary slave port %d on bonded device %s",
-				     primary_slave_port_id, name);
+				     "Failed to set primary member port %d on bonded device %s",
+				     primary_member_port_id, name);
 			return -1;
 		}
 	} else if (arg_count > 1) {
 		RTE_BOND_LOG(INFO,
-			     "Primary slave can be specified only once for bonded device %s",
+			     "Primary member can be specified only once for bonded device %s",
 			     name);
 		return -1;
 	}
@@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	/* configure slaves so we can pass mtu setting */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(dev, slave_ethdev) != 0) {
+	/* configure members so we can pass mtu setting */
+	for (i = 0; i < internals->member_count; i++) {
+		struct rte_eth_dev *member_ethdev =
+				&(rte_eth_devices[internals->members[i].port_id]);
+		if (member_configure(dev, member_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to configure slave device (%d)",
+				"bonded port (%d) failed to configure member device (%d)",
 				dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->members[i].port_id);
 			return -1;
 		}
 	}
@@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
 RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
 
 RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
-	"slave=<ifc> "
+	"member=<ifc> "
 	"primary=<ifc> "
 	"mode=[0-6] "
 	"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e..56bc143a89 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -12,8 +12,6 @@ DPDK_23 {
 	rte_eth_bond_8023ad_ext_distrib_get;
 	rte_eth_bond_8023ad_ext_slowtx;
 	rte_eth_bond_8023ad_setup;
-	rte_eth_bond_8023ad_slave_info;
-	rte_eth_bond_active_slaves_get;
 	rte_eth_bond_create;
 	rte_eth_bond_free;
 	rte_eth_bond_link_monitoring_set;
@@ -23,11 +21,18 @@ DPDK_23 {
 	rte_eth_bond_mode_set;
 	rte_eth_bond_primary_get;
 	rte_eth_bond_primary_set;
-	rte_eth_bond_slave_add;
-	rte_eth_bond_slave_remove;
-	rte_eth_bond_slaves_get;
 	rte_eth_bond_xmit_policy_get;
 	rte_eth_bond_xmit_policy_set;
 
 	local: *;
 };
+
+EXPERIMENTAL {
+	# added in 23.07
+	global:
+	rte_eth_bond_8023ad_member_info;
+	rte_eth_bond_active_members_get;
+	rte_eth_bond_member_add;
+	rte_eth_bond_member_remove;
+	rte_eth_bond_members_get;
+};
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39f..90f422ec11 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
 		":%02"PRIx8":%02"PRIx8":%02"PRIx8,	\
 		RTE_ETHER_ADDR_BYTES(&addr))
 
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t members[RTE_MAX_ETHPORTS];
+uint16_t members_count;
 
 static uint16_t BOND_PORT = 0xffff;
 
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
 };
 
 static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 {
 	int retval;
 	uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 		rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
 				"failed (res=%d)\n", BOND_PORT, retval);
 
-	for (i = 0; i < slaves_count; i++) {
-		if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
-			rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
-					slaves[i], BOND_PORT);
+	for (i = 0; i < members_count; i++) {
+		if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
+			rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
+					members[i], BOND_PORT);
 
 	}
 
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 	if (retval < 0)
 		rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
 
-	printf("Waiting for slaves to become active...");
+	printf("Waiting for members to become active...");
 	while (wait_counter) {
-		uint16_t act_slaves[16] = {0};
-		if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
-				slaves_count) {
+		uint16_t act_members[16] = {0};
+		if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
+				members_count) {
 			printf("\n");
 			break;
 		}
 		sleep(1);
 		printf("...");
 		if (--wait_counter == 0)
-			rte_exit(-1, "\nFailed to activate slaves\n");
+			rte_exit(-1, "\nFailed to activate members\n");
 	}
 
 	retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 			"send IP	- sends one ARPrequest through bonding for IP.\n"
 			"start		- starts listening ARPs.\n"
 			"stop		- stops lcore_main.\n"
-			"show		- shows some bond info: ex. active slaves etc.\n"
+			"show		- shows some bond info: ex. active members etc.\n"
 			"help		- prints help.\n"
 			"quit		- terminate all threads and quit.\n"
 		       );
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 			    struct cmdline *cl,
 			    __rte_unused void *data)
 {
-	uint16_t slaves[16] = {0};
+	uint16_t members[16] = {0};
 	uint8_t len = 16;
 	struct rte_ether_addr addr;
 	uint16_t i;
 	int ret;
 
-	for (i = 0; i < slaves_count; i++) {
+	for (i = 0; i < members_count; i++) {
 		ret = rte_eth_macaddr_get(i, &addr);
 		if (ret != 0) {
 			cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 
 	rte_spinlock_lock(&global_flag_stru_p->lock);
 	cmdline_printf(cl,
-			"Active_slaves:%d "
+			"Active_members:%d "
 			"packets received:Tot:%d Arp:%d IPv4:%d\n",
-			rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+			rte_eth_bond_active_members_get(BOND_PORT, members, len),
 			global_flag_stru_p->port_packets[0],
 			global_flag_stru_p->port_packets[1],
 			global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
 	/* initialize all ports */
-	slaves_count = nb_ports;
+	members_count = nb_ports;
 	RTE_ETH_FOREACH_DEV(i) {
-		slave_port_init(i, mbuf_pool);
-		slaves[i] = i;
+		member_port_init(i, mbuf_pool);
+		members[i] = i;
 	}
 
 	bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..85439e3a41 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,13 @@ struct rte_eth_dev_owner {
 #define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE  RTE_BIT32(0)
 /** Device supports link state interrupt */
 #define RTE_ETH_DEV_INTR_LSC              RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE          RTE_BIT32(2)
+/** Device is a bonded member */
+#define RTE_ETH_DEV_BONDED_MEMBER          RTE_BIT32(2)
+/**
+ * @deprecated Use RTE_ETH_DEV_BONDED_MEMBER instead.
+ */
+#define RTE_ETH_DEV_BONDED_SLAVE \
+	RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) RTE_ETH_DEV_BONDED_MEMBER
 /** Device supports device removal interrupt */
 #define RTE_ETH_DEV_INTR_RMV              RTE_BIT32(3)
 /** Device is port representor */
-- 
2.39.1
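
For application code that follows this rename, a minimal usage sketch of the
member-named API is shown below. Only rte_eth_bond_member_add() and
rte_eth_bond_active_members_get() come from this patch; the helper function,
its name and its error handling are illustrative assumptions.

    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* Attach member ports to an already created bonded port, start it and
     * return the number of members reported as active (negative on error).
     */
    static int
    attach_members(uint16_t bond_port, const uint16_t *ports, uint16_t nb_ports)
    {
            uint16_t active[RTE_MAX_ETHPORTS];
            uint16_t i;

            for (i = 0; i < nb_ports; i++) {
                    /* formerly rte_eth_bond_slave_add() */
                    if (rte_eth_bond_member_add(bond_port, ports[i]) != 0)
                            return -1;
            }

            if (rte_eth_dev_start(bond_port) != 0)
                    return -1;

            /* formerly rte_eth_bond_active_slaves_get() */
            return rte_eth_bond_active_members_get(bond_port, active,
                            RTE_MAX_ETHPORTS);
    }

The same substitution applies to rte_eth_bond_members_get() and
rte_eth_bond_member_remove(), which replace the corresponding slave-named
calls listed in the version map above.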


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v2] net/liquidio: remove LiquidIO ethdev driver
  2023-05-08 13:44  1% ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
@ 2023-05-17 15:47  0%   ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-17 15:47 UTC (permalink / raw)
  To: jerinj
  Cc: dev, Thomas Monjalon, Anatoly Burakov, david.marchand, ferruh.yigit

On Mon, May 8, 2023 at 7:15 PM <jerinj@marvell.com> wrote:
>
> From: Jerin Jacob <jerinj@marvell.com>
>
> The LiquidIO product line has been substituted with CN9K/CN10K
> OCTEON product line smart NICs located at drivers/net/octeon_ep/.
>
> DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> because of the absence of updates in the driver.
>
> Due to the above reasons, the driver is removed from DPDK 23.07.
>
> Also removed the deprecation notice entry for this removal from
> doc/guides/rel_notes/deprecation.rst and skipped the removed
> driver library in the ABI check in devtools/libabigail.abignore.
>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> ---
> v2:
> - Skip driver ABI check (Ferruh)
> - Addressed the review comments in
>   http://patches.dpdk.org/project/dpdk/patch/20230428103127.1059989-1-jerinj@marvell.com/ (Ferruh)


Applied to dpdk-next-net-mrvl/for-next-net. Thanks




>
>  MAINTAINERS                              |    8 -
>  devtools/libabigail.abignore             |    1 +
>  doc/guides/nics/features/liquidio.ini    |   29 -
>  doc/guides/nics/index.rst                |    1 -
>  doc/guides/nics/liquidio.rst             |  169 --
>  doc/guides/rel_notes/deprecation.rst     |    7 -
>  doc/guides/rel_notes/release_23_07.rst   |    2 +
>  drivers/net/liquidio/base/lio_23xx_reg.h |  165 --
>  drivers/net/liquidio/base/lio_23xx_vf.c  |  513 ------
>  drivers/net/liquidio/base/lio_23xx_vf.h  |   63 -
>  drivers/net/liquidio/base/lio_hw_defs.h  |  239 ---
>  drivers/net/liquidio/base/lio_mbox.c     |  246 ---
>  drivers/net/liquidio/base/lio_mbox.h     |  102 -
>  drivers/net/liquidio/lio_ethdev.c        | 2147 ----------------------
>  drivers/net/liquidio/lio_ethdev.h        |  179 --
>  drivers/net/liquidio/lio_logs.h          |   58 -
>  drivers/net/liquidio/lio_rxtx.c          | 1804 ------------------
>  drivers/net/liquidio/lio_rxtx.h          |  740 --------
>  drivers/net/liquidio/lio_struct.h        |  661 -------
>  drivers/net/liquidio/meson.build         |   16 -
>  drivers/net/meson.build                  |    1 -
>  21 files changed, 3 insertions(+), 7148 deletions(-)
>  delete mode 100644 doc/guides/nics/features/liquidio.ini
>  delete mode 100644 doc/guides/nics/liquidio.rst
>  delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
>  delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
>  delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
>  delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
>  delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
>  delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
>  delete mode 100644 drivers/net/liquidio/lio_ethdev.c
>  delete mode 100644 drivers/net/liquidio/lio_ethdev.h
>  delete mode 100644 drivers/net/liquidio/lio_logs.h
>  delete mode 100644 drivers/net/liquidio/lio_rxtx.c
>  delete mode 100644 drivers/net/liquidio/lio_rxtx.h
>  delete mode 100644 drivers/net/liquidio/lio_struct.h
>  delete mode 100644 drivers/net/liquidio/meson.build
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8df23e5099..0157c26dd2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -681,14 +681,6 @@ F: drivers/net/thunderx/
>  F: doc/guides/nics/thunderx.rst
>  F: doc/guides/nics/features/thunderx.ini
>
> -Cavium LiquidIO - UNMAINTAINED
> -M: Shijith Thotton <sthotton@marvell.com>
> -M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
> -T: git://dpdk.org/next/dpdk-next-net-mrvl
> -F: drivers/net/liquidio/
> -F: doc/guides/nics/liquidio.rst
> -F: doc/guides/nics/features/liquidio.ini
> -
>  Cavium OCTEON TX
>  M: Harman Kalra <hkalra@marvell.com>
>  T: git://dpdk.org/next/dpdk-next-net-mrvl
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 3ff51509de..c0361bfc7b 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -25,6 +25,7 @@
>  ;
>  ; SKIP_LIBRARY=librte_common_mlx5_glue
>  ; SKIP_LIBRARY=librte_net_mlx4_glue
> +; SKIP_LIBRARY=librte_net_liquidio
>
>  ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>  ; Experimental APIs exceptions ;
> diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
> deleted file mode 100644
> index a8bde282e0..0000000000
> --- a/doc/guides/nics/features/liquidio.ini
> +++ /dev/null
> @@ -1,29 +0,0 @@
> -;
> -; Supported features of the 'LiquidIO' network poll mode driver.
> -;
> -; Refer to default.ini for the full list of available PMD features.
> -;
> -[Features]
> -Speed capabilities   = Y
> -Link status          = Y
> -Link status event    = Y
> -MTU update           = Y
> -Scattered Rx         = Y
> -Promiscuous mode     = Y
> -Allmulticast mode    = Y
> -RSS hash             = Y
> -RSS key update       = Y
> -RSS reta update      = Y
> -VLAN filter          = Y
> -CRC offload          = Y
> -VLAN offload         = P
> -L3 checksum offload  = Y
> -L4 checksum offload  = Y
> -Inner L3 checksum    = Y
> -Inner L4 checksum    = Y
> -Basic stats          = Y
> -Extended stats       = Y
> -Multiprocess aware   = Y
> -Linux                = Y
> -x86-64               = Y
> -Usage doc            = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 5c9d1edf5e..31296822e5 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -44,7 +44,6 @@ Network Interface Controller Drivers
>      ipn3ke
>      ixgbe
>      kni
> -    liquidio
>      mana
>      memif
>      mlx4
> diff --git a/doc/guides/nics/liquidio.rst b/doc/guides/nics/liquidio.rst
> deleted file mode 100644
> index f893b3b539..0000000000
> --- a/doc/guides/nics/liquidio.rst
> +++ /dev/null
> @@ -1,169 +0,0 @@
> -..  SPDX-License-Identifier: BSD-3-Clause
> -    Copyright(c) 2017 Cavium, Inc
> -
> -LiquidIO VF Poll Mode Driver
> -============================
> -
> -The LiquidIO VF PMD library (**librte_net_liquidio**) provides poll mode driver support for
> -Cavium LiquidIO® II server adapter VFs. PF management and VF creation can be
> -done using kernel driver.
> -
> -More information can be found at `Cavium Official Website
> -<http://cavium.com/LiquidIO_Adapters.html>`_.
> -
> -Supported LiquidIO Adapters
> ------------------------------
> -
> -- LiquidIO II CN2350 210SV/225SV
> -- LiquidIO II CN2350 210SVPT
> -- LiquidIO II CN2360 210SV/225SV
> -- LiquidIO II CN2360 210SVPT
> -
> -
> -SR-IOV: Prerequisites and Sample Application Notes
> ---------------------------------------------------
> -
> -This section provides instructions to configure SR-IOV with Linux OS.
> -
> -#. Verify SR-IOV and ARI capabilities are enabled on the adapter using ``lspci``:
> -
> -   .. code-block:: console
> -
> -      lspci -s <slot> -vvv
> -
> -   Example output:
> -
> -   .. code-block:: console
> -
> -      [...]
> -      Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
> -      [...]
> -      Capabilities: [178 v1] Single Root I/O Virtualization (SR-IOV)
> -      [...]
> -      Kernel driver in use: LiquidIO
> -
> -#. Load the kernel module:
> -
> -   .. code-block:: console
> -
> -      modprobe liquidio
> -
> -#. Bring up the PF ports:
> -
> -   .. code-block:: console
> -
> -      ifconfig p4p1 up
> -      ifconfig p4p2 up
> -
> -#. Change PF MTU if required:
> -
> -   .. code-block:: console
> -
> -      ifconfig p4p1 mtu 9000
> -      ifconfig p4p2 mtu 9000
> -
> -#. Create VF device(s):
> -
> -   Echo number of VFs to be created into ``"sriov_numvfs"`` sysfs entry
> -   of the parent PF.
> -
> -   .. code-block:: console
> -
> -      echo 1 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
> -      echo 1 > /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
> -
> -#. Assign VF MAC address:
> -
> -   Assign MAC address to the VF using iproute2 utility. The syntax is::
> -
> -      ip link set <PF iface> vf <VF id> mac <macaddr>
> -
> -   Example output:
> -
> -   .. code-block:: console
> -
> -      ip link set p4p1 vf 0 mac F2:A8:1B:5E:B4:66
> -
> -#. Assign VF(s) to VM.
> -
> -   The VF devices may be passed through to the guest VM using qemu or
> -   virt-manager or virsh etc.
> -
> -   Example qemu guest launch command:
> -
> -   .. code-block:: console
> -
> -      ./qemu-system-x86_64 -name lio-vm -machine accel=kvm \
> -      -cpu host -m 4096 -smp 4 \
> -      -drive file=<disk_file>,if=none,id=disk1,format=<type> \
> -      -device virtio-blk-pci,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
> -      -device vfio-pci,host=03:00.3 -device vfio-pci,host=03:08.3
> -
> -#. Running testpmd
> -
> -   Refer to the document
> -   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run
> -   ``testpmd`` application.
> -
> -   .. note::
> -
> -      Use ``igb_uio`` instead of ``vfio-pci`` in VM.
> -
> -   Example output:
> -
> -   .. code-block:: console
> -
> -      [...]
> -      EAL: PCI device 0000:03:00.3 on NUMA socket 0
> -      EAL:   probe driver: 177d:9712 net_liovf
> -      EAL:   using IOMMU type 1 (Type 1)
> -      PMD: net_liovf[03:00.3]INFO: DEVICE : CN23XX VF
> -      EAL: PCI device 0000:03:08.3 on NUMA socket 0
> -      EAL:   probe driver: 177d:9712 net_liovf
> -      PMD: net_liovf[03:08.3]INFO: DEVICE : CN23XX VF
> -      Interactive-mode selected
> -      USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
> -      Configuring Port 0 (socket 0)
> -      PMD: net_liovf[03:00.3]INFO: Starting port 0
> -      Port 0: F2:A8:1B:5E:B4:66
> -      Configuring Port 1 (socket 0)
> -      PMD: net_liovf[03:08.3]INFO: Starting port 1
> -      Port 1: 32:76:CC:EE:56:D7
> -      Checking link statuses...
> -      Port 0 Link Up - speed 10000 Mbps - full-duplex
> -      Port 1 Link Up - speed 10000 Mbps - full-duplex
> -      Done
> -      testpmd>
> -
> -#. Enabling VF promiscuous mode
> -
> -   One VF per PF can be marked as trusted for promiscuous mode.
> -
> -   .. code-block:: console
> -
> -      ip link set dev <PF iface> vf <VF id> trust on
> -
> -
> -Limitations
> ------------
> -
> -VF MTU
> -~~~~~~
> -
> -VF MTU is limited by PF MTU. Raise PF value before configuring VF for larger packet size.
> -
> -VLAN offload
> -~~~~~~~~~~~~
> -
> -Tx VLAN insertion is not supported and consequently VLAN offload feature is
> -marked partial.
> -
> -Ring size
> -~~~~~~~~~
> -
> -Number of descriptors for Rx/Tx ring should be in the range 128 to 512.
> -
> -CRC stripping
> -~~~~~~~~~~~~~
> -
> -LiquidIO adapters strip ethernet FCS of every packet coming to the host interface.
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index dcc1ca1696..8e1cdd677a 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -121,13 +121,6 @@ Deprecation Notices
>  * net/bnx2x: Starting from DPDK 23.07, the Marvell QLogic bnx2x driver will be removed.
>    This decision has been made to alleviate the burden of maintaining a discontinued product.
>
> -* net/liquidio: Remove LiquidIO ethdev driver.
> -  The LiquidIO product line has been substituted
> -  with CN9K/CN10K OCTEON product line smart NICs located in ``drivers/net/octeon_ep/``.
> -  DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> -  because of the absence of updates in the driver.
> -  Due to the above reasons, the driver will be unavailable from DPDK 23.07.
> -
>  * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
>    to have another parameter ``qp_id`` to return the queue pair ID
>    which got error interrupt to the application,
> diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
> index a9b1293689..f13a7b32b6 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -68,6 +68,8 @@ Removed Items
>     Also, make sure to start the actual text at the margin.
>     =======================================================
>
> +* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
> +
>
>  API Changes
>  -----------
> diff --git a/drivers/net/liquidio/base/lio_23xx_reg.h b/drivers/net/liquidio/base/lio_23xx_reg.h
> deleted file mode 100644
> index 9f28504b53..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_reg.h
> +++ /dev/null
> @@ -1,165 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_23XX_REG_H_
> -#define _LIO_23XX_REG_H_
> -
> -/* ###################### REQUEST QUEUE ######################### */
> -
> -/* 64 registers for Input Queues Start Addr - SLI_PKT(0..63)_INSTR_BADDR */
> -#define CN23XX_SLI_PKT_INSTR_BADDR_START64     0x10010
> -
> -/* 64 registers for Input Doorbell - SLI_PKT(0..63)_INSTR_BAOFF_DBELL */
> -#define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START 0x10020
> -
> -/* 64 registers for Input Queue size - SLI_PKT(0..63)_INSTR_FIFO_RSIZE */
> -#define CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START  0x10030
> -
> -/* 64 registers for Input Queue Instr Count - SLI_PKT_IN_DONE(0..63)_CNTS */
> -#define CN23XX_SLI_PKT_IN_DONE_CNTS_START64    0x10040
> -
> -/* 64 registers (64-bit) - ES, RO, NS, Arbitration for Input Queue Data &
> - * gather list fetches. SLI_PKT(0..63)_INPUT_CONTROL.
> - */
> -#define CN23XX_SLI_PKT_INPUT_CONTROL_START64   0x10000
> -
> -/* ------- Request Queue Macros --------- */
> -
> -/* Each Input Queue register is at a 16-byte Offset in BAR0 */
> -#define CN23XX_IQ_OFFSET                       0x20000
> -
> -#define CN23XX_SLI_IQ_PKT_CONTROL64(iq)                                        \
> -       (CN23XX_SLI_PKT_INPUT_CONTROL_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_BASE_ADDR64(iq)                                  \
> -       (CN23XX_SLI_PKT_INSTR_BADDR_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_SIZE(iq)                                         \
> -       (CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_DOORBELL(iq)                                     \
> -       (CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))
> -
> -#define CN23XX_SLI_IQ_INSTR_COUNT64(iq)                                        \
> -       (CN23XX_SLI_PKT_IN_DONE_CNTS_START64 + ((iq) * CN23XX_IQ_OFFSET))
> -
> -/* Number of instructions to be read in one MAC read request.
> - * setting to Max value(4)
> - */
> -#define CN23XX_PKT_INPUT_CTL_RDSIZE                    (3 << 25)
> -#define CN23XX_PKT_INPUT_CTL_IS_64B                    (1 << 24)
> -#define CN23XX_PKT_INPUT_CTL_RST                       (1 << 23)
> -#define CN23XX_PKT_INPUT_CTL_QUIET                     (1 << 28)
> -#define CN23XX_PKT_INPUT_CTL_RING_ENB                  (1 << 22)
> -#define CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP          (1 << 6)
> -#define CN23XX_PKT_INPUT_CTL_USE_CSR                   (1 << 4)
> -#define CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP                (2)
> -
> -/* These bits[47:44] select the Physical function number within the MAC */
> -#define CN23XX_PKT_INPUT_CTL_PF_NUM_POS                45
> -/* These bits[43:32] select the function number within the PF */
> -#define CN23XX_PKT_INPUT_CTL_VF_NUM_POS                32
> -
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -#define CN23XX_PKT_INPUT_CTL_MASK                      \
> -       (CN23XX_PKT_INPUT_CTL_RDSIZE |                  \
> -        CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP |        \
> -        CN23XX_PKT_INPUT_CTL_USE_CSR)
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -#define CN23XX_PKT_INPUT_CTL_MASK                      \
> -       (CN23XX_PKT_INPUT_CTL_RDSIZE |                  \
> -        CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP |        \
> -        CN23XX_PKT_INPUT_CTL_USE_CSR |                 \
> -        CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP)
> -#endif
> -
> -/* ############################ OUTPUT QUEUE ######################### */
> -
> -/* 64 registers for Output queue control - SLI_PKT(0..63)_OUTPUT_CONTROL */
> -#define CN23XX_SLI_PKT_OUTPUT_CONTROL_START    0x10050
> -
> -/* 64 registers for Output queue buffer and info size
> - * SLI_PKT(0..63)_OUT_SIZE
> - */
> -#define CN23XX_SLI_PKT_OUT_SIZE                        0x10060
> -
> -/* 64 registers for Output Queue Start Addr - SLI_PKT(0..63)_SLIST_BADDR */
> -#define CN23XX_SLI_SLIST_BADDR_START64         0x10070
> -
> -/* 64 registers for Output Queue Packet Credits
> - * SLI_PKT(0..63)_SLIST_BAOFF_DBELL
> - */
> -#define CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START 0x10080
> -
> -/* 64 registers for Output Queue size - SLI_PKT(0..63)_SLIST_FIFO_RSIZE */
> -#define CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START  0x10090
> -
> -/* 64 registers for Output Queue Packet Count - SLI_PKT(0..63)_CNTS */
> -#define CN23XX_SLI_PKT_CNTS_START              0x100B0
> -
> -/* Each Output Queue register is at a 16-byte Offset in BAR0 */
> -#define CN23XX_OQ_OFFSET                       0x20000
> -
> -/* ------- Output Queue Macros --------- */
> -
> -#define CN23XX_SLI_OQ_PKT_CONTROL(oq)                                  \
> -       (CN23XX_SLI_PKT_OUTPUT_CONTROL_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_BASE_ADDR64(oq)                                  \
> -       (CN23XX_SLI_SLIST_BADDR_START64 + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_SIZE(oq)                                         \
> -       (CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq)                               \
> -       (CN23XX_SLI_PKT_OUT_SIZE + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_PKTS_SENT(oq)                                    \
> -       (CN23XX_SLI_PKT_CNTS_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -#define CN23XX_SLI_OQ_PKTS_CREDIT(oq)                                  \
> -       (CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START + ((oq) * CN23XX_OQ_OFFSET))
> -
> -/* ------------------ Masks ---------------- */
> -#define CN23XX_PKT_OUTPUT_CTL_IPTR             (1 << 11)
> -#define CN23XX_PKT_OUTPUT_CTL_ES               (1 << 9)
> -#define CN23XX_PKT_OUTPUT_CTL_NSR              (1 << 8)
> -#define CN23XX_PKT_OUTPUT_CTL_ROR              (1 << 7)
> -#define CN23XX_PKT_OUTPUT_CTL_DPTR             (1 << 6)
> -#define CN23XX_PKT_OUTPUT_CTL_BMODE            (1 << 5)
> -#define CN23XX_PKT_OUTPUT_CTL_ES_P             (1 << 3)
> -#define CN23XX_PKT_OUTPUT_CTL_NSR_P            (1 << 2)
> -#define CN23XX_PKT_OUTPUT_CTL_ROR_P            (1 << 1)
> -#define CN23XX_PKT_OUTPUT_CTL_RING_ENB         (1 << 0)
> -
> -/* Rings per Virtual Function [RO] */
> -#define CN23XX_PKT_INPUT_CTL_RPVF_MASK         0x3F
> -#define CN23XX_PKT_INPUT_CTL_RPVF_POS          48
> -
> -/* These bits[47:44][RO] give the Physical function
> - * number info within the MAC
> - */
> -#define CN23XX_PKT_INPUT_CTL_PF_NUM_MASK       0x7
> -
> -/* These bits[43:32][RO] give the virtual function
> - * number info within the PF
> - */
> -#define CN23XX_PKT_INPUT_CTL_VF_NUM_MASK       0x1FFF
> -
> -/* ######################### Mailbox Reg Macros ######################## */
> -#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START    0x10200
> -#define CN23XX_VF_SLI_PKT_MBOX_INT_START       0x10210
> -
> -#define CN23XX_SLI_MBOX_OFFSET                 0x20000
> -#define CN23XX_SLI_MBOX_SIG_IDX_OFFSET         0x8
> -
> -#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG(q, idx)                          \
> -       (CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START +                          \
> -        ((q) * CN23XX_SLI_MBOX_OFFSET +                                \
> -         (idx) * CN23XX_SLI_MBOX_SIG_IDX_OFFSET))
> -
> -#define CN23XX_VF_SLI_PKT_MBOX_INT(q)                                  \
> -       (CN23XX_VF_SLI_PKT_MBOX_INT_START + ((q) * CN23XX_SLI_MBOX_OFFSET))
> -
> -#endif /* _LIO_23XX_REG_H_ */
> diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c
> deleted file mode 100644
> index c6b8310b71..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_vf.c
> +++ /dev/null
> @@ -1,513 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <string.h>
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -
> -#include "lio_logs.h"
> -#include "lio_23xx_vf.h"
> -#include "lio_23xx_reg.h"
> -#include "lio_mbox.h"
> -
> -static int
> -cn23xx_vf_reset_io_queues(struct lio_device *lio_dev, uint32_t num_queues)
> -{
> -       uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT;
> -       uint64_t d64, q_no;
> -       int ret_val = 0;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       for (q_no = 0; q_no < num_queues; q_no++) {
> -               /* set RST bit to 1. This bit applies to both IQ and OQ */
> -               d64 = lio_read_csr64(lio_dev,
> -                                    CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> -               d64 = d64 | CN23XX_PKT_INPUT_CTL_RST;
> -               lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> -                               d64);
> -       }
> -
> -       /* wait until the RST bit is clear or the RST and QUIET bits are set */
> -       for (q_no = 0; q_no < num_queues; q_no++) {
> -               volatile uint64_t reg_val;
> -
> -               reg_val = lio_read_csr64(lio_dev,
> -                                        CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> -               while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) &&
> -                               !(reg_val & CN23XX_PKT_INPUT_CTL_QUIET) &&
> -                               loop) {
> -                       reg_val = lio_read_csr64(
> -                                       lio_dev,
> -                                       CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> -                       loop = loop - 1;
> -               }
> -
> -               if (loop == 0) {
> -                       lio_dev_err(lio_dev,
> -                                   "clearing the reset reg failed or setting the quiet reg failed for qno: %lu\n",
> -                                   (unsigned long)q_no);
> -                       return -1;
> -               }
> -
> -               reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST;
> -               lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> -                               reg_val);
> -
> -               reg_val = lio_read_csr64(
> -                   lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> -               if (reg_val & CN23XX_PKT_INPUT_CTL_RST) {
> -                       lio_dev_err(lio_dev,
> -                                   "clearing the reset failed for qno: %lu\n",
> -                                   (unsigned long)q_no);
> -                       ret_val = -1;
> -               }
> -       }
> -
> -       return ret_val;
> -}
> -
> -static int
> -cn23xx_vf_setup_global_input_regs(struct lio_device *lio_dev)
> -{
> -       uint64_t q_no;
> -       uint64_t d64;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       if (cn23xx_vf_reset_io_queues(lio_dev,
> -                                     lio_dev->sriov_info.rings_per_vf))
> -               return -1;
> -
> -       for (q_no = 0; q_no < (lio_dev->sriov_info.rings_per_vf); q_no++) {
> -               lio_write_csr64(lio_dev, CN23XX_SLI_IQ_DOORBELL(q_no),
> -                               0xFFFFFFFF);
> -
> -               d64 = lio_read_csr64(lio_dev,
> -                                    CN23XX_SLI_IQ_INSTR_COUNT64(q_no));
> -
> -               d64 &= 0xEFFFFFFFFFFFFFFFL;
> -
> -               lio_write_csr64(lio_dev, CN23XX_SLI_IQ_INSTR_COUNT64(q_no),
> -                               d64);
> -
> -               /* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for
> -                * the Input Queues
> -                */
> -               lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> -                               CN23XX_PKT_INPUT_CTL_MASK);
> -       }
> -
> -       return 0;
> -}
> -
> -static void
> -cn23xx_vf_setup_global_output_regs(struct lio_device *lio_dev)
> -{
> -       uint32_t reg_val;
> -       uint32_t q_no;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) {
> -               lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_CREDIT(q_no),
> -                             0xFFFFFFFF);
> -
> -               reg_val =
> -                   lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no));
> -
> -               reg_val &= 0xEFFFFFFFFFFFFFFFL;
> -
> -               lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val);
> -
> -               reg_val =
> -                   lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
> -
> -               /* set IPTR & DPTR */
> -               reg_val |=
> -                   (CN23XX_PKT_OUTPUT_CTL_IPTR | CN23XX_PKT_OUTPUT_CTL_DPTR);
> -
> -               /* reset BMODE */
> -               reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_BMODE);
> -
> -               /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
> -                * for Output Queue Scatter List
> -                * reset ROR_P, NSR_P
> -                */
> -               reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR_P);
> -               reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR_P);
> -
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -               reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ES_P);
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES_P);
> -#endif
> -               /* No Relaxed Ordering, No Snoop, 64-bit Byte swap
> -                * for Output Queue Data
> -                * reset ROR, NSR
> -                */
> -               reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR);
> -               reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR);
> -               /* set the ES bit */
> -               reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES);
> -
> -               /* write all the selected settings */
> -               lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no),
> -                             reg_val);
> -       }
> -}
> -
> -static int
> -cn23xx_vf_setup_device_regs(struct lio_device *lio_dev)
> -{
> -       PMD_INIT_FUNC_TRACE();
> -
> -       if (cn23xx_vf_setup_global_input_regs(lio_dev))
> -               return -1;
> -
> -       cn23xx_vf_setup_global_output_regs(lio_dev);
> -
> -       return 0;
> -}
> -
> -static void
> -cn23xx_vf_setup_iq_regs(struct lio_device *lio_dev, uint32_t iq_no)
> -{
> -       struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> -       uint64_t pkt_in_done = 0;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       /* Write the start of the input queue's ring and its size */
> -       lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no),
> -                       iq->base_addr_dma);
> -       lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc);
> -
> -       /* Remember the doorbell & instruction count register addr
> -        * for this queue
> -        */
> -       iq->doorbell_reg = (uint8_t *)lio_dev->hw_addr +
> -                               CN23XX_SLI_IQ_DOORBELL(iq_no);
> -       iq->inst_cnt_reg = (uint8_t *)lio_dev->hw_addr +
> -                               CN23XX_SLI_IQ_INSTR_COUNT64(iq_no);
> -       lio_dev_dbg(lio_dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
> -                   iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
> -
> -       /* Store the current instruction counter (used in flush_iq
> -        * calculation)
> -        */
> -       pkt_in_done = rte_read64(iq->inst_cnt_reg);
> -
> -       /* Clear the count by writing back what we read, but don't
> -        * enable data traffic here
> -        */
> -       rte_write64(pkt_in_done, iq->inst_cnt_reg);
> -}
> -
> -static void
> -cn23xx_vf_setup_oq_regs(struct lio_device *lio_dev, uint32_t oq_no)
> -{
> -       struct lio_droq *droq = lio_dev->droq[oq_no];
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no),
> -                       droq->desc_ring_dma);
> -       lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc);
> -
> -       lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
> -                     (droq->buffer_size | (OCTEON_RH_SIZE << 16)));
> -
> -       /* Get the mapped address of the pkt_sent and pkts_credit regs */
> -       droq->pkts_sent_reg = (uint8_t *)lio_dev->hw_addr +
> -                                       CN23XX_SLI_OQ_PKTS_SENT(oq_no);
> -       droq->pkts_credit_reg = (uint8_t *)lio_dev->hw_addr +
> -                                       CN23XX_SLI_OQ_PKTS_CREDIT(oq_no);
> -}
> -
> -static void
> -cn23xx_vf_free_mbox(struct lio_device *lio_dev)
> -{
> -       PMD_INIT_FUNC_TRACE();
> -
> -       rte_free(lio_dev->mbox[0]);
> -       lio_dev->mbox[0] = NULL;
> -
> -       rte_free(lio_dev->mbox);
> -       lio_dev->mbox = NULL;
> -}
> -
> -static int
> -cn23xx_vf_setup_mbox(struct lio_device *lio_dev)
> -{
> -       struct lio_mbox *mbox;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       if (lio_dev->mbox == NULL) {
> -               lio_dev->mbox = rte_zmalloc(NULL, sizeof(void *), 0);
> -               if (lio_dev->mbox == NULL)
> -                       return -ENOMEM;
> -       }
> -
> -       mbox = rte_zmalloc(NULL, sizeof(struct lio_mbox), 0);
> -       if (mbox == NULL) {
> -               rte_free(lio_dev->mbox);
> -               lio_dev->mbox = NULL;
> -               return -ENOMEM;
> -       }
> -
> -       rte_spinlock_init(&mbox->lock);
> -
> -       mbox->lio_dev = lio_dev;
> -
> -       mbox->q_no = 0;
> -
> -       mbox->state = LIO_MBOX_STATE_IDLE;
> -
> -       /* VF mbox interrupt reg */
> -       mbox->mbox_int_reg = (uint8_t *)lio_dev->hw_addr +
> -                               CN23XX_VF_SLI_PKT_MBOX_INT(0);
> -       /* VF reads from SIG0 reg */
> -       mbox->mbox_read_reg = (uint8_t *)lio_dev->hw_addr +
> -                               CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 0);
> -       /* VF writes into SIG1 reg */
> -       mbox->mbox_write_reg = (uint8_t *)lio_dev->hw_addr +
> -                               CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 1);
> -
> -       lio_dev->mbox[0] = mbox;
> -
> -       rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -
> -       return 0;
> -}
> -
> -static int
> -cn23xx_vf_enable_io_queues(struct lio_device *lio_dev)
> -{
> -       uint32_t q_no;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       for (q_no = 0; q_no < lio_dev->num_iqs; q_no++) {
> -               uint64_t reg_val;
> -
> -               /* set the corresponding IQ IS_64B bit */
> -               if (lio_dev->io_qmask.iq64B & (1ULL << q_no)) {
> -                       reg_val = lio_read_csr64(
> -                                       lio_dev,
> -                                       CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> -                       reg_val = reg_val | CN23XX_PKT_INPUT_CTL_IS_64B;
> -                       lio_write_csr64(lio_dev,
> -                                       CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> -                                       reg_val);
> -               }
> -
> -               /* set the corresponding IQ ENB bit */
> -               if (lio_dev->io_qmask.iq & (1ULL << q_no)) {
> -                       reg_val = lio_read_csr64(
> -                                       lio_dev,
> -                                       CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
> -                       reg_val = reg_val | CN23XX_PKT_INPUT_CTL_RING_ENB;
> -                       lio_write_csr64(lio_dev,
> -                                       CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
> -                                       reg_val);
> -               }
> -       }
> -       for (q_no = 0; q_no < lio_dev->num_oqs; q_no++) {
> -               uint32_t reg_val;
> -
> -               /* set the corresponding OQ ENB bit */
> -               if (lio_dev->io_qmask.oq & (1ULL << q_no)) {
> -                       reg_val = lio_read_csr(
> -                                       lio_dev,
> -                                       CN23XX_SLI_OQ_PKT_CONTROL(q_no));
> -                       reg_val = reg_val | CN23XX_PKT_OUTPUT_CTL_RING_ENB;
> -                       lio_write_csr(lio_dev,
> -                                     CN23XX_SLI_OQ_PKT_CONTROL(q_no),
> -                                     reg_val);
> -               }
> -       }
> -
> -       return 0;
> -}
> -
> -static void
> -cn23xx_vf_disable_io_queues(struct lio_device *lio_dev)
> -{
> -       uint32_t num_queues;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       /* per HRM, rings can only be disabled via reset operation,
> -        * NOT via SLI_PKT()_INPUT/OUTPUT_CONTROL[ENB]
> -        */
> -       num_queues = lio_dev->num_iqs;
> -       if (num_queues < lio_dev->num_oqs)
> -               num_queues = lio_dev->num_oqs;
> -
> -       cn23xx_vf_reset_io_queues(lio_dev, num_queues);
> -}
> -
> -void
> -cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev)
> -{
> -       struct lio_mbox_cmd mbox_cmd;
> -
> -       memset(&mbox_cmd, 0, sizeof(struct lio_mbox_cmd));
> -       mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
> -       mbox_cmd.msg.s.resp_needed = 0;
> -       mbox_cmd.msg.s.cmd = LIO_VF_FLR_REQUEST;
> -       mbox_cmd.msg.s.len = 1;
> -       mbox_cmd.q_no = 0;
> -       mbox_cmd.recv_len = 0;
> -       mbox_cmd.recv_status = 0;
> -       mbox_cmd.fn = NULL;
> -       mbox_cmd.fn_arg = 0;
> -
> -       lio_mbox_write(lio_dev, &mbox_cmd);
> -}
> -
> -static void
> -cn23xx_pfvf_hs_callback(struct lio_device *lio_dev,
> -                       struct lio_mbox_cmd *cmd, void *arg)
> -{
> -       uint32_t major = 0;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       rte_memcpy((uint8_t *)&lio_dev->pfvf_hsword, cmd->msg.s.params, 6);
> -       if (cmd->recv_len > 1) {
> -               struct lio_version *lio_ver = (struct lio_version *)cmd->data;
> -
> -               major = lio_ver->major;
> -               major = major << 16;
> -       }
> -
> -       rte_atomic64_set((rte_atomic64_t *)arg, major | 1);
> -}
> -
> -int
> -cn23xx_pfvf_handshake(struct lio_device *lio_dev)
> -{
> -       struct lio_mbox_cmd mbox_cmd;
> -       struct lio_version *lio_ver = (struct lio_version *)&mbox_cmd.data[0];
> -       uint32_t q_no, count = 0;
> -       rte_atomic64_t status;
> -       uint32_t pfmajor;
> -       uint32_t vfmajor;
> -       uint32_t ret;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       /* Sending VF_ACTIVE indication to the PF driver */
> -       lio_dev_dbg(lio_dev, "requesting info from PF\n");
> -
> -       mbox_cmd.msg.mbox_msg64 = 0;
> -       mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
> -       mbox_cmd.msg.s.resp_needed = 1;
> -       mbox_cmd.msg.s.cmd = LIO_VF_ACTIVE;
> -       mbox_cmd.msg.s.len = 2;
> -       mbox_cmd.data[0] = 0;
> -       lio_ver->major = LIO_BASE_MAJOR_VERSION;
> -       lio_ver->minor = LIO_BASE_MINOR_VERSION;
> -       lio_ver->micro = LIO_BASE_MICRO_VERSION;
> -       mbox_cmd.q_no = 0;
> -       mbox_cmd.recv_len = 0;
> -       mbox_cmd.recv_status = 0;
> -       mbox_cmd.fn = (lio_mbox_callback)cn23xx_pfvf_hs_callback;
> -       mbox_cmd.fn_arg = (void *)&status;
> -
> -       if (lio_mbox_write(lio_dev, &mbox_cmd)) {
> -               lio_dev_err(lio_dev, "Write to mailbox failed\n");
> -               return -1;
> -       }
> -
> -       rte_atomic64_set(&status, 0);
> -
> -       do {
> -               rte_delay_ms(1);
> -       } while ((rte_atomic64_read(&status) == 0) && (count++ < 10000));
> -
> -       ret = rte_atomic64_read(&status);
> -       if (ret == 0) {
> -               lio_dev_err(lio_dev, "cn23xx_pfvf_handshake timeout\n");
> -               return -1;
> -       }
> -
> -       for (q_no = 0; q_no < lio_dev->num_iqs; q_no++)
> -               lio_dev->instr_queue[q_no]->txpciq.s.pkind =
> -                                               lio_dev->pfvf_hsword.pkind;
> -
> -       vfmajor = LIO_BASE_MAJOR_VERSION;
> -       pfmajor = ret >> 16;
> -       if (pfmajor != vfmajor) {
> -               lio_dev_err(lio_dev,
> -                           "VF LiquidIO driver (major version %d) is not compatible with LiquidIO PF driver (major version %d)\n",
> -                           vfmajor, pfmajor);
> -               ret = -EPERM;
> -       } else {
> -               lio_dev_dbg(lio_dev,
> -                           "VF LiquidIO driver (major version %d), LiquidIO PF driver (major version %d)\n",
> -                           vfmajor, pfmajor);
> -               ret = 0;
> -       }
> -
> -       lio_dev_dbg(lio_dev, "got data from PF pkind is %d\n",
> -                   lio_dev->pfvf_hsword.pkind);
> -
> -       return ret;
> -}
> -
> -void
> -cn23xx_vf_handle_mbox(struct lio_device *lio_dev)
> -{
> -       uint64_t mbox_int_val;
> -
> -       /* read and clear by writing 1 */
> -       mbox_int_val = rte_read64(lio_dev->mbox[0]->mbox_int_reg);
> -       rte_write64(mbox_int_val, lio_dev->mbox[0]->mbox_int_reg);
> -       if (lio_mbox_read(lio_dev->mbox[0]))
> -               lio_mbox_process_message(lio_dev->mbox[0]);
> -}
> -
> -int
> -cn23xx_vf_setup_device(struct lio_device *lio_dev)
> -{
> -       uint64_t reg_val;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       /* INPUT_CONTROL[RPVF] gives the VF IOq count */
> -       reg_val = lio_read_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(0));
> -
> -       lio_dev->pf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_PF_NUM_POS) &
> -                               CN23XX_PKT_INPUT_CTL_PF_NUM_MASK;
> -       lio_dev->vf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_VF_NUM_POS) &
> -                               CN23XX_PKT_INPUT_CTL_VF_NUM_MASK;
> -
> -       reg_val = reg_val >> CN23XX_PKT_INPUT_CTL_RPVF_POS;
> -
> -       lio_dev->sriov_info.rings_per_vf =
> -                               reg_val & CN23XX_PKT_INPUT_CTL_RPVF_MASK;
> -
> -       lio_dev->default_config = lio_get_conf(lio_dev);
> -       if (lio_dev->default_config == NULL)
> -               return -1;
> -
> -       lio_dev->fn_list.setup_iq_regs          = cn23xx_vf_setup_iq_regs;
> -       lio_dev->fn_list.setup_oq_regs          = cn23xx_vf_setup_oq_regs;
> -       lio_dev->fn_list.setup_mbox             = cn23xx_vf_setup_mbox;
> -       lio_dev->fn_list.free_mbox              = cn23xx_vf_free_mbox;
> -
> -       lio_dev->fn_list.setup_device_regs      = cn23xx_vf_setup_device_regs;
> -
> -       lio_dev->fn_list.enable_io_queues       = cn23xx_vf_enable_io_queues;
> -       lio_dev->fn_list.disable_io_queues      = cn23xx_vf_disable_io_queues;
> -
> -       return 0;
> -}
> -
> diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h
> deleted file mode 100644
> index 8e5362db15..0000000000
> --- a/drivers/net/liquidio/base/lio_23xx_vf.h
> +++ /dev/null
> @@ -1,63 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_23XX_VF_H_
> -#define _LIO_23XX_VF_H_
> -
> -#include <stdio.h>
> -
> -#include "lio_struct.h"
> -
> -static const struct lio_config default_cn23xx_conf     = {
> -       .card_type                              = LIO_23XX,
> -       .card_name                              = LIO_23XX_NAME,
> -       /** IQ attributes */
> -       .iq                                     = {
> -               .max_iqs                        = CN23XX_CFG_IO_QUEUES,
> -               .pending_list_size              =
> -                       (CN23XX_MAX_IQ_DESCRIPTORS * CN23XX_CFG_IO_QUEUES),
> -               .instr_type                     = OCTEON_64BYTE_INSTR,
> -       },
> -
> -       /** OQ attributes */
> -       .oq                                     = {
> -               .max_oqs                        = CN23XX_CFG_IO_QUEUES,
> -               .info_ptr                       = OCTEON_OQ_INFOPTR_MODE,
> -               .refill_threshold               = CN23XX_OQ_REFIL_THRESHOLD,
> -       },
> -
> -       .num_nic_ports                          = CN23XX_DEFAULT_NUM_PORTS,
> -       .num_def_rx_descs                       = CN23XX_MAX_OQ_DESCRIPTORS,
> -       .num_def_tx_descs                       = CN23XX_MAX_IQ_DESCRIPTORS,
> -       .def_rx_buf_size                        = CN23XX_OQ_BUF_SIZE,
> -};
> -
> -static inline const struct lio_config *
> -lio_get_conf(struct lio_device *lio_dev)
> -{
> -       const struct lio_config *default_lio_conf = NULL;
> -
> -       /* check the LIO Device model & return the corresponding lio
> -        * configuration
> -        */
> -       default_lio_conf = &default_cn23xx_conf;
> -
> -       if (default_lio_conf == NULL) {
> -               lio_dev_err(lio_dev, "Configuration verification failed\n");
> -               return NULL;
> -       }
> -
> -       return default_lio_conf;
> -}
> -
> -#define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT  100000
> -
> -void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev);
> -
> -int cn23xx_pfvf_handshake(struct lio_device *lio_dev);
> -
> -int cn23xx_vf_setup_device(struct lio_device  *lio_dev);
> -
> -void cn23xx_vf_handle_mbox(struct lio_device *lio_dev);
> -#endif /* _LIO_23XX_VF_H_  */
> diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
> deleted file mode 100644
> index 5e119c1241..0000000000
> --- a/drivers/net/liquidio/base/lio_hw_defs.h
> +++ /dev/null
> @@ -1,239 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_HW_DEFS_H_
> -#define _LIO_HW_DEFS_H_
> -
> -#include <rte_io.h>
> -
> -#ifndef PCI_VENDOR_ID_CAVIUM
> -#define PCI_VENDOR_ID_CAVIUM   0x177D
> -#endif
> -
> -#define LIO_CN23XX_VF_VID      0x9712
> -
> -/* CN23xx subsystem device ids */
> -#define PCI_SUBSYS_DEV_ID_CN2350_210           0x0004
> -#define PCI_SUBSYS_DEV_ID_CN2360_210           0x0005
> -#define PCI_SUBSYS_DEV_ID_CN2360_225           0x0006
> -#define PCI_SUBSYS_DEV_ID_CN2350_225           0x0007
> -#define PCI_SUBSYS_DEV_ID_CN2350_210SVPN3      0x0008
> -#define PCI_SUBSYS_DEV_ID_CN2360_210SVPN3      0x0009
> -#define PCI_SUBSYS_DEV_ID_CN2350_210SVPT       0x000a
> -#define PCI_SUBSYS_DEV_ID_CN2360_210SVPT       0x000b
> -
> -/* --------------------------CONFIG VALUES------------------------ */
> -
> -/* CN23xx IQ configuration macros */
> -#define CN23XX_MAX_RINGS_PER_PF                        64
> -#define CN23XX_MAX_RINGS_PER_VF                        8
> -
> -#define CN23XX_MAX_INPUT_QUEUES                        CN23XX_MAX_RINGS_PER_PF
> -#define CN23XX_MAX_IQ_DESCRIPTORS              512
> -#define CN23XX_MIN_IQ_DESCRIPTORS              128
> -
> -#define CN23XX_MAX_OUTPUT_QUEUES               CN23XX_MAX_RINGS_PER_PF
> -#define CN23XX_MAX_OQ_DESCRIPTORS              512
> -#define CN23XX_MIN_OQ_DESCRIPTORS              128
> -#define CN23XX_OQ_BUF_SIZE                     1536
> -
> -#define CN23XX_OQ_REFIL_THRESHOLD              16
> -
> -#define CN23XX_DEFAULT_NUM_PORTS               1
> -
> -#define CN23XX_CFG_IO_QUEUES                   CN23XX_MAX_RINGS_PER_PF
> -
> -/* common OCTEON configuration macros */
> -#define OCTEON_64BYTE_INSTR                    64
> -#define OCTEON_OQ_INFOPTR_MODE                 1
> -
> -/* Max IOQs per LIO Link */
> -#define LIO_MAX_IOQS_PER_IF                    64
> -
> -/* Wait time in milliseconds for FLR */
> -#define LIO_PCI_FLR_WAIT                       100
> -
> -enum lio_card_type {
> -       LIO_23XX /* 23xx */
> -};
> -
> -#define LIO_23XX_NAME "23xx"
> -
> -#define LIO_DEV_RUNNING                0xc
> -
> -#define LIO_OQ_REFILL_THRESHOLD_CFG(cfg)                               \
> -               ((cfg)->default_config->oq.refill_threshold)
> -#define LIO_NUM_DEF_TX_DESCS_CFG(cfg)                                  \
> -               ((cfg)->default_config->num_def_tx_descs)
> -
> -#define LIO_IQ_INSTR_TYPE(cfg)         ((cfg)->default_config->iq.instr_type)
> -
> -/* The following config values are fixed and should not be modified. */
> -
> -/* Maximum number of Instruction queues */
> -#define LIO_MAX_INSTR_QUEUES(lio_dev)          CN23XX_MAX_RINGS_PER_VF
> -
> -#define LIO_MAX_POSSIBLE_INSTR_QUEUES          CN23XX_MAX_INPUT_QUEUES
> -#define LIO_MAX_POSSIBLE_OUTPUT_QUEUES         CN23XX_MAX_OUTPUT_QUEUES
> -
> -#define LIO_DEVICE_NAME_LEN            32
> -#define LIO_BASE_MAJOR_VERSION         1
> -#define LIO_BASE_MINOR_VERSION         5
> -#define LIO_BASE_MICRO_VERSION         1
> -
> -#define LIO_FW_VERSION_LENGTH          32
> -
> -#define LIO_Q_RECONF_MIN_VERSION       "1.7.0"
> -#define LIO_VF_TRUST_MIN_VERSION       "1.7.1"
> -
> -/** Tag types used by Octeon cores in its work. */
> -enum octeon_tag_type {
> -       OCTEON_ORDERED_TAG      = 0,
> -       OCTEON_ATOMIC_TAG       = 1,
> -};
> -
> -/* pre-defined host->NIC tag values */
> -#define LIO_CONTROL    (0x11111110)
> -#define LIO_DATA(i)    (0x11111111 + (i))
> -
> -/* used for NIC operations */
> -#define LIO_OPCODE     1
> -
> -/* Subcodes are used by host driver/apps to identify the sub-operation
> - * for the core. They only need to be unique for a given subsystem.
> - */
> -#define LIO_OPCODE_SUBCODE(op, sub)            \
> -               ((((op) & 0x0f) << 8) | ((sub) & 0x7f))
> -
> -/** LIO_OPCODE subcodes */
> -/* This subcode is sent by core PCI driver to indicate cores are ready. */
> -#define LIO_OPCODE_NW_DATA             0x02 /* network packet data */
> -#define LIO_OPCODE_CMD                 0x03
> -#define LIO_OPCODE_INFO                        0x04
> -#define LIO_OPCODE_PORT_STATS          0x05
> -#define LIO_OPCODE_IF_CFG              0x09
> -
> -#define LIO_MIN_RX_BUF_SIZE            64
> -#define LIO_MAX_RX_PKTLEN              (64 * 1024)
> -
> -/* NIC Command types */
> -#define LIO_CMD_CHANGE_MTU             0x1
> -#define LIO_CMD_CHANGE_DEVFLAGS                0x3
> -#define LIO_CMD_RX_CTL                 0x4
> -#define LIO_CMD_CLEAR_STATS            0x6
> -#define LIO_CMD_SET_RSS                        0xD
> -#define LIO_CMD_TNL_RX_CSUM_CTL                0x10
> -#define LIO_CMD_TNL_TX_CSUM_CTL                0x11
> -#define LIO_CMD_ADD_VLAN_FILTER                0x17
> -#define LIO_CMD_DEL_VLAN_FILTER                0x18
> -#define LIO_CMD_VXLAN_PORT_CONFIG      0x19
> -#define LIO_CMD_QUEUE_COUNT_CTL                0x1f
> -
> -#define LIO_CMD_VXLAN_PORT_ADD         0x0
> -#define LIO_CMD_VXLAN_PORT_DEL         0x1
> -#define LIO_CMD_RXCSUM_ENABLE          0x0
> -#define LIO_CMD_TXCSUM_ENABLE          0x0
> -
> -/* RX(packets coming from wire) Checksum verification flags */
> -/* TCP/UDP csum */
> -#define LIO_L4_CSUM_VERIFIED           0x1
> -#define LIO_IP_CSUM_VERIFIED           0x2
> -
> -/* RSS */
> -#define LIO_RSS_PARAM_DISABLE_RSS              0x10
> -#define LIO_RSS_PARAM_HASH_KEY_UNCHANGED       0x08
> -#define LIO_RSS_PARAM_ITABLE_UNCHANGED         0x04
> -#define LIO_RSS_PARAM_HASH_INFO_UNCHANGED      0x02
> -
> -#define LIO_RSS_HASH_IPV4                      0x100
> -#define LIO_RSS_HASH_TCP_IPV4                  0x200
> -#define LIO_RSS_HASH_IPV6                      0x400
> -#define LIO_RSS_HASH_TCP_IPV6                  0x1000
> -#define LIO_RSS_HASH_IPV6_EX                   0x800
> -#define LIO_RSS_HASH_TCP_IPV6_EX               0x2000
> -
> -#define LIO_RSS_OFFLOAD_ALL (          \
> -               LIO_RSS_HASH_IPV4 |     \
> -               LIO_RSS_HASH_TCP_IPV4 | \
> -               LIO_RSS_HASH_IPV6 |     \
> -               LIO_RSS_HASH_TCP_IPV6 | \
> -               LIO_RSS_HASH_IPV6_EX |  \
> -               LIO_RSS_HASH_TCP_IPV6_EX)
> -
> -#define LIO_RSS_MAX_TABLE_SZ           128
> -#define LIO_RSS_MAX_KEY_SZ             40
> -#define LIO_RSS_PARAM_SIZE             16
> -
> -/* Interface flags communicated between host driver and core app. */
> -enum lio_ifflags {
> -       LIO_IFFLAG_PROMISC      = 0x01,
> -       LIO_IFFLAG_ALLMULTI     = 0x02,
> -       LIO_IFFLAG_UNICAST      = 0x10
> -};
> -
> -/* Routines for reading and writing CSRs */
> -#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
> -#define lio_write_csr(lio_dev, reg_off, value)                         \
> -       do {                                                            \
> -               typeof(lio_dev) _dev = lio_dev;                         \
> -               typeof(reg_off) _reg_off = reg_off;                     \
> -               typeof(value) _value = value;                           \
> -               PMD_REGS_LOG(_dev,                                      \
> -                            "Write32: Reg: 0x%08lx Val: 0x%08lx\n",    \
> -                            (unsigned long)_reg_off,                   \
> -                            (unsigned long)_value);                    \
> -               rte_write32(_value, _dev->hw_addr + _reg_off);          \
> -       } while (0)
> -
> -#define lio_write_csr64(lio_dev, reg_off, val64)                       \
> -       do {                                                            \
> -               typeof(lio_dev) _dev = lio_dev;                         \
> -               typeof(reg_off) _reg_off = reg_off;                     \
> -               typeof(val64) _val64 = val64;                           \
> -               PMD_REGS_LOG(                                           \
> -                   _dev,                                               \
> -                   "Write64: Reg: 0x%08lx Val: 0x%016llx\n",           \
> -                   (unsigned long)_reg_off,                            \
> -                   (unsigned long long)_val64);                        \
> -               rte_write64(_val64, _dev->hw_addr + _reg_off);          \
> -       } while (0)
> -
> -#define lio_read_csr(lio_dev, reg_off)                                 \
> -       ({                                                              \
> -               typeof(lio_dev) _dev = lio_dev;                         \
> -               typeof(reg_off) _reg_off = reg_off;                     \
> -               uint32_t val = rte_read32(_dev->hw_addr + _reg_off);    \
> -               PMD_REGS_LOG(_dev,                                      \
> -                            "Read32: Reg: 0x%08lx Val: 0x%08lx\n",     \
> -                            (unsigned long)_reg_off,                   \
> -                            (unsigned long)val);                       \
> -               val;                                                    \
> -       })
> -
> -#define lio_read_csr64(lio_dev, reg_off)                               \
> -       ({                                                              \
> -               typeof(lio_dev) _dev = lio_dev;                         \
> -               typeof(reg_off) _reg_off = reg_off;                     \
> -               uint64_t val64 = rte_read64(_dev->hw_addr + _reg_off);  \
> -               PMD_REGS_LOG(                                           \
> -                   _dev,                                               \
> -                   "Read64: Reg: 0x%08lx Val: 0x%016llx\n",            \
> -                   (unsigned long)_reg_off,                            \
> -                   (unsigned long long)val64);                         \
> -               val64;                                                  \
> -       })
> -#else
> -#define lio_write_csr(lio_dev, reg_off, value)                         \
> -       rte_write32(value, (lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_write_csr64(lio_dev, reg_off, val64)                       \
> -       rte_write64(val64, (lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_read_csr(lio_dev, reg_off)                                 \
> -       rte_read32((lio_dev)->hw_addr + (reg_off))
> -
> -#define lio_read_csr64(lio_dev, reg_off)                               \
> -       rte_read64((lio_dev)->hw_addr + (reg_off))
> -#endif
> -#endif /* _LIO_HW_DEFS_H_ */
> diff --git a/drivers/net/liquidio/base/lio_mbox.c b/drivers/net/liquidio/base/lio_mbox.c
> deleted file mode 100644
> index 2ac2b1b334..0000000000
> --- a/drivers/net/liquidio/base/lio_mbox.c
> +++ /dev/null
> @@ -1,246 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -
> -#include "lio_logs.h"
> -#include "lio_struct.h"
> -#include "lio_mbox.h"
> -
> -/**
> - * lio_mbox_read:
> - * @mbox: Pointer mailbox
> - *
> - * Reads the 8-bytes of data from the mbox register
> - * Writes back the acknowledgment indicating completion of read
> - */
> -int
> -lio_mbox_read(struct lio_mbox *mbox)
> -{
> -       union lio_mbox_message msg;
> -       int ret = 0;
> -
> -       msg.mbox_msg64 = rte_read64(mbox->mbox_read_reg);
> -
> -       if ((msg.mbox_msg64 == LIO_PFVFACK) || (msg.mbox_msg64 == LIO_PFVFSIG))
> -               return 0;
> -
> -       if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
> -               mbox->mbox_req.data[mbox->mbox_req.recv_len - 1] =
> -                                       msg.mbox_msg64;
> -               mbox->mbox_req.recv_len++;
> -       } else {
> -               if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
> -                       mbox->mbox_resp.data[mbox->mbox_resp.recv_len - 1] =
> -                                       msg.mbox_msg64;
> -                       mbox->mbox_resp.recv_len++;
> -               } else {
> -                       if ((mbox->state & LIO_MBOX_STATE_IDLE) &&
> -                                       (msg.s.type == LIO_MBOX_REQUEST)) {
> -                               mbox->state &= ~LIO_MBOX_STATE_IDLE;
> -                               mbox->state |= LIO_MBOX_STATE_REQ_RECEIVING;
> -                               mbox->mbox_req.msg.mbox_msg64 = msg.mbox_msg64;
> -                               mbox->mbox_req.q_no = mbox->q_no;
> -                               mbox->mbox_req.recv_len = 1;
> -                       } else {
> -                               if ((mbox->state &
> -                                    LIO_MBOX_STATE_RES_PENDING) &&
> -                                   (msg.s.type == LIO_MBOX_RESPONSE)) {
> -                                       mbox->state &=
> -                                               ~LIO_MBOX_STATE_RES_PENDING;
> -                                       mbox->state |=
> -                                               LIO_MBOX_STATE_RES_RECEIVING;
> -                                       mbox->mbox_resp.msg.mbox_msg64 =
> -                                                               msg.mbox_msg64;
> -                                       mbox->mbox_resp.q_no = mbox->q_no;
> -                                       mbox->mbox_resp.recv_len = 1;
> -                               } else {
> -                                       rte_write64(LIO_PFVFERR,
> -                                                   mbox->mbox_read_reg);
> -                                       mbox->state |= LIO_MBOX_STATE_ERROR;
> -                                       return -1;
> -                               }
> -                       }
> -               }
> -       }
> -
> -       if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
> -               if (mbox->mbox_req.recv_len < msg.s.len) {
> -                       ret = 0;
> -               } else {
> -                       mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVING;
> -                       mbox->state |= LIO_MBOX_STATE_REQ_RECEIVED;
> -                       ret = 1;
> -               }
> -       } else {
> -               if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
> -                       if (mbox->mbox_resp.recv_len < msg.s.len) {
> -                               ret = 0;
> -                       } else {
> -                               mbox->state &= ~LIO_MBOX_STATE_RES_RECEIVING;
> -                               mbox->state |= LIO_MBOX_STATE_RES_RECEIVED;
> -                               ret = 1;
> -                       }
> -               } else {
> -                       RTE_ASSERT(0);
> -               }
> -       }
> -
> -       rte_write64(LIO_PFVFACK, mbox->mbox_read_reg);
> -
> -       return ret;
> -}
> -
> -/**
> - * lio_mbox_write:
> - * @lio_dev: Pointer lio device
> - * @mbox_cmd: Cmd to send to mailbox.
> - *
> - * Populates the queue specific mbox structure
> - * with cmd information.
> - * Write the cmd to mbox register
> - */
> -int
> -lio_mbox_write(struct lio_device *lio_dev,
> -              struct lio_mbox_cmd *mbox_cmd)
> -{
> -       struct lio_mbox *mbox = lio_dev->mbox[mbox_cmd->q_no];
> -       uint32_t count, i, ret = LIO_MBOX_STATUS_SUCCESS;
> -
> -       if ((mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) &&
> -                       !(mbox->state & LIO_MBOX_STATE_REQ_RECEIVED))
> -               return LIO_MBOX_STATUS_FAILED;
> -
> -       if ((mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) &&
> -                       !(mbox->state & LIO_MBOX_STATE_IDLE))
> -               return LIO_MBOX_STATUS_BUSY;
> -
> -       if (mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) {
> -               rte_memcpy(&mbox->mbox_resp, mbox_cmd,
> -                          sizeof(struct lio_mbox_cmd));
> -               mbox->state = LIO_MBOX_STATE_RES_PENDING;
> -       }
> -
> -       count = 0;
> -
> -       while (rte_read64(mbox->mbox_write_reg) != LIO_PFVFSIG) {
> -               rte_delay_ms(1);
> -               if (count++ == 1000) {
> -                       ret = LIO_MBOX_STATUS_FAILED;
> -                       break;
> -               }
> -       }
> -
> -       if (ret == LIO_MBOX_STATUS_SUCCESS) {
> -               rte_write64(mbox_cmd->msg.mbox_msg64, mbox->mbox_write_reg);
> -               for (i = 0; i < (uint32_t)(mbox_cmd->msg.s.len - 1); i++) {
> -                       count = 0;
> -                       while (rte_read64(mbox->mbox_write_reg) !=
> -                                       LIO_PFVFACK) {
> -                               rte_delay_ms(1);
> -                               if (count++ == 1000) {
> -                                       ret = LIO_MBOX_STATUS_FAILED;
> -                                       break;
> -                               }
> -                       }
> -                       rte_write64(mbox_cmd->data[i], mbox->mbox_write_reg);
> -               }
> -       }
> -
> -       if (mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) {
> -               mbox->state = LIO_MBOX_STATE_IDLE;
> -               rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -       } else {
> -               if ((!mbox_cmd->msg.s.resp_needed) ||
> -                               (ret == LIO_MBOX_STATUS_FAILED)) {
> -                       mbox->state &= ~LIO_MBOX_STATE_RES_PENDING;
> -                       if (!(mbox->state & (LIO_MBOX_STATE_REQ_RECEIVING |
> -                                            LIO_MBOX_STATE_REQ_RECEIVED)))
> -                               mbox->state = LIO_MBOX_STATE_IDLE;
> -               }
> -       }
> -
> -       return ret;
> -}
> -
> -/**
> - * lio_mbox_process_cmd:
> - * @mbox: Pointer mailbox
> - * @mbox_cmd: Pointer to command received
> - *
> - * Process the cmd received in mbox
> - */
> -static int
> -lio_mbox_process_cmd(struct lio_mbox *mbox,
> -                    struct lio_mbox_cmd *mbox_cmd)
> -{
> -       struct lio_device *lio_dev = mbox->lio_dev;
> -
> -       if (mbox_cmd->msg.s.cmd == LIO_CORES_CRASHED)
> -               lio_dev_err(lio_dev, "Octeon core(s) crashed or got stuck!\n");
> -
> -       return 0;
> -}
> -
> -/**
> - * Process the received mbox message.
> - */
> -int
> -lio_mbox_process_message(struct lio_mbox *mbox)
> -{
> -       struct lio_mbox_cmd mbox_cmd;
> -
> -       if (mbox->state & LIO_MBOX_STATE_ERROR) {
> -               if (mbox->state & (LIO_MBOX_STATE_RES_PENDING |
> -                                  LIO_MBOX_STATE_RES_RECEIVING)) {
> -                       rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
> -                                  sizeof(struct lio_mbox_cmd));
> -                       mbox->state = LIO_MBOX_STATE_IDLE;
> -                       rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -                       mbox_cmd.recv_status = 1;
> -                       if (mbox_cmd.fn)
> -                               mbox_cmd.fn(mbox->lio_dev, &mbox_cmd,
> -                                           mbox_cmd.fn_arg);
> -
> -                       return 0;
> -               }
> -
> -               mbox->state = LIO_MBOX_STATE_IDLE;
> -               rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -
> -               return 0;
> -       }
> -
> -       if (mbox->state & LIO_MBOX_STATE_RES_RECEIVED) {
> -               rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
> -                          sizeof(struct lio_mbox_cmd));
> -               mbox->state = LIO_MBOX_STATE_IDLE;
> -               rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -               mbox_cmd.recv_status = 0;
> -               if (mbox_cmd.fn)
> -                       mbox_cmd.fn(mbox->lio_dev, &mbox_cmd, mbox_cmd.fn_arg);
> -
> -               return 0;
> -       }
> -
> -       if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVED) {
> -               rte_memcpy(&mbox_cmd, &mbox->mbox_req,
> -                          sizeof(struct lio_mbox_cmd));
> -               if (!mbox_cmd.msg.s.resp_needed) {
> -                       mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVED;
> -                       if (!(mbox->state & LIO_MBOX_STATE_RES_PENDING))
> -                               mbox->state = LIO_MBOX_STATE_IDLE;
> -                       rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
> -               }
> -
> -               lio_mbox_process_cmd(mbox, &mbox_cmd);
> -
> -               return 0;
> -       }
> -
> -       RTE_ASSERT(0);
> -
> -       return 0;
> -}
> diff --git a/drivers/net/liquidio/base/lio_mbox.h b/drivers/net/liquidio/base/lio_mbox.h
> deleted file mode 100644
> index 457917e91f..0000000000
> --- a/drivers/net/liquidio/base/lio_mbox.h
> +++ /dev/null
> @@ -1,102 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_MBOX_H_
> -#define _LIO_MBOX_H_
> -
> -#include <stdint.h>
> -
> -#include <rte_spinlock.h>
> -
> -/* Macros for Mail Box Communication */
> -
> -#define LIO_MBOX_DATA_MAX                      32
> -
> -#define LIO_VF_ACTIVE                          0x1
> -#define LIO_VF_FLR_REQUEST                     0x2
> -#define LIO_CORES_CRASHED                      0x3
> -
> -/* Macro for Read acknowledgment */
> -#define LIO_PFVFACK                            0xffffffffffffffff
> -#define LIO_PFVFSIG                            0x1122334455667788
> -#define LIO_PFVFERR                            0xDEADDEADDEADDEAD
> -
> -enum lio_mbox_cmd_status {
> -       LIO_MBOX_STATUS_SUCCESS         = 0,
> -       LIO_MBOX_STATUS_FAILED          = 1,
> -       LIO_MBOX_STATUS_BUSY            = 2
> -};
> -
> -enum lio_mbox_message_type {
> -       LIO_MBOX_REQUEST        = 0,
> -       LIO_MBOX_RESPONSE       = 1
> -};
> -
> -union lio_mbox_message {
> -       uint64_t mbox_msg64;
> -       struct {
> -               uint16_t type : 1;
> -               uint16_t resp_needed : 1;
> -               uint16_t cmd : 6;
> -               uint16_t len : 8;
> -               uint8_t params[6];
> -       } s;
> -};
> -
> -typedef void (*lio_mbox_callback)(void *, void *, void *);
> -
> -struct lio_mbox_cmd {
> -       union lio_mbox_message msg;
> -       uint64_t data[LIO_MBOX_DATA_MAX];
> -       uint32_t q_no;
> -       uint32_t recv_len;
> -       uint32_t recv_status;
> -       lio_mbox_callback fn;
> -       void *fn_arg;
> -};
> -
> -enum lio_mbox_state {
> -       LIO_MBOX_STATE_IDLE             = 1,
> -       LIO_MBOX_STATE_REQ_RECEIVING    = 2,
> -       LIO_MBOX_STATE_REQ_RECEIVED     = 4,
> -       LIO_MBOX_STATE_RES_PENDING      = 8,
> -       LIO_MBOX_STATE_RES_RECEIVING    = 16,
> -       LIO_MBOX_STATE_RES_RECEIVED     = 16,
> -       LIO_MBOX_STATE_ERROR            = 32
> -};
> -
> -struct lio_mbox {
> -       /* A spinlock to protect access to this q_mbox. */
> -       rte_spinlock_t lock;
> -
> -       struct lio_device *lio_dev;
> -
> -       uint32_t q_no;
> -
> -       enum lio_mbox_state state;
> -
> -       /* SLI_MAC_PF_MBOX_INT for PF, SLI_PKT_MBOX_INT for VF. */
> -       void *mbox_int_reg;
> -
> -       /* SLI_PKT_PF_VF_MBOX_SIG(0) for PF,
> -        * SLI_PKT_PF_VF_MBOX_SIG(1) for VF.
> -        */
> -       void *mbox_write_reg;
> -
> -       /* SLI_PKT_PF_VF_MBOX_SIG(1) for PF,
> -        * SLI_PKT_PF_VF_MBOX_SIG(0) for VF.
> -        */
> -       void *mbox_read_reg;
> -
> -       struct lio_mbox_cmd mbox_req;
> -
> -       struct lio_mbox_cmd mbox_resp;
> -
> -};
> -
> -int lio_mbox_read(struct lio_mbox *mbox);
> -int lio_mbox_write(struct lio_device *lio_dev,
> -                  struct lio_mbox_cmd *mbox_cmd);
> -int lio_mbox_process_message(struct lio_mbox *mbox);
> -#endif /* _LIO_MBOX_H_ */
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> deleted file mode 100644
> index ebcfbb1a5c..0000000000
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ /dev/null
> @@ -1,2147 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <rte_string_fns.h>
> -#include <ethdev_driver.h>
> -#include <ethdev_pci.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -#include <rte_alarm.h>
> -#include <rte_ether.h>
> -
> -#include "lio_logs.h"
> -#include "lio_23xx_vf.h"
> -#include "lio_ethdev.h"
> -#include "lio_rxtx.h"
> -
> -/* Default RSS key in use */
> -static uint8_t lio_rss_key[40] = {
> -       0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
> -       0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
> -       0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
> -       0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
> -       0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
> -};
> -
> -static const struct rte_eth_desc_lim lio_rx_desc_lim = {
> -       .nb_max         = CN23XX_MAX_OQ_DESCRIPTORS,
> -       .nb_min         = CN23XX_MIN_OQ_DESCRIPTORS,
> -       .nb_align       = 1,
> -};
> -
> -static const struct rte_eth_desc_lim lio_tx_desc_lim = {
> -       .nb_max         = CN23XX_MAX_IQ_DESCRIPTORS,
> -       .nb_min         = CN23XX_MIN_IQ_DESCRIPTORS,
> -       .nb_align       = 1,
> -};
> -
> -/* Wait for control command to reach nic. */
> -static uint16_t
> -lio_wait_for_ctrl_cmd(struct lio_device *lio_dev,
> -                     struct lio_dev_ctrl_cmd *ctrl_cmd)
> -{
> -       uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> -
> -       while ((ctrl_cmd->cond == 0) && --timeout) {
> -               lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -               rte_delay_ms(1);
> -       }
> -
> -       return !timeout;
> -}
> -
> -/**
> - * \brief Send Rx control command
> - * @param eth_dev Pointer to the structure rte_eth_dev
> - * @param start_stop whether to start or stop
> - */
> -static int
> -lio_send_rx_ctrl_cmd(struct rte_eth_dev *eth_dev, int start_stop)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_RX_CTL;
> -       ctrl_pkt.ncmd.s.param1 = start_stop;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send RX Control message\n");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "RX Control command timed out\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -/* store statistics names and its offset in stats structure */
> -struct rte_lio_xstats_name_off {
> -       char name[RTE_ETH_XSTATS_NAME_SIZE];
> -       unsigned int offset;
> -};
> -
> -static const struct rte_lio_xstats_name_off rte_lio_stats_strings[] = {
> -       {"rx_pkts", offsetof(struct octeon_rx_stats, total_rcvd)},
> -       {"rx_bytes", offsetof(struct octeon_rx_stats, bytes_rcvd)},
> -       {"rx_broadcast_pkts", offsetof(struct octeon_rx_stats, total_bcst)},
> -       {"rx_multicast_pkts", offsetof(struct octeon_rx_stats, total_mcst)},
> -       {"rx_flow_ctrl_pkts", offsetof(struct octeon_rx_stats, ctl_rcvd)},
> -       {"rx_fifo_err", offsetof(struct octeon_rx_stats, fifo_err)},
> -       {"rx_dmac_drop", offsetof(struct octeon_rx_stats, dmac_drop)},
> -       {"rx_fcs_err", offsetof(struct octeon_rx_stats, fcs_err)},
> -       {"rx_jabber_err", offsetof(struct octeon_rx_stats, jabber_err)},
> -       {"rx_l2_err", offsetof(struct octeon_rx_stats, l2_err)},
> -       {"rx_vxlan_pkts", offsetof(struct octeon_rx_stats, fw_rx_vxlan)},
> -       {"rx_vxlan_err", offsetof(struct octeon_rx_stats, fw_rx_vxlan_err)},
> -       {"rx_lro_pkts", offsetof(struct octeon_rx_stats, fw_lro_pkts)},
> -       {"tx_pkts", (offsetof(struct octeon_tx_stats, total_pkts_sent)) +
> -                                               sizeof(struct octeon_rx_stats)},
> -       {"tx_bytes", (offsetof(struct octeon_tx_stats, total_bytes_sent)) +
> -                                               sizeof(struct octeon_rx_stats)},
> -       {"tx_broadcast_pkts",
> -               (offsetof(struct octeon_tx_stats, bcast_pkts_sent)) +
> -                       sizeof(struct octeon_rx_stats)},
> -       {"tx_multicast_pkts",
> -               (offsetof(struct octeon_tx_stats, mcast_pkts_sent)) +
> -                       sizeof(struct octeon_rx_stats)},
> -       {"tx_flow_ctrl_pkts", (offsetof(struct octeon_tx_stats, ctl_sent)) +
> -                                               sizeof(struct octeon_rx_stats)},
> -       {"tx_fifo_err", (offsetof(struct octeon_tx_stats, fifo_err)) +
> -                                               sizeof(struct octeon_rx_stats)},
> -       {"tx_total_collisions", (offsetof(struct octeon_tx_stats,
> -                                         total_collisions)) +
> -                                               sizeof(struct octeon_rx_stats)},
> -       {"tx_tso", (offsetof(struct octeon_tx_stats, fw_tso)) +
> -                                               sizeof(struct octeon_rx_stats)},
> -       {"tx_vxlan_pkts", (offsetof(struct octeon_tx_stats, fw_tx_vxlan)) +
> -                                               sizeof(struct octeon_rx_stats)},
> -};
> -
> -#define LIO_NB_XSTATS  RTE_DIM(rte_lio_stats_strings)
> -
> -/* Get hw stats of the port */
> -static int
> -lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
> -                  unsigned int n)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> -       struct octeon_link_stats *hw_stats;
> -       struct lio_link_stats_resp *resp;
> -       struct lio_soft_command *sc;
> -       uint32_t resp_size;
> -       unsigned int i;
> -       int retval;
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down\n",
> -                           lio_dev->port_id);
> -               return -EINVAL;
> -       }
> -
> -       if (n < LIO_NB_XSTATS)
> -               return LIO_NB_XSTATS;
> -
> -       resp_size = sizeof(struct lio_link_stats_resp);
> -       sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> -       if (sc == NULL)
> -               return -ENOMEM;
> -
> -       resp = (struct lio_link_stats_resp *)sc->virtrptr;
> -       lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> -                                LIO_OPCODE_PORT_STATS, 0, 0, 0);
> -
> -       /* Setting wait time in seconds */
> -       sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> -       retval = lio_send_soft_command(lio_dev, sc);
> -       if (retval == LIO_IQ_SEND_FAILED) {
> -               lio_dev_err(lio_dev, "failed to get port stats from firmware. status: %x\n",
> -                           retval);
> -               goto get_stats_fail;
> -       }
> -
> -       while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> -               lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> -               lio_process_ordered_list(lio_dev);
> -               rte_delay_ms(1);
> -       }
> -
> -       retval = resp->status;
> -       if (retval) {
> -               lio_dev_err(lio_dev, "failed to get port stats from firmware\n");
> -               goto get_stats_fail;
> -       }
> -
> -       lio_swap_8B_data((uint64_t *)(&resp->link_stats),
> -                        sizeof(struct octeon_link_stats) >> 3);
> -
> -       hw_stats = &resp->link_stats;
> -
> -       for (i = 0; i < LIO_NB_XSTATS; i++) {
> -               xstats[i].id = i;
> -               xstats[i].value =
> -                   *(uint64_t *)(((char *)hw_stats) +
> -                                       rte_lio_stats_strings[i].offset);
> -       }
> -
> -       lio_free_soft_command(sc);
> -
> -       return LIO_NB_XSTATS;
> -
> -get_stats_fail:
> -       lio_free_soft_command(sc);
> -
> -       return -1;
> -}
> -
> -static int
> -lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
> -                        struct rte_eth_xstat_name *xstats_names,
> -                        unsigned limit __rte_unused)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       unsigned int i;
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down\n",
> -                           lio_dev->port_id);
> -               return -EINVAL;
> -       }
> -
> -       if (xstats_names == NULL)
> -               return LIO_NB_XSTATS;
> -
> -       /* Note: limit checked in rte_eth_xstats_names() */
> -
> -       for (i = 0; i < LIO_NB_XSTATS; i++) {
> -               snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
> -                        "%s", rte_lio_stats_strings[i].name);
> -       }
> -
> -       return LIO_NB_XSTATS;
> -}
> -
> -/* Reset hw stats for the port */
> -static int
> -lio_dev_xstats_reset(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -       int ret;
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down\n",
> -                           lio_dev->port_id);
> -               return -EINVAL;
> -       }
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_CLEAR_STATS;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       ret = lio_send_ctrl_pkt(lio_dev, &ctrl_pkt);
> -       if (ret != 0) {
> -               lio_dev_err(lio_dev, "Failed to send clear stats command\n");
> -               return ret;
> -       }
> -
> -       ret = lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd);
> -       if (ret != 0) {
> -               lio_dev_err(lio_dev, "Clear stats command timed out\n");
> -               return ret;
> -       }
> -
> -       /* clear stored per queue stats */
> -       if (*eth_dev->dev_ops->stats_reset == NULL)
> -               return 0;
> -       return (*eth_dev->dev_ops->stats_reset)(eth_dev);
> -}
> -
> -/* Retrieve the device statistics (# packets in/out, # bytes in/out, etc.) */
> -static int
> -lio_dev_stats_get(struct rte_eth_dev *eth_dev,
> -                 struct rte_eth_stats *stats)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_droq_stats *oq_stats;
> -       struct lio_iq_stats *iq_stats;
> -       struct lio_instr_queue *txq;
> -       struct lio_droq *droq;
> -       int i, iq_no, oq_no;
> -       uint64_t bytes = 0;
> -       uint64_t pkts = 0;
> -       uint64_t drop = 0;
> -
> -       for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> -               iq_no = lio_dev->linfo.txpciq[i].s.q_no;
> -               txq = lio_dev->instr_queue[iq_no];
> -               if (txq != NULL) {
> -                       iq_stats = &txq->stats;
> -                       pkts += iq_stats->tx_done;
> -                       drop += iq_stats->tx_dropped;
> -                       bytes += iq_stats->tx_tot_bytes;
> -               }
> -       }
> -
> -       stats->opackets = pkts;
> -       stats->obytes = bytes;
> -       stats->oerrors = drop;
> -
> -       pkts = 0;
> -       drop = 0;
> -       bytes = 0;
> -
> -       for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> -               oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
> -               droq = lio_dev->droq[oq_no];
> -               if (droq != NULL) {
> -                       oq_stats = &droq->stats;
> -                       pkts += oq_stats->rx_pkts_received;
> -                       drop += (oq_stats->rx_dropped +
> -                                       oq_stats->dropped_toomany +
> -                                       oq_stats->dropped_nomem);
> -                       bytes += oq_stats->rx_bytes_received;
> -               }
> -       }
> -       stats->ibytes = bytes;
> -       stats->ipackets = pkts;
> -       stats->ierrors = drop;
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_stats_reset(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_droq_stats *oq_stats;
> -       struct lio_iq_stats *iq_stats;
> -       struct lio_instr_queue *txq;
> -       struct lio_droq *droq;
> -       int i, iq_no, oq_no;
> -
> -       for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> -               iq_no = lio_dev->linfo.txpciq[i].s.q_no;
> -               txq = lio_dev->instr_queue[iq_no];
> -               if (txq != NULL) {
> -                       iq_stats = &txq->stats;
> -                       memset(iq_stats, 0, sizeof(struct lio_iq_stats));
> -               }
> -       }
> -
> -       for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> -               oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
> -               droq = lio_dev->droq[oq_no];
> -               if (droq != NULL) {
> -                       oq_stats = &droq->stats;
> -                       memset(oq_stats, 0, sizeof(struct lio_droq_stats));
> -               }
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_info_get(struct rte_eth_dev *eth_dev,
> -                struct rte_eth_dev_info *devinfo)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> -
> -       switch (pci_dev->id.subsystem_device_id) {
> -       /* CN23xx 10G cards */
> -       case PCI_SUBSYS_DEV_ID_CN2350_210:
> -       case PCI_SUBSYS_DEV_ID_CN2360_210:
> -       case PCI_SUBSYS_DEV_ID_CN2350_210SVPN3:
> -       case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
> -       case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
> -       case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
> -               devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
> -               break;
> -       /* CN23xx 25G cards */
> -       case PCI_SUBSYS_DEV_ID_CN2350_225:
> -       case PCI_SUBSYS_DEV_ID_CN2360_225:
> -               devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
> -               break;
> -       default:
> -               devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
> -               lio_dev_err(lio_dev,
> -                           "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
> -               return -EINVAL;
> -       }
> -
> -       devinfo->max_rx_queues = lio_dev->max_rx_queues;
> -       devinfo->max_tx_queues = lio_dev->max_tx_queues;
> -
> -       devinfo->min_rx_bufsize = LIO_MIN_RX_BUF_SIZE;
> -       devinfo->max_rx_pktlen = LIO_MAX_RX_PKTLEN;
> -
> -       devinfo->max_mac_addrs = 1;
> -
> -       devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM               |
> -                                   RTE_ETH_RX_OFFLOAD_UDP_CKSUM                |
> -                                   RTE_ETH_RX_OFFLOAD_TCP_CKSUM                |
> -                                   RTE_ETH_RX_OFFLOAD_VLAN_STRIP               |
> -                                   RTE_ETH_RX_OFFLOAD_RSS_HASH);
> -       devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM               |
> -                                   RTE_ETH_TX_OFFLOAD_UDP_CKSUM                |
> -                                   RTE_ETH_TX_OFFLOAD_TCP_CKSUM                |
> -                                   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
> -
> -       devinfo->rx_desc_lim = lio_rx_desc_lim;
> -       devinfo->tx_desc_lim = lio_tx_desc_lim;
> -
> -       devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
> -       devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
> -       devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4                     |
> -                                          RTE_ETH_RSS_NONFRAG_IPV4_TCP |
> -                                          RTE_ETH_RSS_IPV6                     |
> -                                          RTE_ETH_RSS_NONFRAG_IPV6_TCP |
> -                                          RTE_ETH_RSS_IPV6_EX          |
> -                                          RTE_ETH_RSS_IPV6_TCP_EX);
> -       return 0;
> -}
> -
> -static int
> -lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down, can't set MTU\n",
> -                           lio_dev->port_id);
> -               return -EINVAL;
> -       }
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_MTU;
> -       ctrl_pkt.ncmd.s.param1 = mtu;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send command to change MTU\n");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "Command to change MTU timed out\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
> -                       struct rte_eth_rss_reta_entry64 *reta_conf,
> -                       uint16_t reta_size)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> -       struct lio_rss_set *rss_param;
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -       int i, j, index;
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down, can't update reta\n",
> -                           lio_dev->port_id);
> -               return -EINVAL;
> -       }
> -
> -       if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
> -               lio_dev_err(lio_dev,
> -                           "The size of hash lookup table configured (%d) doesn't match the number the hardware can support (%d)\n",
> -                           reta_size, LIO_RSS_MAX_TABLE_SZ);
> -               return -EINVAL;
> -       }
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
> -       ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       rss_param->param.flags = 0xF;
> -       rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
> -       rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
> -
> -       for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
> -               for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
> -                       if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
> -                               index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
> -                               rss_state->itable[index] = reta_conf[i].reta[j];
> -                       }
> -               }
> -       }
> -
> -       rss_state->itable_size = LIO_RSS_MAX_TABLE_SZ;
> -       memcpy(rss_param->itable, rss_state->itable, rss_state->itable_size);
> -
> -       lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to set rss hash\n");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "Set rss hash timed out\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
> -                      struct rte_eth_rss_reta_entry64 *reta_conf,
> -                      uint16_t reta_size)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> -       int i, num;
> -
> -       if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
> -               lio_dev_err(lio_dev,
> -                           "The size of hash lookup table configured (%d) doesn't match the number the hardware can support (%d)\n",
> -                           reta_size, LIO_RSS_MAX_TABLE_SZ);
> -               return -EINVAL;
> -       }
> -
> -       num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
> -
> -       for (i = 0; i < num; i++) {
> -               memcpy(reta_conf->reta,
> -                      &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
> -                      RTE_ETH_RETA_GROUP_SIZE);
> -               reta_conf++;
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
> -                         struct rte_eth_rss_conf *rss_conf)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> -       uint8_t *hash_key = NULL;
> -       uint64_t rss_hf = 0;
> -
> -       if (rss_state->hash_disable) {
> -               lio_dev_info(lio_dev, "RSS disabled in nic\n");
> -               rss_conf->rss_hf = 0;
> -               return 0;
> -       }
> -
> -       /* Get key value */
> -       hash_key = rss_conf->rss_key;
> -       if (hash_key != NULL)
> -               memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
> -
> -       if (rss_state->ip)
> -               rss_hf |= RTE_ETH_RSS_IPV4;
> -       if (rss_state->tcp_hash)
> -               rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
> -       if (rss_state->ipv6)
> -               rss_hf |= RTE_ETH_RSS_IPV6;
> -       if (rss_state->ipv6_tcp_hash)
> -               rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
> -       if (rss_state->ipv6_ex)
> -               rss_hf |= RTE_ETH_RSS_IPV6_EX;
> -       if (rss_state->ipv6_tcp_ex_hash)
> -               rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
> -
> -       rss_conf->rss_hf = rss_hf;
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
> -                       struct rte_eth_rss_conf *rss_conf)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> -       struct lio_rss_set *rss_param;
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down, can't update hash\n",
> -                           lio_dev->port_id);
> -               return -EINVAL;
> -       }
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
> -       ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       rss_param->param.flags = 0xF;
> -
> -       if (rss_conf->rss_key) {
> -               rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_KEY_UNCHANGED;
> -               rss_state->hash_key_size = LIO_RSS_MAX_KEY_SZ;
> -               rss_param->param.hashkeysize = LIO_RSS_MAX_KEY_SZ;
> -               memcpy(rss_state->hash_key, rss_conf->rss_key,
> -                      rss_state->hash_key_size);
> -               memcpy(rss_param->key, rss_state->hash_key,
> -                      rss_state->hash_key_size);
> -       }
> -
> -       if ((rss_conf->rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
> -               /* Can't disable rss through hash flags,
> -                * if it is enabled by default during init
> -                */
> -               if (!rss_state->hash_disable)
> -                       return -EINVAL;
> -
> -               /* This is for --disable-rss during testpmd launch */
> -               rss_param->param.flags |= LIO_RSS_PARAM_DISABLE_RSS;
> -       } else {
> -               uint32_t hashinfo = 0;
> -
> -               /* Can't enable rss if disabled by default during init */
> -               if (rss_state->hash_disable)
> -                       return -EINVAL;
> -
> -               if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
> -                       hashinfo |= LIO_RSS_HASH_IPV4;
> -                       rss_state->ip = 1;
> -               } else {
> -                       rss_state->ip = 0;
> -               }
> -
> -               if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
> -                       hashinfo |= LIO_RSS_HASH_TCP_IPV4;
> -                       rss_state->tcp_hash = 1;
> -               } else {
> -                       rss_state->tcp_hash = 0;
> -               }
> -
> -               if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
> -                       hashinfo |= LIO_RSS_HASH_IPV6;
> -                       rss_state->ipv6 = 1;
> -               } else {
> -                       rss_state->ipv6 = 0;
> -               }
> -
> -               if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
> -                       hashinfo |= LIO_RSS_HASH_TCP_IPV6;
> -                       rss_state->ipv6_tcp_hash = 1;
> -               } else {
> -                       rss_state->ipv6_tcp_hash = 0;
> -               }
> -
> -               if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
> -                       hashinfo |= LIO_RSS_HASH_IPV6_EX;
> -                       rss_state->ipv6_ex = 1;
> -               } else {
> -                       rss_state->ipv6_ex = 0;
> -               }
> -
> -               if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
> -                       hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
> -                       rss_state->ipv6_tcp_ex_hash = 1;
> -               } else {
> -                       rss_state->ipv6_tcp_ex_hash = 0;
> -               }
> -
> -               rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_INFO_UNCHANGED;
> -               rss_param->param.hashinfo = hashinfo;
> -       }
> -
> -       lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to set rss hash\n");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "Set rss hash timed out\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -/**
> - * Add vxlan dest udp port for an interface.
> - *
> - * @param eth_dev
> - *  Pointer to the structure rte_eth_dev
> - * @param udp_tnl
> - *  udp tunnel conf
> - *
> - * @return
> - *  On success return 0
> - *  On failure return -1
> - */
> -static int
> -lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
> -                      struct rte_eth_udp_tunnel *udp_tnl)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       if (udp_tnl == NULL)
> -               return -EINVAL;
> -
> -       if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
> -               lio_dev_err(lio_dev, "Unsupported tunnel type\n");
> -               return -1;
> -       }
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
> -       ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
> -       ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_ADD;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_ADD command\n");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "VXLAN_PORT_ADD command timed out\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -/**
> - * Remove vxlan dest udp port for an interface.
> - *
> - * @param eth_dev
> - *  Pointer to the structure rte_eth_dev
> - * @param udp_tnl
> - *  udp tunnel conf
> - *
> - * @return
> - *  On success return 0
> - *  On failure return -1
> - */
> -static int
> -lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
> -                      struct rte_eth_udp_tunnel *udp_tnl)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       if (udp_tnl == NULL)
> -               return -EINVAL;
> -
> -       if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
> -               lio_dev_err(lio_dev, "Unsupported tunnel type\n");
> -               return -1;
> -       }
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
> -       ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
> -       ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_DEL;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_DEL command\n");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "VXLAN_PORT_DEL command timed out\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       if (lio_dev->linfo.vlan_is_admin_assigned)
> -               return -EPERM;
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = on ?
> -                       LIO_CMD_ADD_VLAN_FILTER : LIO_CMD_DEL_VLAN_FILTER;
> -       ctrl_pkt.ncmd.s.param1 = vlan_id;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to %s VLAN port\n",
> -                           on ? "add" : "remove");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "Command to %s VLAN port timed out\n",
> -                           on ? "add" : "remove");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -static uint64_t
> -lio_hweight64(uint64_t w)
> -{
> -       uint64_t res = w - ((w >> 1) & 0x5555555555555555ul);
> -
> -       res =
> -           (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
> -       res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
> -       res = res + (res >> 8);
> -       res = res + (res >> 16);
> -
> -       return (res + (res >> 32)) & 0x00000000000000FFul;
> -}
> -
> -static int
> -lio_dev_link_update(struct rte_eth_dev *eth_dev,
> -                   int wait_to_complete __rte_unused)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct rte_eth_link link;
> -
> -       /* Initialize */
> -       memset(&link, 0, sizeof(link));
> -       link.link_status = RTE_ETH_LINK_DOWN;
> -       link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> -       link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
> -       link.link_autoneg = RTE_ETH_LINK_AUTONEG;
> -
> -       /* Return what we found */
> -       if (lio_dev->linfo.link.s.link_up == 0) {
> -               /* Interface is down */
> -               return rte_eth_linkstatus_set(eth_dev, &link);
> -       }
> -
> -       link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
> -       link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> -       switch (lio_dev->linfo.link.s.speed) {
> -       case LIO_LINK_SPEED_10000:
> -               link.link_speed = RTE_ETH_SPEED_NUM_10G;
> -               break;
> -       case LIO_LINK_SPEED_25000:
> -               link.link_speed = RTE_ETH_SPEED_NUM_25G;
> -               break;
> -       default:
> -               link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> -               link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
> -       }
> -
> -       return rte_eth_linkstatus_set(eth_dev, &link);
> -}
> -
> -/**
> - * \brief Net device enable, disable allmulticast
> - * @param eth_dev Pointer to the structure rte_eth_dev
> - *
> - * @return
> - *  On success return 0
> - *  On failure return negative errno
> - */
> -static int
> -lio_change_dev_flag(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       /* Create a ctrl pkt command to be sent to core app. */
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_DEVFLAGS;
> -       ctrl_pkt.ncmd.s.param1 = lio_dev->ifflags;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send change flag message\n");
> -               return -EAGAIN;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "Change dev flag command timed out\n");
> -               return -ETIMEDOUT;
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
> -               lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> -                           LIO_VF_TRUST_MIN_VERSION);
> -               return -EAGAIN;
> -       }
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down, can't enable promiscuous\n",
> -                           lio_dev->port_id);
> -               return -EAGAIN;
> -       }
> -
> -       lio_dev->ifflags |= LIO_IFFLAG_PROMISC;
> -       return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
> -               lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> -                           LIO_VF_TRUST_MIN_VERSION);
> -               return -EAGAIN;
> -       }
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down, can't disable promiscuous\n",
> -                           lio_dev->port_id);
> -               return -EAGAIN;
> -       }
> -
> -       lio_dev->ifflags &= ~LIO_IFFLAG_PROMISC;
> -       return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down, can't enable multicast\n",
> -                           lio_dev->port_id);
> -               return -EAGAIN;
> -       }
> -
> -       lio_dev->ifflags |= LIO_IFFLAG_ALLMULTI;
> -       return lio_change_dev_flag(eth_dev);
> -}
> -
> -static int
> -lio_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_err(lio_dev, "Port %d down, can't disable multicast\n",
> -                           lio_dev->port_id);
> -               return -EAGAIN;
> -       }
> -
> -       lio_dev->ifflags &= ~LIO_IFFLAG_ALLMULTI;
> -       return lio_change_dev_flag(eth_dev);
> -}
> -
> -static void
> -lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> -       struct rte_eth_rss_reta_entry64 reta_conf[8];
> -       struct rte_eth_rss_conf rss_conf;
> -       uint16_t i;
> -
> -       /* Configure the RSS key and the RSS protocols used to compute
> -        * the RSS hash of input packets.
> -        */
> -       rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
> -       if ((rss_conf.rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
> -               rss_state->hash_disable = 1;
> -               lio_dev_rss_hash_update(eth_dev, &rss_conf);
> -               return;
> -       }
> -
> -       if (rss_conf.rss_key == NULL)
> -               rss_conf.rss_key = lio_rss_key; /* Default hash key */
> -
> -       lio_dev_rss_hash_update(eth_dev, &rss_conf);
> -
> -       memset(reta_conf, 0, sizeof(reta_conf));
> -       for (i = 0; i < LIO_RSS_MAX_TABLE_SZ; i++) {
> -               uint8_t q_idx, conf_idx, reta_idx;
> -
> -               q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
> -                                 i % eth_dev->data->nb_rx_queues : 0);
> -               conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
> -               reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
> -               reta_conf[conf_idx].reta[reta_idx] = q_idx;
> -               reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
> -       }
> -
> -       lio_dev_rss_reta_update(eth_dev, reta_conf, LIO_RSS_MAX_TABLE_SZ);
> -}
> -
> -static void
> -lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
> -       struct rte_eth_rss_conf rss_conf;
> -
> -       switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
> -       case RTE_ETH_MQ_RX_RSS:
> -               lio_dev_rss_configure(eth_dev);
> -               break;
> -       case RTE_ETH_MQ_RX_NONE:
> -       /* if mq_mode is none, disable rss mode. */
> -       default:
> -               memset(&rss_conf, 0, sizeof(rss_conf));
> -               rss_state->hash_disable = 1;
> -               lio_dev_rss_hash_update(eth_dev, &rss_conf);
> -       }
> -}
> -
> -/**
> - * Setup our receive queue/ringbuffer. This is the
> - * queue the Octeon uses to send us packets and
> - * responses. We are given a memory pool for our
> - * packet buffers that are used to populate the receive
> - * queue.
> - *
> - * @param eth_dev
> - *    Pointer to the structure rte_eth_dev
> - * @param q_no
> - *    Queue number
> - * @param num_rx_descs
> - *    Number of entries in the queue
> - * @param socket_id
> - *    Where to allocate memory
> - * @param rx_conf
> - *    Pointer to the structure rte_eth_rxconf
> - * @param mp
> - *    Pointer to the packet pool
> - *
> - * @return
> - *    - On success, return 0
> - *    - On failure, return -1
> - */
> -static int
> -lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> -                      uint16_t num_rx_descs, unsigned int socket_id,
> -                      const struct rte_eth_rxconf *rx_conf __rte_unused,
> -                      struct rte_mempool *mp)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct rte_pktmbuf_pool_private *mbp_priv;
> -       uint32_t fw_mapped_oq;
> -       uint16_t buf_size;
> -
> -       if (q_no >= lio_dev->nb_rx_queues) {
> -               lio_dev_err(lio_dev, "Invalid rx queue number %u\n", q_no);
> -               return -EINVAL;
> -       }
> -
> -       lio_dev_dbg(lio_dev, "setting up rx queue %u\n", q_no);
> -
> -       fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no;
> -
> -       /* Free previous allocation if any */
> -       if (eth_dev->data->rx_queues[q_no] != NULL) {
> -               lio_dev_rx_queue_release(eth_dev, q_no);
> -               eth_dev->data->rx_queues[q_no] = NULL;
> -       }
> -
> -       mbp_priv = rte_mempool_get_priv(mp);
> -       buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
> -
> -       if (lio_setup_droq(lio_dev, fw_mapped_oq, num_rx_descs, buf_size, mp,
> -                          socket_id)) {
> -               lio_dev_err(lio_dev, "droq allocation failed\n");
> -               return -1;
> -       }
> -
> -       eth_dev->data->rx_queues[q_no] = lio_dev->droq[fw_mapped_oq];
> -
> -       return 0;
> -}
> -
> -/**
> - * Release the receive queue/ringbuffer. Called by
> - * the upper layers.
> - *
> - * @param eth_dev
> - *    Pointer to Ethernet device structure.
> - * @param q_no
> - *    Receive queue index.
> - *
> - * @return
> - *    - nothing
> - */
> -void
> -lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
> -{
> -       struct lio_droq *droq = dev->data->rx_queues[q_no];
> -       int oq_no;
> -
> -       if (droq) {
> -               oq_no = droq->q_no;
> -               lio_delete_droq_queue(droq->lio_dev, oq_no);
> -       }
> -}
> -
> -/**
> - * Allocate and initialize SW ring. Initialize associated HW registers.
> - *
> - * @param eth_dev
> - *   Pointer to structure rte_eth_dev
> - *
> - * @param q_no
> - *   Queue number
> - *
> - * @param num_tx_descs
> - *   Number of ringbuffer descriptors
> - *
> - * @param socket_id
> - *   NUMA socket id, used for memory allocations
> - *
> - * @param tx_conf
> - *   Pointer to the structure rte_eth_txconf
> - *
> - * @return
> - *   - On success, return 0
> - *   - On failure, return -errno value
> - */
> -static int
> -lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
> -                      uint16_t num_tx_descs, unsigned int socket_id,
> -                      const struct rte_eth_txconf *tx_conf __rte_unused)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
> -       int retval;
> -
> -       if (q_no >= lio_dev->nb_tx_queues) {
> -               lio_dev_err(lio_dev, "Invalid tx queue number %u\n", q_no);
> -               return -EINVAL;
> -       }
> -
> -       lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no);
> -
> -       /* Free previous allocation if any */
> -       if (eth_dev->data->tx_queues[q_no] != NULL) {
> -               lio_dev_tx_queue_release(eth_dev, q_no);
> -               eth_dev->data->tx_queues[q_no] = NULL;
> -       }
> -
> -       retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no],
> -                             num_tx_descs, lio_dev, socket_id);
> -
> -       if (retval) {
> -               lio_dev_err(lio_dev, "Runtime IQ(TxQ) creation failed.\n");
> -               return retval;
> -       }
> -
> -       retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq,
> -                               lio_dev->instr_queue[fw_mapped_iq]->nb_desc,
> -                               socket_id);
> -
> -       if (retval) {
> -               lio_delete_instruction_queue(lio_dev, fw_mapped_iq);
> -               return retval;
> -       }
> -
> -       eth_dev->data->tx_queues[q_no] = lio_dev->instr_queue[fw_mapped_iq];
> -
> -       return 0;
> -}
> -
> -/**
> - * Release the transmit queue/ringbuffer. Called by
> - * the upper layers.
> - *
> - * @param eth_dev
> - *    Pointer to Ethernet device structure.
> - * @param q_no
> - *   Transmit queue index.
> - *
> - * @return
> - *    - nothing
> - */
> -void
> -lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
> -{
> -       struct lio_instr_queue *tq = dev->data->tx_queues[q_no];
> -       uint32_t fw_mapped_iq_no;
> -
> -
> -       if (tq) {
> -               /* Free sg_list */
> -               lio_delete_sglist(tq);
> -
> -               fw_mapped_iq_no = tq->txpciq.s.q_no;
> -               lio_delete_instruction_queue(tq->lio_dev, fw_mapped_iq_no);
> -       }
> -}
> -
> -/**
> - * API to check link state.
> - */
> -static void
> -lio_dev_get_link_status(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> -       struct lio_link_status_resp *resp;
> -       union octeon_link_status *ls;
> -       struct lio_soft_command *sc;
> -       uint32_t resp_size;
> -
> -       if (!lio_dev->intf_open)
> -               return;
> -
> -       resp_size = sizeof(struct lio_link_status_resp);
> -       sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> -       if (sc == NULL)
> -               return;
> -
> -       resp = (struct lio_link_status_resp *)sc->virtrptr;
> -       lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> -                                LIO_OPCODE_INFO, 0, 0, 0);
> -
> -       /* Setting wait time in seconds */
> -       sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> -       if (lio_send_soft_command(lio_dev, sc) == LIO_IQ_SEND_FAILED)
> -               goto get_status_fail;
> -
> -       while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> -               lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> -               rte_delay_ms(1);
> -       }
> -
> -       if (resp->status)
> -               goto get_status_fail;
> -
> -       ls = &resp->link_info.link;
> -
> -       lio_swap_8B_data((uint64_t *)ls, sizeof(union octeon_link_status) >> 3);
> -
> -       if (lio_dev->linfo.link.link_status64 != ls->link_status64) {
> -               if (ls->s.mtu < eth_dev->data->mtu) {
> -                       lio_dev_info(lio_dev, "Lowered VF MTU to %d as PF MTU dropped\n",
> -                                    ls->s.mtu);
> -                       eth_dev->data->mtu = ls->s.mtu;
> -               }
> -               lio_dev->linfo.link.link_status64 = ls->link_status64;
> -               lio_dev_link_update(eth_dev, 0);
> -       }
> -
> -       lio_free_soft_command(sc);
> -
> -       return;
> -
> -get_status_fail:
> -       lio_free_soft_command(sc);
> -}
> -
> -/* This function will be invoked every LSC_TIMEOUT ns (100ms)
> - * and will update link state if it changes.
> - */
> -static void
> -lio_sync_link_state_check(void *eth_dev)
> -{
> -       struct lio_device *lio_dev =
> -               (((struct rte_eth_dev *)eth_dev)->data->dev_private);
> -
> -       if (lio_dev->port_configured)
> -               lio_dev_get_link_status(eth_dev);
> -
> -       /* Schedule periodic link status check.
> -        * Stop the check if the interface is closed and start it again when opening.
> -        */
> -       if (lio_dev->intf_open)
> -               rte_eal_alarm_set(LIO_LSC_TIMEOUT, lio_sync_link_state_check,
> -                                 eth_dev);
> -}
> -
> -static int
> -lio_dev_start(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> -       int ret = 0;
> -
> -       lio_dev_info(lio_dev, "Starting port %d\n", eth_dev->data->port_id);
> -
> -       if (lio_dev->fn_list.enable_io_queues(lio_dev))
> -               return -1;
> -
> -       if (lio_send_rx_ctrl_cmd(eth_dev, 1))
> -               return -1;
> -
> -       /* Ready for link status updates */
> -       lio_dev->intf_open = 1;
> -       rte_mb();
> -
> -       /* Configure RSS if device configured with multiple RX queues. */
> -       lio_dev_mq_rx_configure(eth_dev);
> -
> -       /* Before update the link info,
> -        * must set linfo.link.link_status64 to 0.
> -        */
> -       lio_dev->linfo.link.link_status64 = 0;
> -
> -       /* start polling for lsc */
> -       ret = rte_eal_alarm_set(LIO_LSC_TIMEOUT,
> -                               lio_sync_link_state_check,
> -                               eth_dev);
> -       if (ret) {
> -               lio_dev_err(lio_dev,
> -                           "link state check handler creation failed\n");
> -               goto dev_lsc_handle_error;
> -       }
> -
> -       while ((lio_dev->linfo.link.link_status64 == 0) && (--timeout))
> -               rte_delay_ms(1);
> -
> -       if (lio_dev->linfo.link.link_status64 == 0) {
> -               ret = -1;
> -               goto dev_mtu_set_error;
> -       }
> -
> -       ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
> -       if (ret != 0)
> -               goto dev_mtu_set_error;
> -
> -       return 0;
> -
> -dev_mtu_set_error:
> -       rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
> -
> -dev_lsc_handle_error:
> -       lio_dev->intf_open = 0;
> -       lio_send_rx_ctrl_cmd(eth_dev, 0);
> -
> -       return ret;
> -}
> -
> -/* Stop device and disable input/output functions */
> -static int
> -lio_dev_stop(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       lio_dev_info(lio_dev, "Stopping port %d\n", eth_dev->data->port_id);
> -       eth_dev->data->dev_started = 0;
> -       lio_dev->intf_open = 0;
> -       rte_mb();
> -
> -       /* Cancel callback if still running. */
> -       rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
> -
> -       lio_send_rx_ctrl_cmd(eth_dev, 0);
> -
> -       lio_wait_for_instr_fetch(lio_dev);
> -
> -       /* Clear recorded link status */
> -       lio_dev->linfo.link.link_status64 = 0;
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_info(lio_dev, "Port is stopped, start the port first\n");
> -               return 0;
> -       }
> -
> -       if (lio_dev->linfo.link.s.link_up) {
> -               lio_dev_info(lio_dev, "Link is already UP\n");
> -               return 0;
> -       }
> -
> -       if (lio_send_rx_ctrl_cmd(eth_dev, 1)) {
> -               lio_dev_err(lio_dev, "Unable to set Link UP\n");
> -               return -1;
> -       }
> -
> -       lio_dev->linfo.link.s.link_up = 1;
> -       eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       if (!lio_dev->intf_open) {
> -               lio_dev_info(lio_dev, "Port is stopped, start the port first\n");
> -               return 0;
> -       }
> -
> -       if (!lio_dev->linfo.link.s.link_up) {
> -               lio_dev_info(lio_dev, "Link is already DOWN\n");
> -               return 0;
> -       }
> -
> -       lio_dev->linfo.link.s.link_up = 0;
> -       eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
> -
> -       if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
> -               lio_dev->linfo.link.s.link_up = 1;
> -               eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
> -               lio_dev_err(lio_dev, "Unable to set Link Down\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -/**
> - * Reset and stop the device. This occurs on the first
> - * call to this routine. Subsequent calls will simply
> - * return. NB: This will require the NIC to be rebooted.
> - *
> - * @param eth_dev
> - *    Pointer to the structure rte_eth_dev
> - *
> - * @return
> - *    - nothing
> - */
> -static int
> -lio_dev_close(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       int ret = 0;
> -
> -       if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> -               return 0;
> -
> -       lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id);
> -
> -       if (lio_dev->intf_open)
> -               ret = lio_dev_stop(eth_dev);
> -
> -       /* Reset ioq regs */
> -       lio_dev->fn_list.setup_device_regs(lio_dev);
> -
> -       if (lio_dev->pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
> -               cn23xx_vf_ask_pf_to_do_flr(lio_dev);
> -               rte_delay_ms(LIO_PCI_FLR_WAIT);
> -       }
> -
> -       /* lio_free_mbox */
> -       lio_dev->fn_list.free_mbox(lio_dev);
> -
> -       /* Free glist resources */
> -       rte_free(lio_dev->glist_head);
> -       rte_free(lio_dev->glist_lock);
> -       lio_dev->glist_head = NULL;
> -       lio_dev->glist_lock = NULL;
> -
> -       lio_dev->port_configured = 0;
> -
> -       /* Delete all queues */
> -       lio_dev_clear_queues(eth_dev);
> -
> -       return ret;
> -}
> -
> -/**
> - * Enable tunnel rx checksum verification from firmware.
> - */
> -static void
> -lio_enable_hw_tunnel_rx_checksum(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_RX_CSUM_CTL;
> -       ctrl_pkt.ncmd.s.param1 = LIO_CMD_RXCSUM_ENABLE;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send TNL_RX_CSUM command\n");
> -               return;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
> -               lio_dev_err(lio_dev, "TNL_RX_CSUM command timed out\n");
> -}
> -
> -/**
> - * Enable checksum calculation for inner packet in a tunnel.
> - */
> -static void
> -lio_enable_hw_tunnel_tx_checksum(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_TX_CSUM_CTL;
> -       ctrl_pkt.ncmd.s.param1 = LIO_CMD_TXCSUM_ENABLE;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send TNL_TX_CSUM command\n");
> -               return;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
> -               lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n");
> -}
> -
> -static int
> -lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq,
> -                           int num_rxq)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       struct lio_dev_ctrl_cmd ctrl_cmd;
> -       struct lio_ctrl_pkt ctrl_pkt;
> -
> -       if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) {
> -               lio_dev_err(lio_dev, "Require firmware version >= %s\n",
> -                           LIO_Q_RECONF_MIN_VERSION);
> -               return -ENOTSUP;
> -       }
> -
> -       /* flush added to prevent cmd failure
> -        * in case the queue is full
> -        */
> -       lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
> -
> -       memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
> -       memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
> -
> -       ctrl_cmd.eth_dev = eth_dev;
> -       ctrl_cmd.cond = 0;
> -
> -       ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL;
> -       ctrl_pkt.ncmd.s.param1 = num_txq;
> -       ctrl_pkt.ncmd.s.param2 = num_rxq;
> -       ctrl_pkt.ctrl_cmd = &ctrl_cmd;
> -
> -       if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
> -               lio_dev_err(lio_dev, "Failed to send queue count control command\n");
> -               return -1;
> -       }
> -
> -       if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
> -               lio_dev_err(lio_dev, "Queue count control command timed out\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       int ret;
> -
> -       if (lio_dev->nb_rx_queues != num_rxq ||
> -           lio_dev->nb_tx_queues != num_txq) {
> -               if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq))
> -                       return -1;
> -               lio_dev->nb_rx_queues = num_rxq;
> -               lio_dev->nb_tx_queues = num_txq;
> -       }
> -
> -       if (lio_dev->intf_open) {
> -               ret = lio_dev_stop(eth_dev);
> -               if (ret != 0)
> -                       return ret;
> -       }
> -
> -       /* Reset ioq registers */
> -       if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
> -               lio_dev_err(lio_dev, "Failed to configure device registers\n");
> -               return -1;
> -       }
> -
> -       return 0;
> -}
> -
> -static int
> -lio_dev_configure(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -       uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> -       int retval, num_iqueues, num_oqueues;
> -       uint8_t mac[RTE_ETHER_ADDR_LEN], i;
> -       struct lio_if_cfg_resp *resp;
> -       struct lio_soft_command *sc;
> -       union lio_if_cfg if_cfg;
> -       uint32_t resp_size;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
> -               eth_dev->data->dev_conf.rxmode.offloads |=
> -                       RTE_ETH_RX_OFFLOAD_RSS_HASH;
> -
> -       /* Inform firmware about change in number of queues to use.
> -        * Disable IO queues and reset registers for re-configuration.
> -        */
> -       if (lio_dev->port_configured)
> -               return lio_reconf_queues(eth_dev,
> -                                        eth_dev->data->nb_tx_queues,
> -                                        eth_dev->data->nb_rx_queues);
> -
> -       lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues;
> -       lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues;
> -
> -       /* Set max number of queues which can be re-configured. */
> -       lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues;
> -       lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues;
> -
> -       resp_size = sizeof(struct lio_if_cfg_resp);
> -       sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
> -       if (sc == NULL)
> -               return -ENOMEM;
> -
> -       resp = (struct lio_if_cfg_resp *)sc->virtrptr;
> -
> -       /* Firmware doesn't have capability to reconfigure the queues,
> -        * Claim all queues, and use as many as required
> -        */
> -       if_cfg.if_cfg64 = 0;
> -       if_cfg.s.num_iqueues = lio_dev->nb_tx_queues;
> -       if_cfg.s.num_oqueues = lio_dev->nb_rx_queues;
> -       if_cfg.s.base_queue = 0;
> -
> -       if_cfg.s.gmx_port_id = lio_dev->pf_num;
> -
> -       lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
> -                                LIO_OPCODE_IF_CFG, 0,
> -                                if_cfg.if_cfg64, 0);
> -
> -       /* Setting wait time in seconds */
> -       sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
> -
> -       retval = lio_send_soft_command(lio_dev, sc);
> -       if (retval == LIO_IQ_SEND_FAILED) {
> -               lio_dev_err(lio_dev, "iq/oq config failed status: %x\n",
> -                           retval);
> -               /* Soft instr is freed by driver in case of failure. */
> -               goto nic_config_fail;
> -       }
> -
> -       /* Sleep on a wait queue till the cond flag indicates that the
> -        * response arrived or timed out.
> -        */
> -       while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
> -               lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
> -               lio_process_ordered_list(lio_dev);
> -               rte_delay_ms(1);
> -       }
> -
> -       retval = resp->status;
> -       if (retval) {
> -               lio_dev_err(lio_dev, "iq/oq config failed\n");
> -               goto nic_config_fail;
> -       }
> -
> -       strlcpy(lio_dev->firmware_version,
> -               resp->cfg_info.lio_firmware_version, LIO_FW_VERSION_LENGTH);
> -
> -       lio_swap_8B_data((uint64_t *)(&resp->cfg_info),
> -                        sizeof(struct octeon_if_cfg_info) >> 3);
> -
> -       num_iqueues = lio_hweight64(resp->cfg_info.iqmask);
> -       num_oqueues = lio_hweight64(resp->cfg_info.oqmask);
> -
> -       if (!(num_iqueues) || !(num_oqueues)) {
> -               lio_dev_err(lio_dev,
> -                           "Got bad iqueues (%016lx) or oqueues (%016lx) from firmware.\n",
> -                           (unsigned long)resp->cfg_info.iqmask,
> -                           (unsigned long)resp->cfg_info.oqmask);
> -               goto nic_config_fail;
> -       }
> -
> -       lio_dev_dbg(lio_dev,
> -                   "interface %d, iqmask %016lx, oqmask %016lx, numiqueues %d, numoqueues %d\n",
> -                   eth_dev->data->port_id,
> -                   (unsigned long)resp->cfg_info.iqmask,
> -                   (unsigned long)resp->cfg_info.oqmask,
> -                   num_iqueues, num_oqueues);
> -
> -       lio_dev->linfo.num_rxpciq = num_oqueues;
> -       lio_dev->linfo.num_txpciq = num_iqueues;
> -
> -       for (i = 0; i < num_oqueues; i++) {
> -               lio_dev->linfo.rxpciq[i].rxpciq64 =
> -                   resp->cfg_info.linfo.rxpciq[i].rxpciq64;
> -               lio_dev_dbg(lio_dev, "index %d OQ %d\n",
> -                           i, lio_dev->linfo.rxpciq[i].s.q_no);
> -       }
> -
> -       for (i = 0; i < num_iqueues; i++) {
> -               lio_dev->linfo.txpciq[i].txpciq64 =
> -                   resp->cfg_info.linfo.txpciq[i].txpciq64;
> -               lio_dev_dbg(lio_dev, "index %d IQ %d\n",
> -                           i, lio_dev->linfo.txpciq[i].s.q_no);
> -       }
> -
> -       lio_dev->linfo.hw_addr = resp->cfg_info.linfo.hw_addr;
> -       lio_dev->linfo.gmxport = resp->cfg_info.linfo.gmxport;
> -       lio_dev->linfo.link.link_status64 =
> -                       resp->cfg_info.linfo.link.link_status64;
> -
> -       /* 64-bit swap required on LE machines */
> -       lio_swap_8B_data(&lio_dev->linfo.hw_addr, 1);
> -       for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
> -               mac[i] = *((uint8_t *)(((uint8_t *)&lio_dev->linfo.hw_addr) +
> -                                      2 + i));
> -
> -       /* Copy the permanent MAC address */
> -       rte_ether_addr_copy((struct rte_ether_addr *)mac,
> -                       &eth_dev->data->mac_addrs[0]);
> -
> -       /* enable firmware checksum support for tunnel packets */
> -       lio_enable_hw_tunnel_rx_checksum(eth_dev);
> -       lio_enable_hw_tunnel_tx_checksum(eth_dev);
> -
> -       lio_dev->glist_lock =
> -           rte_zmalloc(NULL, sizeof(*lio_dev->glist_lock) * num_iqueues, 0);
> -       if (lio_dev->glist_lock == NULL)
> -               return -ENOMEM;
> -
> -       lio_dev->glist_head =
> -               rte_zmalloc(NULL, sizeof(*lio_dev->glist_head) * num_iqueues,
> -                           0);
> -       if (lio_dev->glist_head == NULL) {
> -               rte_free(lio_dev->glist_lock);
> -               lio_dev->glist_lock = NULL;
> -               return -ENOMEM;
> -       }
> -
> -       lio_dev_link_update(eth_dev, 0);
> -
> -       lio_dev->port_configured = 1;
> -
> -       lio_free_soft_command(sc);
> -
> -       /* Reset ioq regs */
> -       lio_dev->fn_list.setup_device_regs(lio_dev);
> -
> -       /* Free iq_0 used during init */
> -       lio_free_instr_queue0(lio_dev);
> -
> -       return 0;
> -
> -nic_config_fail:
> -       lio_dev_err(lio_dev, "Failed retval %d\n", retval);
> -       lio_free_soft_command(sc);
> -       lio_free_instr_queue0(lio_dev);
> -
> -       return -ENODEV;
> -}
> -
> -/* Define our ethernet definitions */
> -static const struct eth_dev_ops liovf_eth_dev_ops = {
> -       .dev_configure          = lio_dev_configure,
> -       .dev_start              = lio_dev_start,
> -       .dev_stop               = lio_dev_stop,
> -       .dev_set_link_up        = lio_dev_set_link_up,
> -       .dev_set_link_down      = lio_dev_set_link_down,
> -       .dev_close              = lio_dev_close,
> -       .promiscuous_enable     = lio_dev_promiscuous_enable,
> -       .promiscuous_disable    = lio_dev_promiscuous_disable,
> -       .allmulticast_enable    = lio_dev_allmulticast_enable,
> -       .allmulticast_disable   = lio_dev_allmulticast_disable,
> -       .link_update            = lio_dev_link_update,
> -       .stats_get              = lio_dev_stats_get,
> -       .xstats_get             = lio_dev_xstats_get,
> -       .xstats_get_names       = lio_dev_xstats_get_names,
> -       .stats_reset            = lio_dev_stats_reset,
> -       .xstats_reset           = lio_dev_xstats_reset,
> -       .dev_infos_get          = lio_dev_info_get,
> -       .vlan_filter_set        = lio_dev_vlan_filter_set,
> -       .rx_queue_setup         = lio_dev_rx_queue_setup,
> -       .rx_queue_release       = lio_dev_rx_queue_release,
> -       .tx_queue_setup         = lio_dev_tx_queue_setup,
> -       .tx_queue_release       = lio_dev_tx_queue_release,
> -       .reta_update            = lio_dev_rss_reta_update,
> -       .reta_query             = lio_dev_rss_reta_query,
> -       .rss_hash_conf_get      = lio_dev_rss_hash_conf_get,
> -       .rss_hash_update        = lio_dev_rss_hash_update,
> -       .udp_tunnel_port_add    = lio_dev_udp_tunnel_add,
> -       .udp_tunnel_port_del    = lio_dev_udp_tunnel_del,
> -       .mtu_set                = lio_dev_mtu_set,
> -};
> -
> -static void
> -lio_check_pf_hs_response(void *lio_dev)
> -{
> -       struct lio_device *dev = lio_dev;
> -
> -       /* check till response arrives */
> -       if (dev->pfvf_hsword.coproc_tics_per_us)
> -               return;
> -
> -       cn23xx_vf_handle_mbox(dev);
> -
> -       rte_eal_alarm_set(1, lio_check_pf_hs_response, lio_dev);
> -}
> -
> -/**
> - * \brief Identify the LIO device and map the BAR address space
> - * @param lio_dev lio device
> - */
> -static int
> -lio_chip_specific_setup(struct lio_device *lio_dev)
> -{
> -       struct rte_pci_device *pdev = lio_dev->pci_dev;
> -       uint32_t dev_id = pdev->id.device_id;
> -       const char *s;
> -       int ret = 1;
> -
> -       switch (dev_id) {
> -       case LIO_CN23XX_VF_VID:
> -               lio_dev->chip_id = LIO_CN23XX_VF_VID;
> -               ret = cn23xx_vf_setup_device(lio_dev);
> -               s = "CN23XX VF";
> -               break;
> -       default:
> -               s = "?";
> -               lio_dev_err(lio_dev, "Unsupported Chip\n");
> -       }
> -
> -       if (!ret)
> -               lio_dev_info(lio_dev, "DEVICE : %s\n", s);
> -
> -       return ret;
> -}
> -
> -static int
> -lio_first_time_init(struct lio_device *lio_dev,
> -                   struct rte_pci_device *pdev)
> -{
> -       int dpdk_queues;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       /* set dpdk specific pci device pointer */
> -       lio_dev->pci_dev = pdev;
> -
> -       /* Identify the LIO type and set device ops */
> -       if (lio_chip_specific_setup(lio_dev)) {
> -               lio_dev_err(lio_dev, "Chip specific setup failed\n");
> -               return -1;
> -       }
> -
> -       /* Initialize soft command buffer pool */
> -       if (lio_setup_sc_buffer_pool(lio_dev)) {
> -               lio_dev_err(lio_dev, "sc buffer pool allocation failed\n");
> -               return -1;
> -       }
> -
> -       /* Initialize lists to manage the requests of different types that
> -        * arrive from applications for this lio device.
> -        */
> -       lio_setup_response_list(lio_dev);
> -
> -       if (lio_dev->fn_list.setup_mbox(lio_dev)) {
> -               lio_dev_err(lio_dev, "Mailbox setup failed\n");
> -               goto error;
> -       }
> -
> -       /* Check PF response */
> -       lio_check_pf_hs_response((void *)lio_dev);
> -
> -       /* Do handshake and exit if incompatible PF driver */
> -       if (cn23xx_pfvf_handshake(lio_dev))
> -               goto error;
> -
> -       /* Request and wait for device reset. */
> -       if (pdev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
> -               cn23xx_vf_ask_pf_to_do_flr(lio_dev);
> -               /* FLR wait time doubled as a precaution. */
> -               rte_delay_ms(LIO_PCI_FLR_WAIT * 2);
> -       }
> -
> -       if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
> -               lio_dev_err(lio_dev, "Failed to configure device registers\n");
> -               goto error;
> -       }
> -
> -       if (lio_setup_instr_queue0(lio_dev)) {
> -               lio_dev_err(lio_dev, "Failed to setup instruction queue 0\n");
> -               goto error;
> -       }
> -
> -       dpdk_queues = (int)lio_dev->sriov_info.rings_per_vf;
> -
> -       lio_dev->max_tx_queues = dpdk_queues;
> -       lio_dev->max_rx_queues = dpdk_queues;
> -
> -       /* Enable input and output queues for this device */
> -       if (lio_dev->fn_list.enable_io_queues(lio_dev))
> -               goto error;
> -
> -       return 0;
> -
> -error:
> -       lio_free_sc_buffer_pool(lio_dev);
> -       if (lio_dev->mbox[0])
> -               lio_dev->fn_list.free_mbox(lio_dev);
> -       if (lio_dev->instr_queue[0])
> -               lio_free_instr_queue0(lio_dev);
> -
> -       return -1;
> -}
> -
> -static int
> -lio_eth_dev_uninit(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> -               return 0;
> -
> -       /* lio_free_sc_buffer_pool */
> -       lio_free_sc_buffer_pool(lio_dev);
> -
> -       return 0;
> -}
> -
> -static int
> -lio_eth_dev_init(struct rte_eth_dev *eth_dev)
> -{
> -       struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
> -       struct lio_device *lio_dev = LIO_DEV(eth_dev);
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
> -       eth_dev->tx_pkt_burst = &lio_dev_xmit_pkts;
> -
> -       /* Primary does the initialization. */
> -       if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> -               return 0;
> -
> -       rte_eth_copy_pci_info(eth_dev, pdev);
> -
> -       if (pdev->mem_resource[0].addr) {
> -               lio_dev->hw_addr = pdev->mem_resource[0].addr;
> -       } else {
> -               PMD_INIT_LOG(ERR, "ERROR: Failed to map BAR0\n");
> -               return -ENODEV;
> -       }
> -
> -       lio_dev->eth_dev = eth_dev;
> -       /* set lio device print string */
> -       snprintf(lio_dev->dev_string, sizeof(lio_dev->dev_string),
> -                "%s[%02x:%02x.%x]", pdev->driver->driver.name,
> -                pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
> -
> -       lio_dev->port_id = eth_dev->data->port_id;
> -
> -       if (lio_first_time_init(lio_dev, pdev)) {
> -               lio_dev_err(lio_dev, "Device init failed\n");
> -               return -EINVAL;
> -       }
> -
> -       eth_dev->dev_ops = &liovf_eth_dev_ops;
> -       eth_dev->data->mac_addrs = rte_zmalloc("lio", RTE_ETHER_ADDR_LEN, 0);
> -       if (eth_dev->data->mac_addrs == NULL) {
> -               lio_dev_err(lio_dev,
> -                           "MAC addresses memory allocation failed\n");
> -               eth_dev->dev_ops = NULL;
> -               eth_dev->rx_pkt_burst = NULL;
> -               eth_dev->tx_pkt_burst = NULL;
> -               return -ENOMEM;
> -       }
> -
> -       rte_atomic64_set(&lio_dev->status, LIO_DEV_RUNNING);
> -       rte_wmb();
> -
> -       lio_dev->port_configured = 0;
> -       /* Always allow unicast packets */
> -       lio_dev->ifflags |= LIO_IFFLAG_UNICAST;
> -
> -       return 0;
> -}
> -
> -static int
> -lio_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> -                     struct rte_pci_device *pci_dev)
> -{
> -       return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct lio_device),
> -                       lio_eth_dev_init);
> -}
> -
> -static int
> -lio_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
> -{
> -       return rte_eth_dev_pci_generic_remove(pci_dev,
> -                                             lio_eth_dev_uninit);
> -}
> -
> -/* Set of PCI devices this driver supports */
> -static const struct rte_pci_id pci_id_liovf_map[] = {
> -       { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, LIO_CN23XX_VF_VID) },
> -       { .vendor_id = 0, /* sentinel */ }
> -};
> -
> -static struct rte_pci_driver rte_liovf_pmd = {
> -       .id_table       = pci_id_liovf_map,
> -       .drv_flags      = RTE_PCI_DRV_NEED_MAPPING,
> -       .probe          = lio_eth_dev_pci_probe,
> -       .remove         = lio_eth_dev_pci_remove,
> -};
> -
> -RTE_PMD_REGISTER_PCI(net_liovf, rte_liovf_pmd);
> -RTE_PMD_REGISTER_PCI_TABLE(net_liovf, pci_id_liovf_map);
> -RTE_PMD_REGISTER_KMOD_DEP(net_liovf, "* igb_uio | vfio-pci");
> -RTE_LOG_REGISTER_SUFFIX(lio_logtype_init, init, NOTICE);
> -RTE_LOG_REGISTER_SUFFIX(lio_logtype_driver, driver, NOTICE);
> diff --git a/drivers/net/liquidio/lio_ethdev.h b/drivers/net/liquidio/lio_ethdev.h
> deleted file mode 100644
> index ece2b03858..0000000000
> --- a/drivers/net/liquidio/lio_ethdev.h
> +++ /dev/null
> @@ -1,179 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_ETHDEV_H_
> -#define _LIO_ETHDEV_H_
> -
> -#include <stdint.h>
> -
> -#include "lio_struct.h"
> -
> -/* timeout to check link state updates from firmware in us */
> -#define LIO_LSC_TIMEOUT                100000 /* 100000us (100ms) */
> -#define LIO_MAX_CMD_TIMEOUT     10000 /* 10000ms (10s) */
> -
> -/* The max frame size with default MTU */
> -#define LIO_ETH_MAX_LEN (RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
> -
> -#define LIO_DEV(_eth_dev)              ((_eth_dev)->data->dev_private)
> -
> -/* LIO Response condition variable */
> -struct lio_dev_ctrl_cmd {
> -       struct rte_eth_dev *eth_dev;
> -       uint64_t cond;
> -};
> -
> -enum lio_bus_speed {
> -       LIO_LINK_SPEED_UNKNOWN  = 0,
> -       LIO_LINK_SPEED_10000    = 10000,
> -       LIO_LINK_SPEED_25000    = 25000
> -};
> -
> -struct octeon_if_cfg_info {
> -       uint64_t iqmask;        /** mask for IQs enabled for the port */
> -       uint64_t oqmask;        /** mask for OQs enabled for the port */
> -       struct octeon_link_info linfo; /** initial link information */
> -       char lio_firmware_version[LIO_FW_VERSION_LENGTH];
> -};
> -
> -/** Stats for each NIC port in RX direction. */
> -struct octeon_rx_stats {
> -       /* link-level stats */
> -       uint64_t total_rcvd;
> -       uint64_t bytes_rcvd;
> -       uint64_t total_bcst;
> -       uint64_t total_mcst;
> -       uint64_t runts;
> -       uint64_t ctl_rcvd;
> -       uint64_t fifo_err; /* Accounts for over/under-run of buffers */
> -       uint64_t dmac_drop;
> -       uint64_t fcs_err;
> -       uint64_t jabber_err;
> -       uint64_t l2_err;
> -       uint64_t frame_err;
> -
> -       /* firmware stats */
> -       uint64_t fw_total_rcvd;
> -       uint64_t fw_total_fwd;
> -       uint64_t fw_total_fwd_bytes;
> -       uint64_t fw_err_pko;
> -       uint64_t fw_err_link;
> -       uint64_t fw_err_drop;
> -       uint64_t fw_rx_vxlan;
> -       uint64_t fw_rx_vxlan_err;
> -
> -       /* LRO */
> -       uint64_t fw_lro_pkts;   /* Number of packets that are LROed */
> -       uint64_t fw_lro_octs;   /* Number of octets that are LROed */
> -       uint64_t fw_total_lro;  /* Number of LRO packets formed */
> -       uint64_t fw_lro_aborts; /* Number of times LRO of packet aborted */
> -       uint64_t fw_lro_aborts_port;
> -       uint64_t fw_lro_aborts_seq;
> -       uint64_t fw_lro_aborts_tsval;
> -       uint64_t fw_lro_aborts_timer;
> -       /* intrmod: packet forward rate */
> -       uint64_t fwd_rate;
> -};
> -
> -/** Stats for each NIC port in TX direction. */
> -struct octeon_tx_stats {
> -       /* link-level stats */
> -       uint64_t total_pkts_sent;
> -       uint64_t total_bytes_sent;
> -       uint64_t mcast_pkts_sent;
> -       uint64_t bcast_pkts_sent;
> -       uint64_t ctl_sent;
> -       uint64_t one_collision_sent;    /* Packets sent after one collision */
> -       /* Packets sent after multiple collision */
> -       uint64_t multi_collision_sent;
> -       /* Packets not sent due to max collisions */
> -       uint64_t max_collision_fail;
> -       /* Packets not sent due to max deferrals */
> -       uint64_t max_deferral_fail;
> -       /* Accounts for over/under-run of buffers */
> -       uint64_t fifo_err;
> -       uint64_t runts;
> -       uint64_t total_collisions; /* Total number of collisions detected */
> -
> -       /* firmware stats */
> -       uint64_t fw_total_sent;
> -       uint64_t fw_total_fwd;
> -       uint64_t fw_total_fwd_bytes;
> -       uint64_t fw_err_pko;
> -       uint64_t fw_err_link;
> -       uint64_t fw_err_drop;
> -       uint64_t fw_err_tso;
> -       uint64_t fw_tso;     /* number of tso requests */
> -       uint64_t fw_tso_fwd; /* number of packets segmented in tso */
> -       uint64_t fw_tx_vxlan;
> -};
> -
> -struct octeon_link_stats {
> -       struct octeon_rx_stats fromwire;
> -       struct octeon_tx_stats fromhost;
> -};
> -
> -union lio_if_cfg {
> -       uint64_t if_cfg64;
> -       struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               uint64_t base_queue : 16;
> -               uint64_t num_iqueues : 16;
> -               uint64_t num_oqueues : 16;
> -               uint64_t gmx_port_id : 8;
> -               uint64_t vf_id : 8;
> -#else
> -               uint64_t vf_id : 8;
> -               uint64_t gmx_port_id : 8;
> -               uint64_t num_oqueues : 16;
> -               uint64_t num_iqueues : 16;
> -               uint64_t base_queue : 16;
> -#endif
> -       } s;
> -};
> -
> -struct lio_if_cfg_resp {
> -       uint64_t rh;
> -       struct octeon_if_cfg_info cfg_info;
> -       uint64_t status;
> -};
> -
> -struct lio_link_stats_resp {
> -       uint64_t rh;
> -       struct octeon_link_stats link_stats;
> -       uint64_t status;
> -};
> -
> -struct lio_link_status_resp {
> -       uint64_t rh;
> -       struct octeon_link_info link_info;
> -       uint64_t status;
> -};
> -
> -struct lio_rss_set {
> -       struct param {
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -               uint64_t flags : 16;
> -               uint64_t hashinfo : 32;
> -               uint64_t itablesize : 16;
> -               uint64_t hashkeysize : 16;
> -               uint64_t reserved : 48;
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               uint64_t itablesize : 16;
> -               uint64_t hashinfo : 32;
> -               uint64_t flags : 16;
> -               uint64_t reserved : 48;
> -               uint64_t hashkeysize : 16;
> -#endif
> -       } param;
> -
> -       uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
> -       uint8_t key[LIO_RSS_MAX_KEY_SZ];
> -};
> -
> -void lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
> -
> -void lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
> -
> -#endif /* _LIO_ETHDEV_H_ */
> diff --git a/drivers/net/liquidio/lio_logs.h b/drivers/net/liquidio/lio_logs.h
> deleted file mode 100644
> index f227827081..0000000000
> --- a/drivers/net/liquidio/lio_logs.h
> +++ /dev/null
> @@ -1,58 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_LOGS_H_
> -#define _LIO_LOGS_H_
> -
> -extern int lio_logtype_driver;
> -#define lio_dev_printf(lio_dev, level, fmt, args...)           \
> -       rte_log(RTE_LOG_ ## level, lio_logtype_driver,          \
> -               "%s" fmt, (lio_dev)->dev_string, ##args)
> -
> -#define lio_dev_info(lio_dev, fmt, args...)                            \
> -       lio_dev_printf(lio_dev, INFO, "INFO: " fmt, ##args)
> -
> -#define lio_dev_err(lio_dev, fmt, args...)                             \
> -       lio_dev_printf(lio_dev, ERR, "ERROR: %s() " fmt, __func__, ##args)
> -
> -extern int lio_logtype_init;
> -#define PMD_INIT_LOG(level, fmt, args...) \
> -       rte_log(RTE_LOG_ ## level, lio_logtype_init, \
> -               fmt, ## args)
> -
> -/* Enable these through config options */
> -#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s() >>\n", __func__)
> -
> -#define lio_dev_dbg(lio_dev, fmt, args...)                             \
> -       lio_dev_printf(lio_dev, DEBUG, "DEBUG: %s() " fmt, __func__, ##args)
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_RX
> -#define PMD_RX_LOG(lio_dev, level, fmt, args...)                       \
> -       lio_dev_printf(lio_dev, level, "RX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_RX */
> -#define PMD_RX_LOG(lio_dev, level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_RX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_TX
> -#define PMD_TX_LOG(lio_dev, level, fmt, args...)                       \
> -       lio_dev_printf(lio_dev, level, "TX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_TX */
> -#define PMD_TX_LOG(lio_dev, level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_TX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_MBOX
> -#define PMD_MBOX_LOG(lio_dev, level, fmt, args...)                     \
> -       lio_dev_printf(lio_dev, level, "MBOX: %s() " fmt, __func__, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_MBOX */
> -#define PMD_MBOX_LOG(level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_MBOX */
> -
> -#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
> -#define PMD_REGS_LOG(lio_dev, fmt, args...)                            \
> -       lio_dev_printf(lio_dev, DEBUG, "REGS: " fmt, ##args)
> -#else /* !RTE_LIBRTE_LIO_DEBUG_REGS */
> -#define PMD_REGS_LOG(level, fmt, args...) do { } while (0)
> -#endif /* RTE_LIBRTE_LIO_DEBUG_REGS */
> -
> -#endif  /* _LIO_LOGS_H_ */
> diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
> deleted file mode 100644
> index e09798ddd7..0000000000
> --- a/drivers/net/liquidio/lio_rxtx.c
> +++ /dev/null
> @@ -1,1804 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#include <ethdev_driver.h>
> -#include <rte_cycles.h>
> -#include <rte_malloc.h>
> -
> -#include "lio_logs.h"
> -#include "lio_struct.h"
> -#include "lio_ethdev.h"
> -#include "lio_rxtx.h"
> -
> -#define LIO_MAX_SG 12
> -/* Flush iq if available tx_desc fall below LIO_FLUSH_WM */
> -#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2)
> -#define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL
> -
> -static void
> -lio_droq_compute_max_packet_bufs(struct lio_droq *droq)
> -{
> -       uint32_t count = 0;
> -
> -       do {
> -               count += droq->buffer_size;
> -       } while (count < LIO_MAX_RX_PKTLEN);
> -}
> -
> -static void
> -lio_droq_reset_indices(struct lio_droq *droq)
> -{
> -       droq->read_idx  = 0;
> -       droq->write_idx = 0;
> -       droq->refill_idx = 0;
> -       droq->refill_count = 0;
> -       rte_atomic64_set(&droq->pkts_pending, 0);
> -}
> -
> -static void
> -lio_droq_destroy_ring_buffers(struct lio_droq *droq)
> -{
> -       uint32_t i;
> -
> -       for (i = 0; i < droq->nb_desc; i++) {
> -               if (droq->recv_buf_list[i].buffer) {
> -                       rte_pktmbuf_free((struct rte_mbuf *)
> -                                        droq->recv_buf_list[i].buffer);
> -                       droq->recv_buf_list[i].buffer = NULL;
> -               }
> -       }
> -
> -       lio_droq_reset_indices(droq);
> -}
> -
> -static int
> -lio_droq_setup_ring_buffers(struct lio_device *lio_dev,
> -                           struct lio_droq *droq)
> -{
> -       struct lio_droq_desc *desc_ring = droq->desc_ring;
> -       uint32_t i;
> -       void *buf;
> -
> -       for (i = 0; i < droq->nb_desc; i++) {
> -               buf = rte_pktmbuf_alloc(droq->mpool);
> -               if (buf == NULL) {
> -                       lio_dev_err(lio_dev, "buffer alloc failed\n");
> -                       droq->stats.rx_alloc_failure++;
> -                       lio_droq_destroy_ring_buffers(droq);
> -                       return -ENOMEM;
> -               }
> -
> -               droq->recv_buf_list[i].buffer = buf;
> -               droq->info_list[i].length = 0;
> -
> -               /* map ring buffers into memory */
> -               desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
> -               desc_ring[i].buffer_ptr =
> -                       lio_map_ring(droq->recv_buf_list[i].buffer);
> -       }
> -
> -       lio_droq_reset_indices(droq);
> -
> -       lio_droq_compute_max_packet_bufs(droq);
> -
> -       return 0;
> -}
> -
> -static void
> -lio_dma_zone_free(struct lio_device *lio_dev, const struct rte_memzone *mz)
> -{
> -       const struct rte_memzone *mz_tmp;
> -       int ret = 0;
> -
> -       if (mz == NULL) {
> -               lio_dev_err(lio_dev, "Memzone NULL\n");
> -               return;
> -       }
> -
> -       mz_tmp = rte_memzone_lookup(mz->name);
> -       if (mz_tmp == NULL) {
> -               lio_dev_err(lio_dev, "Memzone %s Not Found\n", mz->name);
> -               return;
> -       }
> -
> -       ret = rte_memzone_free(mz);
> -       if (ret)
> -               lio_dev_err(lio_dev, "Memzone free Failed ret %d\n", ret);
> -}
> -
> -/**
> - *  Frees the space for descriptor ring for the droq.
> - *
> - *  @param lio_dev     - pointer to the lio device structure
> - *  @param q_no                - droq no.
> - */
> -static void
> -lio_delete_droq(struct lio_device *lio_dev, uint32_t q_no)
> -{
> -       struct lio_droq *droq = lio_dev->droq[q_no];
> -
> -       lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
> -
> -       lio_droq_destroy_ring_buffers(droq);
> -       rte_free(droq->recv_buf_list);
> -       droq->recv_buf_list = NULL;
> -       lio_dma_zone_free(lio_dev, droq->info_mz);
> -       lio_dma_zone_free(lio_dev, droq->desc_ring_mz);
> -
> -       memset(droq, 0, LIO_DROQ_SIZE);
> -}
> -
> -static void *
> -lio_alloc_info_buffer(struct lio_device *lio_dev,
> -                     struct lio_droq *droq, unsigned int socket_id)
> -{
> -       droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> -                                                "info_list", droq->q_no,
> -                                                (droq->nb_desc *
> -                                                       LIO_DROQ_INFO_SIZE),
> -                                                RTE_CACHE_LINE_SIZE,
> -                                                socket_id);
> -
> -       if (droq->info_mz == NULL)
> -               return NULL;
> -
> -       droq->info_list_dma = droq->info_mz->iova;
> -       droq->info_alloc_size = droq->info_mz->len;
> -       droq->info_base_addr = (size_t)droq->info_mz->addr;
> -
> -       return droq->info_mz->addr;
> -}
> -
> -/**
> - *  Allocates space for the descriptor ring for the droq and
> - *  sets the base addr, num desc etc in Octeon registers.
> - *
> - * @param lio_dev      - pointer to the lio device structure
> - * @param q_no         - droq no.
> - * @param app_ctx      - pointer to application context
> - * @return Success: 0  Failure: -1
> - */
> -static int
> -lio_init_droq(struct lio_device *lio_dev, uint32_t q_no,
> -             uint32_t num_descs, uint32_t desc_size,
> -             struct rte_mempool *mpool, unsigned int socket_id)
> -{
> -       uint32_t c_refill_threshold;
> -       uint32_t desc_ring_size;
> -       struct lio_droq *droq;
> -
> -       lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
> -
> -       droq = lio_dev->droq[q_no];
> -       droq->lio_dev = lio_dev;
> -       droq->q_no = q_no;
> -       droq->mpool = mpool;
> -
> -       c_refill_threshold = LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev);
> -
> -       droq->nb_desc = num_descs;
> -       droq->buffer_size = desc_size;
> -
> -       desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE;
> -       droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> -                                                     "droq", q_no,
> -                                                     desc_ring_size,
> -                                                     RTE_CACHE_LINE_SIZE,
> -                                                     socket_id);
> -
> -       if (droq->desc_ring_mz == NULL) {
> -               lio_dev_err(lio_dev,
> -                           "Output queue %d ring alloc failed\n", q_no);
> -               return -1;
> -       }
> -
> -       droq->desc_ring_dma = droq->desc_ring_mz->iova;
> -       droq->desc_ring = (struct lio_droq_desc *)droq->desc_ring_mz->addr;
> -
> -       lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
> -                   q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
> -       lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no,
> -                   droq->nb_desc);
> -
> -       droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id);
> -       if (droq->info_list == NULL) {
> -               lio_dev_err(lio_dev, "Cannot allocate memory for info list.\n");
> -               goto init_droq_fail;
> -       }
> -
> -       droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
> -                                                (droq->nb_desc *
> -                                                       LIO_DROQ_RECVBUF_SIZE),
> -                                                RTE_CACHE_LINE_SIZE,
> -                                                socket_id);
> -       if (droq->recv_buf_list == NULL) {
> -               lio_dev_err(lio_dev,
> -                           "Output queue recv buf list alloc failed\n");
> -               goto init_droq_fail;
> -       }
> -
> -       if (lio_droq_setup_ring_buffers(lio_dev, droq))
> -               goto init_droq_fail;
> -
> -       droq->refill_threshold = c_refill_threshold;
> -
> -       rte_spinlock_init(&droq->lock);
> -
> -       lio_dev->fn_list.setup_oq_regs(lio_dev, q_no);
> -
> -       lio_dev->io_qmask.oq |= (1ULL << q_no);
> -
> -       return 0;
> -
> -init_droq_fail:
> -       lio_delete_droq(lio_dev, q_no);
> -
> -       return -1;
> -}
> -
> -int
> -lio_setup_droq(struct lio_device *lio_dev, int oq_no, int num_descs,
> -              int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
> -{
> -       struct lio_droq *droq;
> -
> -       PMD_INIT_FUNC_TRACE();
> -
> -       /* Allocate the DS for the new droq. */
> -       droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq),
> -                                 RTE_CACHE_LINE_SIZE, socket_id);
> -       if (droq == NULL)
> -               return -ENOMEM;
> -
> -       lio_dev->droq[oq_no] = droq;
> -
> -       /* Initialize the Droq */
> -       if (lio_init_droq(lio_dev, oq_no, num_descs, desc_size, mpool,
> -                         socket_id)) {
> -               lio_dev_err(lio_dev, "Droq[%u] Initialization Failed\n", oq_no);
> -               rte_free(lio_dev->droq[oq_no]);
> -               lio_dev->droq[oq_no] = NULL;
> -               return -ENOMEM;
> -       }
> -
> -       lio_dev->num_oqs++;
> -
> -       lio_dev_dbg(lio_dev, "Total number of OQ: %d\n", lio_dev->num_oqs);
> -
> -       /* Send credit for octeon output queues. Credits are always
> -        * sent after the output queue is enabled.
> -        */
> -       rte_write32(lio_dev->droq[oq_no]->nb_desc,
> -                   lio_dev->droq[oq_no]->pkts_credit_reg);
> -       rte_wmb();
> -
> -       return 0;
> -}
> -
> -static inline uint32_t
> -lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
> -{
> -       uint32_t buf_cnt = 0;
> -
> -       while (total_len > (buf_size * buf_cnt))
> -               buf_cnt++;
> -
> -       return buf_cnt;
> -}
> -
> -/* If we were not able to refill all buffers, try to move around
> - * the buffers that were not dispatched.
> - */
> -static inline uint32_t
> -lio_droq_refill_pullup_descs(struct lio_droq *droq,
> -                            struct lio_droq_desc *desc_ring)
> -{
> -       uint32_t refill_index = droq->refill_idx;
> -       uint32_t desc_refilled = 0;
> -
> -       while (refill_index != droq->read_idx) {
> -               if (droq->recv_buf_list[refill_index].buffer) {
> -                       droq->recv_buf_list[droq->refill_idx].buffer =
> -                               droq->recv_buf_list[refill_index].buffer;
> -                       desc_ring[droq->refill_idx].buffer_ptr =
> -                               desc_ring[refill_index].buffer_ptr;
> -                       droq->recv_buf_list[refill_index].buffer = NULL;
> -                       desc_ring[refill_index].buffer_ptr = 0;
> -                       do {
> -                               droq->refill_idx = lio_incr_index(
> -                                                       droq->refill_idx, 1,
> -                                                       droq->nb_desc);
> -                               desc_refilled++;
> -                               droq->refill_count--;
> -                       } while (droq->recv_buf_list[droq->refill_idx].buffer);
> -               }
> -               refill_index = lio_incr_index(refill_index, 1,
> -                                             droq->nb_desc);
> -       }       /* while */
> -
> -       return desc_refilled;
> -}
> -
> -/* lio_droq_refill
> - *
> - * @param droq         - droq in which descriptors require new buffers.
> - *
> - * Description:
> - *  Called during normal DROQ processing in interrupt mode or by the poll
> - *  thread to refill the descriptors from which buffers were dispatched
> - *  to upper layers. Attempts to allocate new buffers. If that fails, moves
> - *  up buffers (that were not dispatched) to form a contiguous ring.
> - *
> - * Returns:
> - *  No of descriptors refilled.
> - *
> - * Locks:
> - * This routine is called with droq->lock held.
> - */
> -static uint32_t
> -lio_droq_refill(struct lio_droq *droq)
> -{
> -       struct lio_droq_desc *desc_ring;
> -       uint32_t desc_refilled = 0;
> -       void *buf = NULL;
> -
> -       desc_ring = droq->desc_ring;
> -
> -       while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
> -               /* If a valid buffer exists (happens if there is no dispatch),
> -                * reuse the buffer, else allocate.
> -                */
> -               if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
> -                       buf = rte_pktmbuf_alloc(droq->mpool);
> -                       /* If a buffer could not be allocated, no point in
> -                        * continuing
> -                        */
> -                       if (buf == NULL) {
> -                               droq->stats.rx_alloc_failure++;
> -                               break;
> -                       }
> -
> -                       droq->recv_buf_list[droq->refill_idx].buffer = buf;
> -               }
> -
> -               desc_ring[droq->refill_idx].buffer_ptr =
> -                   lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
> -               /* Reset any previous values in the length field. */
> -               droq->info_list[droq->refill_idx].length = 0;
> -
> -               droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
> -                                                 droq->nb_desc);
> -               desc_refilled++;
> -               droq->refill_count--;
> -       }
> -
> -       if (droq->refill_count)
> -               desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
> -
> -       /* If droq->refill_count is still non-zero here, the refill count
> -        * would not change in pass two. We only moved buffers to close the
> -        * gap in the ring, but we would still have the same no. of buffers
> -        * to refill.
> -        */
> -       return desc_refilled;
> -}
> -
> -static int
> -lio_droq_fast_process_packet(struct lio_device *lio_dev,
> -                            struct lio_droq *droq,
> -                            struct rte_mbuf **rx_pkts)
> -{
> -       struct rte_mbuf *nicbuf = NULL;
> -       struct lio_droq_info *info;
> -       uint32_t total_len = 0;
> -       int data_total_len = 0;
> -       uint32_t pkt_len = 0;
> -       union octeon_rh *rh;
> -       int data_pkts = 0;
> -
> -       info = &droq->info_list[droq->read_idx];
> -       lio_swap_8B_data((uint64_t *)info, 2);
> -
> -       if (!info->length)
> -               return -1;
> -
> -       /* Len of resp hdr is included in the received data len. */
> -       info->length -= OCTEON_RH_SIZE;
> -       rh = &info->rh;
> -
> -       total_len += (uint32_t)info->length;
> -
> -       if (lio_opcode_slow_path(rh)) {
> -               uint32_t buf_cnt;
> -
> -               buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
> -                                               (uint32_t)info->length);
> -               droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
> -                                               droq->nb_desc);
> -               droq->refill_count += buf_cnt;
> -       } else {
> -               if (info->length <= droq->buffer_size) {
> -                       if (rh->r_dh.has_hash)
> -                               pkt_len = (uint32_t)(info->length - 8);
> -                       else
> -                               pkt_len = (uint32_t)info->length;
> -
> -                       nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
> -                       droq->recv_buf_list[droq->read_idx].buffer = NULL;
> -                       droq->read_idx = lio_incr_index(
> -                                               droq->read_idx, 1,
> -                                               droq->nb_desc);
> -                       droq->refill_count++;
> -
> -                       if (likely(nicbuf != NULL)) {
> -                               /* We don't have a way to pass flags yet */
> -                               nicbuf->ol_flags = 0;
> -                               if (rh->r_dh.has_hash) {
> -                                       uint64_t *hash_ptr;
> -
> -                                       nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
> -                                       hash_ptr = rte_pktmbuf_mtod(nicbuf,
> -                                                                   uint64_t *);
> -                                       lio_swap_8B_data(hash_ptr, 1);
> -                                       nicbuf->hash.rss = (uint32_t)*hash_ptr;
> -                                       nicbuf->data_off += 8;
> -                               }
> -
> -                               nicbuf->pkt_len = pkt_len;
> -                               nicbuf->data_len = pkt_len;
> -                               nicbuf->port = lio_dev->port_id;
> -                               /* Store the mbuf */
> -                               rx_pkts[data_pkts++] = nicbuf;
> -                               data_total_len += pkt_len;
> -                       }
> -
> -                       /* Prefetch buffer pointers when on a cache line
> -                        * boundary
> -                        */
> -                       if ((droq->read_idx & 3) == 0) {
> -                               rte_prefetch0(
> -                                   &droq->recv_buf_list[droq->read_idx]);
> -                               rte_prefetch0(
> -                                   &droq->info_list[droq->read_idx]);
> -                       }
> -               } else {
> -                       struct rte_mbuf *first_buf = NULL;
> -                       struct rte_mbuf *last_buf = NULL;
> -
> -                       while (pkt_len < info->length) {
> -                               int cpy_len = 0;
> -
> -                               cpy_len = ((pkt_len + droq->buffer_size) >
> -                                               info->length)
> -                                               ? ((uint32_t)info->length -
> -                                                       pkt_len)
> -                                               : droq->buffer_size;
> -
> -                               nicbuf =
> -                                   droq->recv_buf_list[droq->read_idx].buffer;
> -                               droq->recv_buf_list[droq->read_idx].buffer =
> -                                   NULL;
> -
> -                               if (likely(nicbuf != NULL)) {
> -                                       /* Note the first seg */
> -                                       if (!pkt_len)
> -                                               first_buf = nicbuf;
> -
> -                                       nicbuf->port = lio_dev->port_id;
> -                                       /* We don't have a way to pass
> -                                        * flags yet
> -                                        */
> -                                       nicbuf->ol_flags = 0;
> -                                       if ((!pkt_len) && (rh->r_dh.has_hash)) {
> -                                               uint64_t *hash_ptr;
> -
> -                                               nicbuf->ol_flags |=
> -                                                   RTE_MBUF_F_RX_RSS_HASH;
> -                                               hash_ptr = rte_pktmbuf_mtod(
> -                                                   nicbuf, uint64_t *);
> -                                               lio_swap_8B_data(hash_ptr, 1);
> -                                               nicbuf->hash.rss =
> -                                                   (uint32_t)*hash_ptr;
> -                                               nicbuf->data_off += 8;
> -                                               nicbuf->pkt_len = cpy_len - 8;
> -                                               nicbuf->data_len = cpy_len - 8;
> -                                       } else {
> -                                               nicbuf->pkt_len = cpy_len;
> -                                               nicbuf->data_len = cpy_len;
> -                                       }
> -
> -                                       if (pkt_len)
> -                                               first_buf->nb_segs++;
> -
> -                                       if (last_buf)
> -                                               last_buf->next = nicbuf;
> -
> -                                       last_buf = nicbuf;
> -                               } else {
> -                                       PMD_RX_LOG(lio_dev, ERR, "no buf\n");
> -                               }
> -
> -                               pkt_len += cpy_len;
> -                               droq->read_idx = lio_incr_index(
> -                                                       droq->read_idx,
> -                                                       1, droq->nb_desc);
> -                               droq->refill_count++;
> -
> -                               /* Prefetch buffer pointers when on a
> -                                * cache line boundary
> -                                */
> -                               if ((droq->read_idx & 3) == 0) {
> -                                       rte_prefetch0(&droq->recv_buf_list
> -                                                             [droq->read_idx]);
> -
> -                                       rte_prefetch0(
> -                                           &droq->info_list[droq->read_idx]);
> -                               }
> -                       }
> -                       rx_pkts[data_pkts++] = first_buf;
> -                       if (rh->r_dh.has_hash)
> -                               data_total_len += (pkt_len - 8);
> -                       else
> -                               data_total_len += pkt_len;
> -               }
> -
> -               /* Inform upper layer about packet checksum verification */
> -               struct rte_mbuf *m = rx_pkts[data_pkts - 1];
> -
> -               if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
> -                       m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
> -
> -               if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
> -                       m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
> -       }
> -
> -       if (droq->refill_count >= droq->refill_threshold) {
> -               int desc_refilled = lio_droq_refill(droq);
> -
> -               /* Flush the droq descriptor data to memory to be sure
> -                * that when we update the credits the data in memory is
> -                * accurate.
> -                */
> -               rte_wmb();
> -               rte_write32(desc_refilled, droq->pkts_credit_reg);
> -               /* make sure mmio write completes */
> -               rte_wmb();
> -       }
> -
> -       info->length = 0;
> -       info->rh.rh64 = 0;
> -
> -       droq->stats.pkts_received++;
> -       droq->stats.rx_pkts_received += data_pkts;
> -       droq->stats.rx_bytes_received += data_total_len;
> -       droq->stats.bytes_received += total_len;
> -
> -       return data_pkts;
> -}
> -
> -static uint32_t
> -lio_droq_fast_process_packets(struct lio_device *lio_dev,
> -                             struct lio_droq *droq,
> -                             struct rte_mbuf **rx_pkts,
> -                             uint32_t pkts_to_process)
> -{
> -       int ret, data_pkts = 0;
> -       uint32_t pkt;
> -
> -       for (pkt = 0; pkt < pkts_to_process; pkt++) {
> -               ret = lio_droq_fast_process_packet(lio_dev, droq,
> -                                                  &rx_pkts[data_pkts]);
> -               if (ret < 0) {
> -                       lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
> -                                   lio_dev->port_id, droq->q_no,
> -                                   droq->read_idx, pkts_to_process);
> -                       break;
> -               }
> -               data_pkts += ret;
> -       }
> -
> -       rte_atomic64_sub(&droq->pkts_pending, pkt);
> -
> -       return data_pkts;
> -}
> -
> -static inline uint32_t
> -lio_droq_check_hw_for_pkts(struct lio_droq *droq)
> -{
> -       uint32_t last_count;
> -       uint32_t pkt_count;
> -
> -       pkt_count = rte_read32(droq->pkts_sent_reg);
> -
> -       last_count = pkt_count - droq->pkt_count;
> -       droq->pkt_count = pkt_count;
> -
> -       if (last_count)
> -               rte_atomic64_add(&droq->pkts_pending, last_count);
> -
> -       return last_count;
> -}
> -
> -uint16_t
> -lio_dev_recv_pkts(void *rx_queue,
> -                 struct rte_mbuf **rx_pkts,
> -                 uint16_t budget)
> -{
> -       struct lio_droq *droq = rx_queue;
> -       struct lio_device *lio_dev = droq->lio_dev;
> -       uint32_t pkts_processed = 0;
> -       uint32_t pkt_count = 0;
> -
> -       lio_droq_check_hw_for_pkts(droq);
> -
> -       pkt_count = rte_atomic64_read(&droq->pkts_pending);
> -       if (!pkt_count)
> -               return 0;
> -
> -       if (pkt_count > budget)
> -               pkt_count = budget;
> -
> -       /* Grab the lock */
> -       rte_spinlock_lock(&droq->lock);
> -       pkts_processed = lio_droq_fast_process_packets(lio_dev,
> -                                                      droq, rx_pkts,
> -                                                      pkt_count);
> -
> -       if (droq->pkt_count) {
> -               rte_write32(droq->pkt_count, droq->pkts_sent_reg);
> -               droq->pkt_count = 0;
> -       }
> -
> -       /* Release the spin lock */
> -       rte_spinlock_unlock(&droq->lock);
> -
> -       return pkts_processed;
> -}
> -
> -void
> -lio_delete_droq_queue(struct lio_device *lio_dev,
> -                     int oq_no)
> -{
> -       lio_delete_droq(lio_dev, oq_no);
> -       lio_dev->num_oqs--;
> -       rte_free(lio_dev->droq[oq_no]);
> -       lio_dev->droq[oq_no] = NULL;
> -}
> -
> -/**
> - *  lio_init_instr_queue()
> - *  @param lio_dev     - pointer to the lio device structure.
> - *  @param txpciq      - queue to be initialized.
> - *
> - *  Called at driver init time for each input queue. iq_conf has the
> - *  configuration parameters for the queue.
> - *
> - *  @return  Success: 0        Failure: -1
> - */
> -static int
> -lio_init_instr_queue(struct lio_device *lio_dev,
> -                    union octeon_txpciq txpciq,
> -                    uint32_t num_descs, unsigned int socket_id)
> -{
> -       uint32_t iq_no = (uint32_t)txpciq.s.q_no;
> -       struct lio_instr_queue *iq;
> -       uint32_t instr_type;
> -       uint32_t q_size;
> -
> -       instr_type = LIO_IQ_INSTR_TYPE(lio_dev);
> -
> -       q_size = instr_type * num_descs;
> -       iq = lio_dev->instr_queue[iq_no];
> -       iq->iq_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
> -                                            "instr_queue", iq_no, q_size,
> -                                            RTE_CACHE_LINE_SIZE,
> -                                            socket_id);
> -       if (iq->iq_mz == NULL) {
> -               lio_dev_err(lio_dev, "Cannot allocate memory for instr queue %d\n",
> -                           iq_no);
> -               return -1;
> -       }
> -
> -       iq->base_addr_dma = iq->iq_mz->iova;
> -       iq->base_addr = (uint8_t *)iq->iq_mz->addr;
> -
> -       iq->nb_desc = num_descs;
> -
> -       /* Initialize a list to hold requests that have been posted to Octeon
> -        * but have yet to be fetched by Octeon.
> -        */
> -       iq->request_list = rte_zmalloc_socket("request_list",
> -                                             sizeof(*iq->request_list) *
> -                                                       num_descs,
> -                                             RTE_CACHE_LINE_SIZE,
> -                                             socket_id);
> -       if (iq->request_list == NULL) {
> -               lio_dev_err(lio_dev, "Alloc failed for IQ[%d] nr free list\n",
> -                           iq_no);
> -               lio_dma_zone_free(lio_dev, iq->iq_mz);
> -               return -1;
> -       }
> -
> -       lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n",
> -                   iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
> -                   iq->nb_desc);
> -
> -       iq->lio_dev = lio_dev;
> -       iq->txpciq.txpciq64 = txpciq.txpciq64;
> -       iq->fill_cnt = 0;
> -       iq->host_write_index = 0;
> -       iq->lio_read_index = 0;
> -       iq->flush_index = 0;
> -
> -       rte_atomic64_set(&iq->instr_pending, 0);
> -
> -       /* Initialize the spinlock for this instruction queue */
> -       rte_spinlock_init(&iq->lock);
> -       rte_spinlock_init(&iq->post_lock);
> -
> -       rte_atomic64_clear(&iq->iq_flush_running);
> -
> -       lio_dev->io_qmask.iq |= (1ULL << iq_no);
> -
> -       /* Set the 32B/64B mode for each input queue */
> -       lio_dev->io_qmask.iq64B |= ((instr_type == 64) << iq_no);
> -       iq->iqcmd_64B = (instr_type == 64);
> -
> -       lio_dev->fn_list.setup_iq_regs(lio_dev, iq_no);
> -
> -       return 0;
> -}
> -
> -int
> -lio_setup_instr_queue0(struct lio_device *lio_dev)
> -{
> -       union octeon_txpciq txpciq;
> -       uint32_t num_descs = 0;
> -       uint32_t iq_no = 0;
> -
> -       num_descs = LIO_NUM_DEF_TX_DESCS_CFG(lio_dev);
> -
> -       lio_dev->num_iqs = 0;
> -
> -       lio_dev->instr_queue[0] = rte_zmalloc(NULL,
> -                                       sizeof(struct lio_instr_queue), 0);
> -       if (lio_dev->instr_queue[0] == NULL)
> -               return -ENOMEM;
> -
> -       lio_dev->instr_queue[0]->q_index = 0;
> -       lio_dev->instr_queue[0]->app_ctx = (void *)(size_t)0;
> -       txpciq.txpciq64 = 0;
> -       txpciq.s.q_no = iq_no;
> -       txpciq.s.pkind = lio_dev->pfvf_hsword.pkind;
> -       txpciq.s.use_qpg = 0;
> -       txpciq.s.qpg = 0;
> -       if (lio_init_instr_queue(lio_dev, txpciq, num_descs, SOCKET_ID_ANY)) {
> -               rte_free(lio_dev->instr_queue[0]);
> -               lio_dev->instr_queue[0] = NULL;
> -               return -1;
> -       }
> -
> -       lio_dev->num_iqs++;
> -
> -       return 0;
> -}
> -
> -/**
> - *  lio_delete_instr_queue()
> - *  @param lio_dev     - pointer to the lio device structure.
> - *  @param iq_no       - queue to be deleted.
> - *
> - *  Called at driver unload time for each input queue. Deletes all
> - *  allocated resources for the input queue.
> - */
> -static void
> -lio_delete_instr_queue(struct lio_device *lio_dev, uint32_t iq_no)
> -{
> -       struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> -
> -       rte_free(iq->request_list);
> -       iq->request_list = NULL;
> -       lio_dma_zone_free(lio_dev, iq->iq_mz);
> -}
> -
> -void
> -lio_free_instr_queue0(struct lio_device *lio_dev)
> -{
> -       lio_delete_instr_queue(lio_dev, 0);
> -       rte_free(lio_dev->instr_queue[0]);
> -       lio_dev->instr_queue[0] = NULL;
> -       lio_dev->num_iqs--;
> -}
> -
> -/* Return 0 on success, -1 on failure */
> -int
> -lio_setup_iq(struct lio_device *lio_dev, int q_index,
> -            union octeon_txpciq txpciq, uint32_t num_descs, void *app_ctx,
> -            unsigned int socket_id)
> -{
> -       uint32_t iq_no = (uint32_t)txpciq.s.q_no;
> -
> -       lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue",
> -                                               sizeof(struct lio_instr_queue),
> -                                               RTE_CACHE_LINE_SIZE, socket_id);
> -       if (lio_dev->instr_queue[iq_no] == NULL)
> -               return -1;
> -
> -       lio_dev->instr_queue[iq_no]->q_index = q_index;
> -       lio_dev->instr_queue[iq_no]->app_ctx = app_ctx;
> -
> -       if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) {
> -               rte_free(lio_dev->instr_queue[iq_no]);
> -               lio_dev->instr_queue[iq_no] = NULL;
> -               return -1;
> -       }
> -
> -       lio_dev->num_iqs++;
> -
> -       return 0;
> -}
> -
> -int
> -lio_wait_for_instr_fetch(struct lio_device *lio_dev)
> -{
> -       int pending, instr_cnt;
> -       int i, retry = 1000;
> -
> -       do {
> -               instr_cnt = 0;
> -
> -               for (i = 0; i < LIO_MAX_INSTR_QUEUES(lio_dev); i++) {
> -                       if (!(lio_dev->io_qmask.iq & (1ULL << i)))
> -                               continue;
> -
> -                       if (lio_dev->instr_queue[i] == NULL)
> -                               break;
> -
> -                       pending = rte_atomic64_read(
> -                           &lio_dev->instr_queue[i]->instr_pending);
> -                       if (pending)
> -                               lio_flush_iq(lio_dev, lio_dev->instr_queue[i]);
> -
> -                       instr_cnt += pending;
> -               }
> -
> -               if (instr_cnt == 0)
> -                       break;
> -
> -               rte_delay_ms(1);
> -
> -       } while (retry-- && instr_cnt);
> -
> -       return instr_cnt;
> -}
> -
> -static inline void
> -lio_ring_doorbell(struct lio_device *lio_dev,
> -                 struct lio_instr_queue *iq)
> -{
> -       if (rte_atomic64_read(&lio_dev->status) == LIO_DEV_RUNNING) {
> -               rte_write32(iq->fill_cnt, iq->doorbell_reg);
> -               /* make sure doorbell write goes through */
> -               rte_wmb();
> -               iq->fill_cnt = 0;
> -       }
> -}
> -
> -static inline void
> -copy_cmd_into_iq(struct lio_instr_queue *iq, uint8_t *cmd)
> -{
> -       uint8_t *iqptr, cmdsize;
> -
> -       cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
> -       iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
> -
> -       rte_memcpy(iqptr, cmd, cmdsize);
> -}
> -
> -static inline struct lio_iq_post_status
> -post_command2(struct lio_instr_queue *iq, uint8_t *cmd)
> -{
> -       struct lio_iq_post_status st;
> -
> -       st.status = LIO_IQ_SEND_OK;
> -
> -       /* This ensures that the read index does not wrap around to the same
> -        * position if the queue gets full before Octeon could fetch any instr.
> -        */
> -       if (rte_atomic64_read(&iq->instr_pending) >=
> -                       (int32_t)(iq->nb_desc - 1)) {
> -               st.status = LIO_IQ_SEND_FAILED;
> -               st.index = -1;
> -               return st;
> -       }
> -
> -       if (rte_atomic64_read(&iq->instr_pending) >=
> -                       (int32_t)(iq->nb_desc - 2))
> -               st.status = LIO_IQ_SEND_STOP;
> -
> -       copy_cmd_into_iq(iq, cmd);
> -
> -       /* "index" is returned, host_write_index is modified. */
> -       st.index = iq->host_write_index;
> -       iq->host_write_index = lio_incr_index(iq->host_write_index, 1,
> -                                             iq->nb_desc);
> -       iq->fill_cnt++;
> -
> -       /* Flush the command into memory. We need to be sure the data is in
> -        * memory before indicating that the instruction is pending.
> -        */
> -       rte_wmb();
> -
> -       rte_atomic64_inc(&iq->instr_pending);
> -
> -       return st;
> -}
> -
> -static inline void
> -lio_add_to_request_list(struct lio_instr_queue *iq,
> -                       int idx, void *buf, int reqtype)
> -{
> -       iq->request_list[idx].buf = buf;
> -       iq->request_list[idx].reqtype = reqtype;
> -}
> -
> -static inline void
> -lio_free_netsgbuf(void *buf)
> -{
> -       struct lio_buf_free_info *finfo = buf;
> -       struct lio_device *lio_dev = finfo->lio_dev;
> -       struct rte_mbuf *m = finfo->mbuf;
> -       struct lio_gather *g = finfo->g;
> -       uint8_t iq = finfo->iq_no;
> -
> -       /* This will take care of multiple segments also */
> -       rte_pktmbuf_free(m);
> -
> -       rte_spinlock_lock(&lio_dev->glist_lock[iq]);
> -       STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq], &g->list, entries);
> -       rte_spinlock_unlock(&lio_dev->glist_lock[iq]);
> -       rte_free(finfo);
> -}
> -
> -/* Can only run in process context */
> -static int
> -lio_process_iq_request_list(struct lio_device *lio_dev,
> -                           struct lio_instr_queue *iq)
> -{
> -       struct octeon_instr_irh *irh = NULL;
> -       uint32_t old = iq->flush_index;
> -       struct lio_soft_command *sc;
> -       uint32_t inst_count = 0;
> -       int reqtype;
> -       void *buf;
> -
> -       while (old != iq->lio_read_index) {
> -               reqtype = iq->request_list[old].reqtype;
> -               buf     = iq->request_list[old].buf;
> -
> -               if (reqtype == LIO_REQTYPE_NONE)
> -                       goto skip_this;
> -
> -               switch (reqtype) {
> -               case LIO_REQTYPE_NORESP_NET:
> -                       rte_pktmbuf_free((struct rte_mbuf *)buf);
> -                       break;
> -               case LIO_REQTYPE_NORESP_NET_SG:
> -                       lio_free_netsgbuf(buf);
> -                       break;
> -               case LIO_REQTYPE_SOFT_COMMAND:
> -                       sc = buf;
> -                       irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> -                       if (irh->rflag) {
> -                               /* We're expecting a response from Octeon.
> -                                * It's up to lio_process_ordered_list() to
> -                                * process sc. Add sc to the ordered soft
> -                                * command response list because we expect
> -                                * a response from Octeon.
> -                                */
> -                               rte_spinlock_lock(&lio_dev->response_list.lock);
> -                               rte_atomic64_inc(
> -                                   &lio_dev->response_list.pending_req_count);
> -                               STAILQ_INSERT_TAIL(
> -                                       &lio_dev->response_list.head,
> -                                       &sc->node, entries);
> -                               rte_spinlock_unlock(
> -                                               &lio_dev->response_list.lock);
> -                       } else {
> -                               if (sc->callback) {
> -                                       /* This callback must not sleep */
> -                                       sc->callback(LIO_REQUEST_DONE,
> -                                                    sc->callback_arg);
> -                               }
> -                       }
> -                       break;
> -               default:
> -                       lio_dev_err(lio_dev,
> -                                   "Unknown reqtype: %d buf: %p at idx %d\n",
> -                                   reqtype, buf, old);
> -               }
> -
> -               iq->request_list[old].buf = NULL;
> -               iq->request_list[old].reqtype = 0;
> -
> -skip_this:
> -               inst_count++;
> -               old = lio_incr_index(old, 1, iq->nb_desc);
> -       }
> -
> -       iq->flush_index = old;
> -
> -       return inst_count;
> -}
> -
> -static void
> -lio_update_read_index(struct lio_instr_queue *iq)
> -{
> -       uint32_t pkt_in_done = rte_read32(iq->inst_cnt_reg);
> -       uint32_t last_done;
> -
> -       last_done = pkt_in_done - iq->pkt_in_done;
> -       iq->pkt_in_done = pkt_in_done;
> -
> -       /* Add last_done and modulo with the IQ size to get new index */
> -       iq->lio_read_index = (iq->lio_read_index +
> -                       (uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) %
> -                       iq->nb_desc;
> -}
> -
> -int
> -lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq)
> -{
> -       uint32_t inst_processed = 0;
> -       int tx_done = 1;
> -
> -       if (rte_atomic64_test_and_set(&iq->iq_flush_running) == 0)
> -               return tx_done;
> -
> -       rte_spinlock_lock(&iq->lock);
> -
> -       lio_update_read_index(iq);
> -
> -       do {
> -               /* Process any outstanding IQ packets. */
> -               if (iq->flush_index == iq->lio_read_index)
> -                       break;
> -
> -               inst_processed = lio_process_iq_request_list(lio_dev, iq);
> -
> -               if (inst_processed) {
> -                       rte_atomic64_sub(&iq->instr_pending, inst_processed);
> -                       iq->stats.instr_processed += inst_processed;
> -               }
> -
> -               inst_processed = 0;
> -
> -       } while (1);
> -
> -       rte_spinlock_unlock(&iq->lock);
> -
> -       rte_atomic64_clear(&iq->iq_flush_running);
> -
> -       return tx_done;
> -}
> -
> -static int
> -lio_send_command(struct lio_device *lio_dev, uint32_t iq_no, void *cmd,
> -                void *buf, uint32_t datasize, uint32_t reqtype)
> -{
> -       struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> -       struct lio_iq_post_status st;
> -
> -       rte_spinlock_lock(&iq->post_lock);
> -
> -       st = post_command2(iq, cmd);
> -
> -       if (st.status != LIO_IQ_SEND_FAILED) {
> -               lio_add_to_request_list(iq, st.index, buf, reqtype);
> -               LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, bytes_sent,
> -                                             datasize);
> -               LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_posted, 1);
> -
> -               lio_ring_doorbell(lio_dev, iq);
> -       } else {
> -               LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_dropped, 1);
> -       }
> -
> -       rte_spinlock_unlock(&iq->post_lock);
> -
> -       return st.status;
> -}
> -
> -void
> -lio_prepare_soft_command(struct lio_device *lio_dev,
> -                        struct lio_soft_command *sc, uint8_t opcode,
> -                        uint8_t subcode, uint32_t irh_ossp, uint64_t ossp0,
> -                        uint64_t ossp1)
> -{
> -       struct octeon_instr_pki_ih3 *pki_ih3;
> -       struct octeon_instr_ih3 *ih3;
> -       struct octeon_instr_irh *irh;
> -       struct octeon_instr_rdp *rdp;
> -
> -       RTE_ASSERT(opcode <= 15);
> -       RTE_ASSERT(subcode <= 127);
> -
> -       ih3       = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
> -
> -       ih3->pkind = lio_dev->instr_queue[sc->iq_no]->txpciq.s.pkind;
> -
> -       pki_ih3 = (struct octeon_instr_pki_ih3 *)&sc->cmd.cmd3.pki_ih3;
> -
> -       pki_ih3->w      = 1;
> -       pki_ih3->raw    = 1;
> -       pki_ih3->utag   = 1;
> -       pki_ih3->uqpg   = lio_dev->instr_queue[sc->iq_no]->txpciq.s.use_qpg;
> -       pki_ih3->utt    = 1;
> -
> -       pki_ih3->tag    = LIO_CONTROL;
> -       pki_ih3->tagtype = OCTEON_ATOMIC_TAG;
> -       pki_ih3->qpg    = lio_dev->instr_queue[sc->iq_no]->txpciq.s.qpg;
> -       pki_ih3->pm     = 0x7;
> -       pki_ih3->sl     = 8;
> -
> -       if (sc->datasize)
> -               ih3->dlengsz = sc->datasize;
> -
> -       irh             = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> -       irh->opcode     = opcode;
> -       irh->subcode    = subcode;
> -
> -       /* opcode/subcode specific parameters (ossp) */
> -       irh->ossp = irh_ossp;
> -       sc->cmd.cmd3.ossp[0] = ossp0;
> -       sc->cmd.cmd3.ossp[1] = ossp1;
> -
> -       if (sc->rdatasize) {
> -               rdp = (struct octeon_instr_rdp *)&sc->cmd.cmd3.rdp;
> -               rdp->pcie_port = lio_dev->pcie_port;
> -               rdp->rlen      = sc->rdatasize;
> -               irh->rflag = 1;
> -               /* PKI IH3 */
> -               ih3->fsz    = OCTEON_SOFT_CMD_RESP_IH3;
> -       } else {
> -               irh->rflag = 0;
> -               /* PKI IH3 */
> -               ih3->fsz    = OCTEON_PCI_CMD_O3;
> -       }
> -}
> -
> -int
> -lio_send_soft_command(struct lio_device *lio_dev,
> -                     struct lio_soft_command *sc)
> -{
> -       struct octeon_instr_ih3 *ih3;
> -       struct octeon_instr_irh *irh;
> -       uint32_t len = 0;
> -
> -       ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
> -       if (ih3->dlengsz) {
> -               RTE_ASSERT(sc->dmadptr);
> -               sc->cmd.cmd3.dptr = sc->dmadptr;
> -       }
> -
> -       irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
> -       if (irh->rflag) {
> -               RTE_ASSERT(sc->dmarptr);
> -               RTE_ASSERT(sc->status_word != NULL);
> -               *sc->status_word = LIO_COMPLETION_WORD_INIT;
> -               sc->cmd.cmd3.rptr = sc->dmarptr;
> -       }
> -
> -       len = (uint32_t)ih3->dlengsz;
> -
> -       if (sc->wait_time)
> -               sc->timeout = lio_uptime + sc->wait_time;
> -
> -       return lio_send_command(lio_dev, sc->iq_no, &sc->cmd, sc, len,
> -                               LIO_REQTYPE_SOFT_COMMAND);
> -}
> -
> -int
> -lio_setup_sc_buffer_pool(struct lio_device *lio_dev)
> -{
> -       char sc_pool_name[RTE_MEMPOOL_NAMESIZE];
> -       uint16_t buf_size;
> -
> -       buf_size = LIO_SOFT_COMMAND_BUFFER_SIZE + RTE_PKTMBUF_HEADROOM;
> -       snprintf(sc_pool_name, sizeof(sc_pool_name),
> -                "lio_sc_pool_%u", lio_dev->port_id);
> -       lio_dev->sc_buf_pool = rte_pktmbuf_pool_create(sc_pool_name,
> -                                               LIO_MAX_SOFT_COMMAND_BUFFERS,
> -                                               0, 0, buf_size, SOCKET_ID_ANY);
> -       return 0;
> -}
> -
> -void
> -lio_free_sc_buffer_pool(struct lio_device *lio_dev)
> -{
> -       rte_mempool_free(lio_dev->sc_buf_pool);
> -}
> -
> -struct lio_soft_command *
> -lio_alloc_soft_command(struct lio_device *lio_dev, uint32_t datasize,
> -                      uint32_t rdatasize, uint32_t ctxsize)
> -{
> -       uint32_t offset = sizeof(struct lio_soft_command);
> -       struct lio_soft_command *sc;
> -       struct rte_mbuf *m;
> -       uint64_t dma_addr;
> -
> -       RTE_ASSERT((offset + datasize + rdatasize + ctxsize) <=
> -                  LIO_SOFT_COMMAND_BUFFER_SIZE);
> -
> -       m = rte_pktmbuf_alloc(lio_dev->sc_buf_pool);
> -       if (m == NULL) {
> -               lio_dev_err(lio_dev, "Cannot allocate mbuf for sc\n");
> -               return NULL;
> -       }
> -
> -       /* set rte_mbuf data size and there is only 1 segment */
> -       m->pkt_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
> -       m->data_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
> -
> -       /* use rte_mbuf buffer for soft command */
> -       sc = rte_pktmbuf_mtod(m, struct lio_soft_command *);
> -       memset(sc, 0, LIO_SOFT_COMMAND_BUFFER_SIZE);
> -       sc->size = LIO_SOFT_COMMAND_BUFFER_SIZE;
> -       sc->dma_addr = rte_mbuf_data_iova(m);
> -       sc->mbuf = m;
> -
> -       dma_addr = sc->dma_addr;
> -
> -       if (ctxsize) {
> -               sc->ctxptr = (uint8_t *)sc + offset;
> -               sc->ctxsize = ctxsize;
> -       }
> -
> -       /* Start data at 128 byte boundary */
> -       offset = (offset + ctxsize + 127) & 0xffffff80;
> -
> -       if (datasize) {
> -               sc->virtdptr = (uint8_t *)sc + offset;
> -               sc->dmadptr = dma_addr + offset;
> -               sc->datasize = datasize;
> -       }
> -
> -       /* Start rdata at 128 byte boundary */
> -       offset = (offset + datasize + 127) & 0xffffff80;
> -
> -       if (rdatasize) {
> -               RTE_ASSERT(rdatasize >= 16);
> -               sc->virtrptr = (uint8_t *)sc + offset;
> -               sc->dmarptr = dma_addr + offset;
> -               sc->rdatasize = rdatasize;
> -               sc->status_word = (uint64_t *)((uint8_t *)(sc->virtrptr) +
> -                                              rdatasize - 8);
> -       }
> -
> -       return sc;
> -}
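
The two "(... + 127) & 0xffffff80" steps above are the usual power-of-two
align-up idiom: the context, data and return-data regions are carved out of
the single 1536-byte soft-command buffer on 128-byte boundaries. A minimal
standalone sketch of the same arithmetic (lio_align_up is illustrative, not
a driver symbol):

#include <assert.h>
#include <stdint.h>

/* Round off up to the next multiple of align (align must be a power of two).
 * With align == 128 the mask ~(align - 1) is exactly the 0xffffff80 used above.
 */
static inline uint32_t
lio_align_up(uint32_t off, uint32_t align)
{
	return (off + align - 1) & ~(align - 1);
}

int main(void)
{
	uint32_t off = 200;

	off = lio_align_up(off, 128);
	assert(off == 256);		/* next 128-byte boundary */
	assert((off & 127) == 0);
	return 0;
}
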
> -
> -void
> -lio_free_soft_command(struct lio_soft_command *sc)
> -{
> -       rte_pktmbuf_free(sc->mbuf);
> -}
> -
> -void
> -lio_setup_response_list(struct lio_device *lio_dev)
> -{
> -       STAILQ_INIT(&lio_dev->response_list.head);
> -       rte_spinlock_init(&lio_dev->response_list.lock);
> -       rte_atomic64_set(&lio_dev->response_list.pending_req_count, 0);
> -}
> -
> -int
> -lio_process_ordered_list(struct lio_device *lio_dev)
> -{
> -       int resp_to_process = LIO_MAX_ORD_REQS_TO_PROCESS;
> -       struct lio_response_list *ordered_sc_list;
> -       struct lio_soft_command *sc;
> -       int request_complete = 0;
> -       uint64_t status64;
> -       uint32_t status;
> -
> -       ordered_sc_list = &lio_dev->response_list;
> -
> -       do {
> -               rte_spinlock_lock(&ordered_sc_list->lock);
> -
> -               if (STAILQ_EMPTY(&ordered_sc_list->head)) {
> -                       /* ordered_sc_list is empty; there is
> -                        * nothing to process
> -                        */
> -                       rte_spinlock_unlock(&ordered_sc_list->lock);
> -                       return -1;
> -               }
> -
> -               sc = LIO_STQUEUE_FIRST_ENTRY(&ordered_sc_list->head,
> -                                            struct lio_soft_command, node);
> -
> -               status = LIO_REQUEST_PENDING;
> -
> -               /* check if octeon has finished DMA'ing a response
> -                * to where rptr is pointing to
> -                */
> -               status64 = *sc->status_word;
> -
> -               if (status64 != LIO_COMPLETION_WORD_INIT) {
> -                       /* This logic ensures that all 64b have been written.
> -                        * 1. check byte 0 for non-FF
> -                        * 2. if non-FF, then swap result from BE to host order
> -                        * 3. check byte 7 (swapped to 0) for non-FF
> -                        * 4. if non-FF, use the low 32-bit status code
> -                        * 5. if either byte 0 or byte 7 is FF, don't use status
> -                        */
> -                       if ((status64 & 0xff) != 0xff) {
> -                               lio_swap_8B_data(&status64, 1);
> -                               if (((status64 & 0xff) != 0xff)) {
> -                                       /* retrieve 16-bit firmware status */
> -                                       status = (uint32_t)(status64 &
> -                                                           0xffffULL);
> -                                       if (status) {
> -                                               status =
> -                                               LIO_FIRMWARE_STATUS_CODE(
> -                                                                       status);
> -                                       } else {
> -                                               /* i.e. no error */
> -                                               status = LIO_REQUEST_DONE;
> -                                       }
> -                               }
> -                       }
> -               } else if ((sc->timeout && lio_check_timeout(lio_uptime,
> -                                                            sc->timeout))) {
> -                       lio_dev_err(lio_dev,
> -                                   "cmd failed, timeout (%ld, %ld)\n",
> -                                   (long)lio_uptime, (long)sc->timeout);
> -                       status = LIO_REQUEST_TIMEOUT;
> -               }
> -
> -               if (status != LIO_REQUEST_PENDING) {
> -                       /* we have received a response or we have timed out.
> -                        * remove node from linked list
> -                        */
> -                       STAILQ_REMOVE(&ordered_sc_list->head,
> -                                     &sc->node, lio_stailq_node, entries);
> -                       rte_atomic64_dec(
> -                           &lio_dev->response_list.pending_req_count);
> -                       rte_spinlock_unlock(&ordered_sc_list->lock);
> -
> -                       if (sc->callback)
> -                               sc->callback(status, sc->callback_arg);
> -
> -                       request_complete++;
> -               } else {
> -                       /* no response yet */
> -                       request_complete = 0;
> -                       rte_spinlock_unlock(&ordered_sc_list->lock);
> -               }
> -
> -               /* If we hit the Max Ordered requests to process every loop,
> -                * we quit and let this function be invoked the next time
> -                * the poll thread runs to process the remaining requests.
> -                * This function can take up the entire CPU if there is
> -                * no upper limit to the requests processed.
> -                */
> -               if (request_complete >= resp_to_process)
> -                       break;
> -       } while (request_complete);
> -
> -       return 0;
> -}
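
The completion-word handling in the loop above (word starts as all 0xff,
check byte 0, swap from big-endian to host order, check byte 7, then take
the low 16 bits as the firmware status) can be summarised as a small helper.
This is only a sketch under those assumptions; decode_completion_word and
STATUS_PENDING are illustrative names, not driver symbols:

#include <stdint.h>
#include <rte_byteorder.h>

#define COMPLETION_WORD_INIT	0xffffffffffffffffULL
#define STATUS_PENDING		0xffffffffu

/* Return STATUS_PENDING while the word is untouched or only partially
 * written, otherwise the low 16-bit firmware status (0 means success).
 */
static uint32_t
decode_completion_word(uint64_t raw)
{
	uint64_t host;

	if (raw == COMPLETION_WORD_INIT || (raw & 0xff) == 0xff)
		return STATUS_PENDING;

	host = rte_be_to_cpu_64(raw);	/* big-endian to host order */
	if ((host & 0xff) == 0xff)
		return STATUS_PENDING;

	return (uint32_t)(host & 0xffff);
}
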
> -
> -static inline struct lio_stailq_node *
> -list_delete_first_node(struct lio_stailq_head *head)
> -{
> -       struct lio_stailq_node *node;
> -
> -       if (STAILQ_EMPTY(head))
> -               node = NULL;
> -       else
> -               node = STAILQ_FIRST(head);
> -
> -       if (node)
> -               STAILQ_REMOVE(head, node, lio_stailq_node, entries);
> -
> -       return node;
> -}
> -
> -void
> -lio_delete_sglist(struct lio_instr_queue *txq)
> -{
> -       struct lio_device *lio_dev = txq->lio_dev;
> -       int iq_no = txq->q_index;
> -       struct lio_gather *g;
> -
> -       if (lio_dev->glist_head == NULL)
> -               return;
> -
> -       do {
> -               g = (struct lio_gather *)list_delete_first_node(
> -                                               &lio_dev->glist_head[iq_no]);
> -               if (g) {
> -                       if (g->sg)
> -                               rte_free(
> -                                   (void *)((unsigned long)g->sg - g->adjust));
> -                       rte_free(g);
> -               }
> -       } while (g);
> -}
> -
> -/**
> - * \brief Setup gather lists
> - * @param lio_dev per-network private data
> - */
> -int
> -lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
> -                 int fw_mapped_iq, int num_descs, unsigned int socket_id)
> -{
> -       struct lio_gather *g;
> -       int i;
> -
> -       rte_spinlock_init(&lio_dev->glist_lock[iq_no]);
> -
> -       STAILQ_INIT(&lio_dev->glist_head[iq_no]);
> -
> -       for (i = 0; i < num_descs; i++) {
> -               g = rte_zmalloc_socket(NULL, sizeof(*g), RTE_CACHE_LINE_SIZE,
> -                                      socket_id);
> -               if (g == NULL) {
> -                       lio_dev_err(lio_dev,
> -                                   "lio_gather memory allocation failed for qno %d\n",
> -                                   iq_no);
> -                       break;
> -               }
> -
> -               g->sg_size =
> -                   ((ROUNDUP4(LIO_MAX_SG) >> 2) * LIO_SG_ENTRY_SIZE);
> -
> -               g->sg = rte_zmalloc_socket(NULL, g->sg_size + 8,
> -                                          RTE_CACHE_LINE_SIZE, socket_id);
> -               if (g->sg == NULL) {
> -                       lio_dev_err(lio_dev,
> -                                   "sg list memory allocation failed for qno %d\n",
> -                                   iq_no);
> -                       rte_free(g);
> -                       break;
> -               }
> -
> -               /* The gather component should be aligned on 64-bit boundary */
> -               if (((unsigned long)g->sg) & 7) {
> -                       g->adjust = 8 - (((unsigned long)g->sg) & 7);
> -                       g->sg =
> -                           (struct lio_sg_entry *)((unsigned long)g->sg +
> -                                                      g->adjust);
> -               }
> -
> -               STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq_no], &g->list,
> -                                  entries);
> -       }
> -
> -       if (i != num_descs) {
> -               lio_delete_sglist(lio_dev->instr_queue[fw_mapped_iq]);
> -               return -ENOMEM;
> -       }
> -
> -       return 0;
> -}
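
The alignment fix-up above over-allocates the gather component by 8 bytes,
bumps the pointer to the next 8-byte boundary and remembers the shift in
g->adjust so that lio_delete_sglist() can hand the original pointer back to
rte_free(). A self-contained sketch of that bookkeeping (aligned_buf,
alloc_aligned8 and free_aligned8 are illustrative names only):

#include <stdint.h>
#include <stdlib.h>

struct aligned_buf {
	void *ptr;		/* aligned pointer handed to the hardware */
	unsigned long adjust;	/* bytes added to reach the boundary */
};

static int
alloc_aligned8(struct aligned_buf *b, size_t len)
{
	unsigned long addr;

	b->ptr = malloc(len + 8);
	if (b->ptr == NULL)
		return -1;

	addr = (unsigned long)b->ptr;
	b->adjust = (addr & 7) ? 8 - (addr & 7) : 0;
	b->ptr = (void *)(addr + b->adjust);
	return 0;
}

static void
free_aligned8(struct aligned_buf *b)
{
	/* undo the shift so the allocator sees the pointer it returned */
	free((void *)((unsigned long)b->ptr - b->adjust));
}

int main(void)
{
	struct aligned_buf b;

	if (alloc_aligned8(&b, 100) == 0) {
		/* b.ptr is now 8-byte aligned */
		free_aligned8(&b);
	}
	return 0;
}
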
> -
> -void
> -lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no)
> -{
> -       lio_delete_instr_queue(lio_dev, iq_no);
> -       rte_free(lio_dev->instr_queue[iq_no]);
> -       lio_dev->instr_queue[iq_no] = NULL;
> -       lio_dev->num_iqs--;
> -}
> -
> -static inline uint32_t
> -lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no)
> -{
> -       return ((lio_dev->instr_queue[q_no]->nb_desc - 1) -
> -               (uint32_t)rte_atomic64_read(
> -                               &lio_dev->instr_queue[q_no]->instr_pending));
> -}
> -
> -static inline int
> -lio_iq_is_full(struct lio_device *lio_dev, uint32_t q_no)
> -{
> -       return ((uint32_t)rte_atomic64_read(
> -                               &lio_dev->instr_queue[q_no]->instr_pending) >=
> -                               (lio_dev->instr_queue[q_no]->nb_desc - 2));
> -}
> -
> -static int
> -lio_dev_cleanup_iq(struct lio_device *lio_dev, int iq_no)
> -{
> -       struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
> -       uint32_t count = 10000;
> -
> -       while ((lio_iq_get_available(lio_dev, iq_no) < LIO_FLUSH_WM(iq)) &&
> -                       --count)
> -               lio_flush_iq(lio_dev, iq);
> -
> -       return count ? 0 : 1;
> -}
> -
> -static void
> -lio_ctrl_cmd_callback(uint32_t status __rte_unused, void *sc_ptr)
> -{
> -       struct lio_soft_command *sc = sc_ptr;
> -       struct lio_dev_ctrl_cmd *ctrl_cmd;
> -       struct lio_ctrl_pkt *ctrl_pkt;
> -
> -       ctrl_pkt = (struct lio_ctrl_pkt *)sc->ctxptr;
> -       ctrl_cmd = ctrl_pkt->ctrl_cmd;
> -       ctrl_cmd->cond = 1;
> -
> -       lio_free_soft_command(sc);
> -}
> -
> -static inline struct lio_soft_command *
> -lio_alloc_ctrl_pkt_sc(struct lio_device *lio_dev,
> -                     struct lio_ctrl_pkt *ctrl_pkt)
> -{
> -       struct lio_soft_command *sc = NULL;
> -       uint32_t uddsize, datasize;
> -       uint32_t rdatasize;
> -       uint8_t *data;
> -
> -       uddsize = (uint32_t)(ctrl_pkt->ncmd.s.more * 8);
> -
> -       datasize = OCTEON_CMD_SIZE + uddsize;
> -       rdatasize = (ctrl_pkt->wait_time) ? 16 : 0;
> -
> -       sc = lio_alloc_soft_command(lio_dev, datasize,
> -                                   rdatasize, sizeof(struct lio_ctrl_pkt));
> -       if (sc == NULL)
> -               return NULL;
> -
> -       rte_memcpy(sc->ctxptr, ctrl_pkt, sizeof(struct lio_ctrl_pkt));
> -
> -       data = (uint8_t *)sc->virtdptr;
> -
> -       rte_memcpy(data, &ctrl_pkt->ncmd, OCTEON_CMD_SIZE);
> -
> -       lio_swap_8B_data((uint64_t *)data, OCTEON_CMD_SIZE >> 3);
> -
> -       if (uddsize) {
> -               /* Endian-Swap for UDD should have been done by caller. */
> -               rte_memcpy(data + OCTEON_CMD_SIZE, ctrl_pkt->udd, uddsize);
> -       }
> -
> -       sc->iq_no = (uint32_t)ctrl_pkt->iq_no;
> -
> -       lio_prepare_soft_command(lio_dev, sc,
> -                                LIO_OPCODE, LIO_OPCODE_CMD,
> -                                0, 0, 0);
> -
> -       sc->callback = lio_ctrl_cmd_callback;
> -       sc->callback_arg = sc;
> -       sc->wait_time = ctrl_pkt->wait_time;
> -
> -       return sc;
> -}
> -
> -int
> -lio_send_ctrl_pkt(struct lio_device *lio_dev, struct lio_ctrl_pkt *ctrl_pkt)
> -{
> -       struct lio_soft_command *sc = NULL;
> -       int retval;
> -
> -       sc = lio_alloc_ctrl_pkt_sc(lio_dev, ctrl_pkt);
> -       if (sc == NULL) {
> -               lio_dev_err(lio_dev, "soft command allocation failed\n");
> -               return -1;
> -       }
> -
> -       retval = lio_send_soft_command(lio_dev, sc);
> -       if (retval == LIO_IQ_SEND_FAILED) {
> -               lio_free_soft_command(sc);
> -               lio_dev_err(lio_dev, "Port: %d soft command: %d send failed status: %x\n",
> -                           lio_dev->port_id, ctrl_pkt->ncmd.s.cmd, retval);
> -               return -1;
> -       }
> -
> -       return retval;
> -}
> -
> -/** Send data packet to the device
> - *  @param lio_dev - lio device pointer
> - *  @param ndata   - control structure with queueing, and buffer information
> - *
> - *  @returns LIO_IQ_SEND_FAILED if it failed to add to the input queue,
> - *  LIO_IQ_SEND_STOP if the queue should be stopped, and LIO_IQ_SEND_OK if
> - *  it was sent okay.
> - */
> -static inline int
> -lio_send_data_pkt(struct lio_device *lio_dev, struct lio_data_pkt *ndata)
> -{
> -       return lio_send_command(lio_dev, ndata->q_no, &ndata->cmd,
> -                               ndata->buf, ndata->datasize, ndata->reqtype);
> -}
> -
> -uint16_t
> -lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
> -{
> -       struct lio_instr_queue *txq = tx_queue;
> -       union lio_cmd_setup cmdsetup;
> -       struct lio_device *lio_dev;
> -       struct lio_iq_stats *stats;
> -       struct lio_data_pkt ndata;
> -       int i, processed = 0;
> -       struct rte_mbuf *m;
> -       uint32_t tag = 0;
> -       int status = 0;
> -       int iq_no;
> -
> -       lio_dev = txq->lio_dev;
> -       iq_no = txq->txpciq.s.q_no;
> -       stats = &lio_dev->instr_queue[iq_no]->stats;
> -
> -       if (!lio_dev->intf_open || !lio_dev->linfo.link.s.link_up) {
> -               PMD_TX_LOG(lio_dev, ERR, "Transmit failed link_status : %d\n",
> -                          lio_dev->linfo.link.s.link_up);
> -               goto xmit_failed;
> -       }
> -
> -       lio_dev_cleanup_iq(lio_dev, iq_no);
> -
> -       for (i = 0; i < nb_pkts; i++) {
> -               uint32_t pkt_len = 0;
> -
> -               m = pkts[i];
> -
> -               /* Prepare the attributes for the data to be passed to BASE. */
> -               memset(&ndata, 0, sizeof(struct lio_data_pkt));
> -
> -               ndata.buf = m;
> -
> -               ndata.q_no = iq_no;
> -               if (lio_iq_is_full(lio_dev, ndata.q_no)) {
> -                       stats->tx_iq_busy++;
> -                       if (lio_dev_cleanup_iq(lio_dev, iq_no)) {
> -                               PMD_TX_LOG(lio_dev, ERR,
> -                                          "Transmit failed iq:%d full\n",
> -                                          ndata.q_no);
> -                               break;
> -                       }
> -               }
> -
> -               cmdsetup.cmd_setup64 = 0;
> -               cmdsetup.s.iq_no = iq_no;
> -
> -               /* check checksum offload flags to form cmd */
> -               if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
> -                       cmdsetup.s.ip_csum = 1;
> -
> -               if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
> -                       cmdsetup.s.tnl_csum = 1;
> -               else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
> -                               (m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
> -                       cmdsetup.s.transport_csum = 1;
> -
> -               if (m->nb_segs == 1) {
> -                       pkt_len = rte_pktmbuf_data_len(m);
> -                       cmdsetup.s.u.datasize = pkt_len;
> -                       lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
> -                                           &cmdsetup, tag);
> -                       ndata.cmd.cmd3.dptr = rte_mbuf_data_iova(m);
> -                       ndata.reqtype = LIO_REQTYPE_NORESP_NET;
> -               } else {
> -                       struct lio_buf_free_info *finfo;
> -                       struct lio_gather *g;
> -                       rte_iova_t phyaddr;
> -                       int i, frags;
> -
> -                       finfo = (struct lio_buf_free_info *)rte_malloc(NULL,
> -                                                       sizeof(*finfo), 0);
> -                       if (finfo == NULL) {
> -                               PMD_TX_LOG(lio_dev, ERR,
> -                                          "free buffer alloc failed\n");
> -                               goto xmit_failed;
> -                       }
> -
> -                       rte_spinlock_lock(&lio_dev->glist_lock[iq_no]);
> -                       g = (struct lio_gather *)list_delete_first_node(
> -                                               &lio_dev->glist_head[iq_no]);
> -                       rte_spinlock_unlock(&lio_dev->glist_lock[iq_no]);
> -                       if (g == NULL) {
> -                               PMD_TX_LOG(lio_dev, ERR,
> -                                          "Transmit scatter gather: glist null!\n");
> -                               goto xmit_failed;
> -                       }
> -
> -                       cmdsetup.s.gather = 1;
> -                       cmdsetup.s.u.gatherptrs = m->nb_segs;
> -                       lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
> -                                           &cmdsetup, tag);
> -
> -                       memset(g->sg, 0, g->sg_size);
> -                       g->sg[0].ptr[0] = rte_mbuf_data_iova(m);
> -                       lio_add_sg_size(&g->sg[0], m->data_len, 0);
> -                       pkt_len = m->data_len;
> -                       finfo->mbuf = m;
> -
> -                       /* First seg taken care above */
> -                       frags = m->nb_segs - 1;
> -                       i = 1;
> -                       m = m->next;
> -                       while (frags--) {
> -                               g->sg[(i >> 2)].ptr[(i & 3)] =
> -                                               rte_mbuf_data_iova(m);
> -                               lio_add_sg_size(&g->sg[(i >> 2)],
> -                                               m->data_len, (i & 3));
> -                               pkt_len += m->data_len;
> -                               i++;
> -                               m = m->next;
> -                       }
> -
> -                       phyaddr = rte_mem_virt2iova(g->sg);
> -                       if (phyaddr == RTE_BAD_IOVA) {
> -                               PMD_TX_LOG(lio_dev, ERR, "bad phys addr\n");
> -                               goto xmit_failed;
> -                       }
> -
> -                       ndata.cmd.cmd3.dptr = phyaddr;
> -                       ndata.reqtype = LIO_REQTYPE_NORESP_NET_SG;
> -
> -                       finfo->g = g;
> -                       finfo->lio_dev = lio_dev;
> -                       finfo->iq_no = (uint64_t)iq_no;
> -                       ndata.buf = finfo;
> -               }
> -
> -               ndata.datasize = pkt_len;
> -
> -               status = lio_send_data_pkt(lio_dev, &ndata);
> -
> -               if (unlikely(status == LIO_IQ_SEND_FAILED)) {
> -                       PMD_TX_LOG(lio_dev, ERR, "send failed\n");
> -                       break;
> -               }
> -
> -               if (unlikely(status == LIO_IQ_SEND_STOP)) {
> -                       PMD_TX_LOG(lio_dev, DEBUG, "iq full\n");
> -                       /* create space as iq is full */
> -                       lio_dev_cleanup_iq(lio_dev, iq_no);
> -               }
> -
> -               stats->tx_done++;
> -               stats->tx_tot_bytes += pkt_len;
> -               processed++;
> -       }
> -
> -xmit_failed:
> -       stats->tx_dropped += (nb_pkts - processed);
> -
> -       return processed;
> -}
> -
> -void
> -lio_dev_clear_queues(struct rte_eth_dev *eth_dev)
> -{
> -       struct lio_instr_queue *txq;
> -       struct lio_droq *rxq;
> -       uint16_t i;
> -
> -       for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> -               txq = eth_dev->data->tx_queues[i];
> -               if (txq != NULL) {
> -                       lio_dev_tx_queue_release(eth_dev, i);
> -                       eth_dev->data->tx_queues[i] = NULL;
> -               }
> -       }
> -
> -       for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> -               rxq = eth_dev->data->rx_queues[i];
> -               if (rxq != NULL) {
> -                       lio_dev_rx_queue_release(eth_dev, i);
> -                       eth_dev->data->rx_queues[i] = NULL;
> -               }
> -       }
> -}
> diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
> deleted file mode 100644
> index d2a45104f0..0000000000
> --- a/drivers/net/liquidio/lio_rxtx.h
> +++ /dev/null
> @@ -1,740 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_RXTX_H_
> -#define _LIO_RXTX_H_
> -
> -#include <stdio.h>
> -#include <stdint.h>
> -
> -#include <rte_spinlock.h>
> -#include <rte_memory.h>
> -
> -#include "lio_struct.h"
> -
> -#ifndef ROUNDUP4
> -#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
> -#endif
> -
> -#define LIO_STQUEUE_FIRST_ENTRY(ptr, type, elem)       \
> -       (type *)((char *)((ptr)->stqh_first) - offsetof(type, elem))
> -
> -#define lio_check_timeout(cur_time, chk_time) ((cur_time) > (chk_time))
> -
> -#define lio_uptime             \
> -       (size_t)(rte_get_timer_cycles() / rte_get_timer_hz())
> -
> -/** Descriptor format.
> - *  The descriptor ring is made of descriptors which have 2 64-bit values:
> - *  -# Physical (bus) address of the data buffer.
> - *  -# Physical (bus) address of a lio_droq_info structure.
> - *  The device DMA's incoming packets and its information at the address
> - *  given by these descriptor fields.
> - */
> -struct lio_droq_desc {
> -       /** The buffer pointer */
> -       uint64_t buffer_ptr;
> -
> -       /** The Info pointer */
> -       uint64_t info_ptr;
> -};
> -
> -#define LIO_DROQ_DESC_SIZE     (sizeof(struct lio_droq_desc))
> -
> -/** Information about packet DMA'ed by Octeon.
> - *  The format of the information available at Info Pointer after Octeon
> - *  has posted a packet. Not all descriptors have valid information. Only
> - *  the Info field of the first descriptor for a packet has information
> - *  about the packet.
> - */
> -struct lio_droq_info {
> -       /** The Output Receive Header. */
> -       union octeon_rh rh;
> -
> -       /** The Length of the packet. */
> -       uint64_t length;
> -};
> -
> -#define LIO_DROQ_INFO_SIZE     (sizeof(struct lio_droq_info))
> -
> -/** Pointer to data buffer.
> - *  Driver keeps a pointer to the data buffer that it made available to
> - *  the Octeon device. Since the descriptor ring keeps physical (bus)
> - *  addresses, this field is required for the driver to keep track of
> - *  the virtual address pointers.
> - */
> -struct lio_recv_buffer {
> -       /** Packet buffer, including meta data. */
> -       void *buffer;
> -
> -       /** Data in the packet buffer. */
> -       uint8_t *data;
> -
> -};
> -
> -#define LIO_DROQ_RECVBUF_SIZE  (sizeof(struct lio_recv_buffer))
> -
> -#define LIO_DROQ_SIZE          (sizeof(struct lio_droq))
> -
> -#define LIO_IQ_SEND_OK         0
> -#define LIO_IQ_SEND_STOP       1
> -#define LIO_IQ_SEND_FAILED     -1
> -
> -/* conditions */
> -#define LIO_REQTYPE_NONE               0
> -#define LIO_REQTYPE_NORESP_NET         1
> -#define LIO_REQTYPE_NORESP_NET_SG      2
> -#define LIO_REQTYPE_SOFT_COMMAND       3
> -
> -struct lio_request_list {
> -       uint32_t reqtype;
> -       void *buf;
> -};
> -
> -/*----------------------  INSTRUCTION FORMAT ----------------------------*/
> -
> -struct lio_instr3_64B {
> -       /** Pointer where the input data is available. */
> -       uint64_t dptr;
> -
> -       /** Instruction Header. */
> -       uint64_t ih3;
> -
> -       /** Instruction Header. */
> -       uint64_t pki_ih3;
> -
> -       /** Input Request Header. */
> -       uint64_t irh;
> -
> -       /** opcode/subcode specific parameters */
> -       uint64_t ossp[2];
> -
> -       /** Return Data Parameters */
> -       uint64_t rdp;
> -
> -       /** Pointer where the response for a RAW mode packet will be written
> -        *  by Octeon.
> -        */
> -       uint64_t rptr;
> -
> -};
> -
> -union lio_instr_64B {
> -       struct lio_instr3_64B cmd3;
> -};
> -
> -/** The size of each buffer in soft command buffer pool */
> -#define LIO_SOFT_COMMAND_BUFFER_SIZE   1536
> -
> -/** Maximum number of buffers to allocate into soft command buffer pool */
> -#define LIO_MAX_SOFT_COMMAND_BUFFERS   255
> -
> -struct lio_soft_command {
> -       /** Soft command buffer info. */
> -       struct lio_stailq_node node;
> -       uint64_t dma_addr;
> -       uint32_t size;
> -
> -       /** Command and return status */
> -       union lio_instr_64B cmd;
> -
> -#define LIO_COMPLETION_WORD_INIT       0xffffffffffffffffULL
> -       uint64_t *status_word;
> -
> -       /** Data buffer info */
> -       void *virtdptr;
> -       uint64_t dmadptr;
> -       uint32_t datasize;
> -
> -       /** Return buffer info */
> -       void *virtrptr;
> -       uint64_t dmarptr;
> -       uint32_t rdatasize;
> -
> -       /** Context buffer info */
> -       void *ctxptr;
> -       uint32_t ctxsize;
> -
> -       /** Time out and callback */
> -       size_t wait_time;
> -       size_t timeout;
> -       uint32_t iq_no;
> -       void (*callback)(uint32_t, void *);
> -       void *callback_arg;
> -       struct rte_mbuf *mbuf;
> -};
> -
> -struct lio_iq_post_status {
> -       int status;
> -       int index;
> -};
> -
> -/*   wqe
> - *  ---------------  0
> - * |  wqe  word0-3 |
> - *  ---------------  32
> - * |    PCI IH     |
> - *  ---------------  40
> - * |     RPTR      |
> - *  ---------------  48
> - * |    PCI IRH    |
> - *  ---------------  56
> - * |    OCTEON_CMD |
> - *  ---------------  64
> - * | Addtl 8B Data |
> - * |               |
> - *  ---------------
> - */
> -
> -union octeon_cmd {
> -       uint64_t cmd64;
> -
> -       struct  {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               uint64_t cmd : 5;
> -
> -               uint64_t more : 6; /* How many udd words follow the command */
> -
> -               uint64_t reserved : 29;
> -
> -               uint64_t param1 : 16;
> -
> -               uint64_t param2 : 8;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -
> -               uint64_t param2 : 8;
> -
> -               uint64_t param1 : 16;
> -
> -               uint64_t reserved : 29;
> -
> -               uint64_t more : 6;
> -
> -               uint64_t cmd : 5;
> -
> -#endif
> -       } s;
> -};
> -
> -#define OCTEON_CMD_SIZE (sizeof(union octeon_cmd))
> -
> -/* Maximum number of 8-byte words that can be
> - * sent in a NIC control message.
> - */
> -#define LIO_MAX_NCTRL_UDD      32
> -
> -/* Structure of control information passed by driver to the BASE
> - * layer when sending control commands to Octeon device software.
> - */
> -struct lio_ctrl_pkt {
> -       /** Command to be passed to the Octeon device software. */
> -       union octeon_cmd ncmd;
> -
> -       /** Send buffer */
> -       void *data;
> -       uint64_t dmadata;
> -
> -       /** Response buffer */
> -       void *rdata;
> -       uint64_t dmardata;
> -
> -       /** Additional data that may be needed by some commands. */
> -       uint64_t udd[LIO_MAX_NCTRL_UDD];
> -
> -       /** Input queue to use to send this command. */
> -       uint64_t iq_no;
> -
> -       /** Time to wait for Octeon software to respond to this control command.
> -        *  If wait_time is 0, BASE assumes no response is expected.
> -        */
> -       size_t wait_time;
> -
> -       struct lio_dev_ctrl_cmd *ctrl_cmd;
> -};
> -
> -/** Structure of data information passed by driver to the BASE
> - *  layer when forwarding data to Octeon device software.
> - */
> -struct lio_data_pkt {
> -       /** Pointer to information maintained by NIC module for this packet. The
> -        *  BASE layer passes this as-is to the driver.
> -        */
> -       void *buf;
> -
> -       /** Type of buffer passed in "buf" above. */
> -       uint32_t reqtype;
> -
> -       /** Total data bytes to be transferred in this command. */
> -       uint32_t datasize;
> -
> -       /** Command to be passed to the Octeon device software. */
> -       union lio_instr_64B cmd;
> -
> -       /** Input queue to use to send this command. */
> -       uint32_t q_no;
> -};
> -
> -/** Structure passed by driver to BASE layer to prepare a command to send
> - *  network data to Octeon.
> - */
> -union lio_cmd_setup {
> -       struct {
> -               uint32_t iq_no : 8;
> -               uint32_t gather : 1;
> -               uint32_t timestamp : 1;
> -               uint32_t ip_csum : 1;
> -               uint32_t transport_csum : 1;
> -               uint32_t tnl_csum : 1;
> -               uint32_t rsvd : 19;
> -
> -               union {
> -                       uint32_t datasize;
> -                       uint32_t gatherptrs;
> -               } u;
> -       } s;
> -
> -       uint64_t cmd_setup64;
> -};
> -
> -/* Instruction Header */
> -struct octeon_instr_ih3 {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> -       /** Reserved3 */
> -       uint64_t reserved3 : 1;
> -
> -       /** Gather indicator 1=gather*/
> -       uint64_t gather : 1;
> -
> -       /** Data length OR no. of entries in gather list */
> -       uint64_t dlengsz : 14;
> -
> -       /** Front Data size */
> -       uint64_t fsz : 6;
> -
> -       /** Reserved2 */
> -       uint64_t reserved2 : 4;
> -
> -       /** PKI port kind - PKIND */
> -       uint64_t pkind : 6;
> -
> -       /** Reserved1 */
> -       uint64_t reserved1 : 32;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -       /** Reserved1 */
> -       uint64_t reserved1 : 32;
> -
> -       /** PKI port kind - PKIND */
> -       uint64_t pkind : 6;
> -
> -       /** Reserved2 */
> -       uint64_t reserved2 : 4;
> -
> -       /** Front Data size */
> -       uint64_t fsz : 6;
> -
> -       /** Data length OR no. of entries in gather list */
> -       uint64_t dlengsz : 14;
> -
> -       /** Gather indicator 1=gather*/
> -       uint64_t gather : 1;
> -
> -       /** Reserved3 */
> -       uint64_t reserved3 : 1;
> -
> -#endif
> -};
> -
> -/* PKI Instruction Header(PKI IH) */
> -struct octeon_instr_pki_ih3 {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> -       /** Wider bit */
> -       uint64_t w : 1;
> -
> -       /** Raw mode indicator 1 = RAW */
> -       uint64_t raw : 1;
> -
> -       /** Use Tag */
> -       uint64_t utag : 1;
> -
> -       /** Use QPG */
> -       uint64_t uqpg : 1;
> -
> -       /** Reserved2 */
> -       uint64_t reserved2 : 1;
> -
> -       /** Parse Mode */
> -       uint64_t pm : 3;
> -
> -       /** Skip Length */
> -       uint64_t sl : 8;
> -
> -       /** Use Tag Type */
> -       uint64_t utt : 1;
> -
> -       /** Tag type */
> -       uint64_t tagtype : 2;
> -
> -       /** Reserved1 */
> -       uint64_t reserved1 : 2;
> -
> -       /** QPG Value */
> -       uint64_t qpg : 11;
> -
> -       /** Tag Value */
> -       uint64_t tag : 32;
> -
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -
> -       /** Tag Value */
> -       uint64_t tag : 32;
> -
> -       /** QPG Value */
> -       uint64_t qpg : 11;
> -
> -       /** Reserved1 */
> -       uint64_t reserved1 : 2;
> -
> -       /** Tag type */
> -       uint64_t tagtype : 2;
> -
> -       /** Use Tag Type */
> -       uint64_t utt : 1;
> -
> -       /** Skip Length */
> -       uint64_t sl : 8;
> -
> -       /** Parse Mode */
> -       uint64_t pm : 3;
> -
> -       /** Reserved2 */
> -       uint64_t reserved2 : 1;
> -
> -       /** Use QPG */
> -       uint64_t uqpg : 1;
> -
> -       /** Use Tag */
> -       uint64_t utag : 1;
> -
> -       /** Raw mode indicator 1 = RAW */
> -       uint64_t raw : 1;
> -
> -       /** Wider bit */
> -       uint64_t w : 1;
> -#endif
> -};
> -
> -/** Input Request Header */
> -struct octeon_instr_irh {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -       uint64_t opcode : 4;
> -       uint64_t rflag : 1;
> -       uint64_t subcode : 7;
> -       uint64_t vlan : 12;
> -       uint64_t priority : 3;
> -       uint64_t reserved : 5;
> -       uint64_t ossp : 32; /* opcode/subcode specific parameters */
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -       uint64_t ossp : 32; /* opcode/subcode specific parameters */
> -       uint64_t reserved : 5;
> -       uint64_t priority : 3;
> -       uint64_t vlan : 12;
> -       uint64_t subcode : 7;
> -       uint64_t rflag : 1;
> -       uint64_t opcode : 4;
> -#endif
> -};
> -
> -/* pkiih3 + irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
> -#define OCTEON_SOFT_CMD_RESP_IH3       (40 + 8)
> -/* pki_h3 + irh + ossp[0] + ossp[1] = 32 bytes */
> -#define OCTEON_PCI_CMD_O3              (24 + 8)
> -
> -/** Return Data Parameters */
> -struct octeon_instr_rdp {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -       uint64_t reserved : 49;
> -       uint64_t pcie_port : 3;
> -       uint64_t rlen : 12;
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -       uint64_t rlen : 12;
> -       uint64_t pcie_port : 3;
> -       uint64_t reserved : 49;
> -#endif
> -};
> -
> -union octeon_packet_params {
> -       uint32_t pkt_params32;
> -       struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               uint32_t reserved : 24;
> -               uint32_t ip_csum : 1; /* Perform IP header checksum(s) */
> -               /* Perform Outer transport header checksum */
> -               uint32_t transport_csum : 1;
> -               /* Find tunnel, and perform transport csum. */
> -               uint32_t tnl_csum : 1;
> -               uint32_t tsflag : 1;   /* Timestamp this packet */
> -               uint32_t ipsec_ops : 4; /* IPsec operation */
> -#else
> -               uint32_t ipsec_ops : 4;
> -               uint32_t tsflag : 1;
> -               uint32_t tnl_csum : 1;
> -               uint32_t transport_csum : 1;
> -               uint32_t ip_csum : 1;
> -               uint32_t reserved : 7;
> -#endif
> -       } s;
> -};
> -
> -/** Utility function to prepare a 64B NIC instruction based on a setup command
> - * @param cmd - pointer to instruction to be filled in.
> - * @param setup - pointer to the setup structure
> - * @param q_no - which queue for back pressure
> - *
> - * Assumes the cmd instruction is pre-allocated, but no fields are filled in.
> - */
> -static inline void
> -lio_prepare_pci_cmd(struct lio_device *lio_dev,
> -                   union lio_instr_64B *cmd,
> -                   union lio_cmd_setup *setup,
> -                   uint32_t tag)
> -{
> -       union octeon_packet_params packet_params;
> -       struct octeon_instr_pki_ih3 *pki_ih3;
> -       struct octeon_instr_irh *irh;
> -       struct octeon_instr_ih3 *ih3;
> -       int port;
> -
> -       memset(cmd, 0, sizeof(union lio_instr_64B));
> -
> -       ih3 = (struct octeon_instr_ih3 *)&cmd->cmd3.ih3;
> -       pki_ih3 = (struct octeon_instr_pki_ih3 *)&cmd->cmd3.pki_ih3;
> -
> -       /* assume that rflag is cleared so therefore front data will only have
> -        * irh and ossp[1] and ossp[2] for a total of 24 bytes
> -        */
> -       ih3->pkind = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.pkind;
> -       /* PKI IH */
> -       ih3->fsz = OCTEON_PCI_CMD_O3;
> -
> -       if (!setup->s.gather) {
> -               ih3->dlengsz = setup->s.u.datasize;
> -       } else {
> -               ih3->gather = 1;
> -               ih3->dlengsz = setup->s.u.gatherptrs;
> -       }
> -
> -       pki_ih3->w = 1;
> -       pki_ih3->raw = 0;
> -       pki_ih3->utag = 0;
> -       pki_ih3->utt = 1;
> -       pki_ih3->uqpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.use_qpg;
> -
> -       port = (int)lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.port;
> -
> -       if (tag)
> -               pki_ih3->tag = tag;
> -       else
> -               pki_ih3->tag = LIO_DATA(port);
> -
> -       pki_ih3->tagtype = OCTEON_ORDERED_TAG;
> -       pki_ih3->qpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.qpg;
> -       pki_ih3->pm = 0x0; /* parse from L2 */
> -       pki_ih3->sl = 32;  /* sl will be sizeof(pki_ih3) + irh + ossp0 + ossp1*/
> -
> -       irh = (struct octeon_instr_irh *)&cmd->cmd3.irh;
> -
> -       irh->opcode = LIO_OPCODE;
> -       irh->subcode = LIO_OPCODE_NW_DATA;
> -
> -       packet_params.pkt_params32 = 0;
> -       packet_params.s.ip_csum = setup->s.ip_csum;
> -       packet_params.s.transport_csum = setup->s.transport_csum;
> -       packet_params.s.tnl_csum = setup->s.tnl_csum;
> -       packet_params.s.tsflag = setup->s.timestamp;
> -
> -       irh->ossp = packet_params.pkt_params32;
> -}
> -
> -int lio_setup_sc_buffer_pool(struct lio_device *lio_dev);
> -void lio_free_sc_buffer_pool(struct lio_device *lio_dev);
> -
> -struct lio_soft_command *
> -lio_alloc_soft_command(struct lio_device *lio_dev,
> -                      uint32_t datasize, uint32_t rdatasize,
> -                      uint32_t ctxsize);
> -void lio_prepare_soft_command(struct lio_device *lio_dev,
> -                             struct lio_soft_command *sc,
> -                             uint8_t opcode, uint8_t subcode,
> -                             uint32_t irh_ossp, uint64_t ossp0,
> -                             uint64_t ossp1);
> -int lio_send_soft_command(struct lio_device *lio_dev,
> -                         struct lio_soft_command *sc);
> -void lio_free_soft_command(struct lio_soft_command *sc);
> -
> -/** Send control packet to the device
> - *  @param lio_dev - lio device pointer
> - *  @param ctrl_pkt - control structure with command, timeout, and callback info
> - *
> - *  @returns LIO_IQ_SEND_FAILED if it failed to add to the input queue,
> - *  LIO_IQ_SEND_STOP if the queue should be stopped, and LIO_IQ_SEND_OK if
> - *  it was sent okay.
> - */
> -int lio_send_ctrl_pkt(struct lio_device *lio_dev,
> -                     struct lio_ctrl_pkt *ctrl_pkt);
> -
> -/** Maximum ordered requests to process in every invocation of
> - *  lio_process_ordered_list(). The function will continue to process requests
> - *  as long as it can find one that has finished processing. If it keeps
> - *  finding requests that have completed, the function can run forever. The
> - *  value defined here sets an upper limit on the number of requests it can
> - *  process before it returns control to the poll thread.
> - */
> -#define LIO_MAX_ORD_REQS_TO_PROCESS    4096
> -
> -/** Error codes used in Octeon Host-Core communication.
> - *
> - *   31                16 15           0
> - *   ----------------------------
> - * |           |               |
> - *   ----------------------------
> - *   Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
> - *   are reserved to identify the group to which the error code belongs. The
> - *   lower 16-bits, called Minor Error Number, carry the actual code.
> - *
> - *   So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
> - */
> -/** Status for a request.
> - *  If the request is successfully queued, the driver will return
> - *  a LIO_REQUEST_PENDING status. LIO_REQUEST_TIMEOUT is only returned by
> - *  the driver if the response for the request failed to arrive before the
> - *  time-out period or if the request processing got interrupted due to
> - *  a signal.
> - */
> -enum {
> -       /** A value of 0x00000000 indicates no error i.e. success */
> -       LIO_REQUEST_DONE        = 0x00000000,
> -       /** (Major number: 0x0000; Minor Number: 0x0001) */
> -       LIO_REQUEST_PENDING     = 0x00000001,
> -       LIO_REQUEST_TIMEOUT     = 0x00000003,
> -
> -};
> -
> -/*------ Error codes used by firmware (bits 15..0 set by firmware */
> -#define LIO_FIRMWARE_MAJOR_ERROR_CODE   0x0001
> -#define LIO_FIRMWARE_STATUS_CODE(status) \
> -       ((LIO_FIRMWARE_MAJOR_ERROR_CODE << 16) | (status))
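
A quick worked example of the composition described above, with made-up
values: a firmware minor code of 0x0004 under major number 0x0001 packs to
0x00010004, and the two halves can be split back out with a shift and a mask.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t code = (0x0001u << 16) | 0x0004u;	/* major << 16 | minor */

	printf("major %#x, minor %#x -> %#010x\n",
	       code >> 16, code & 0xffffu, code);	/* prints 0x00010004 */
	return 0;
}
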
> -
> -/** Initialize the response list.
> - *  @param lio_dev - the lio device structure.
> - */
> -void lio_setup_response_list(struct lio_device *lio_dev);
> -
> -/** Check the status of the first entry in the ordered list. If the
> - *  instruction at that entry has finished processing or has timed out,
> - *  the entry is cleaned.
> - *  @param lio_dev - the lio device structure.
> - *  @return -1 if the ordered list is empty, 0 otherwise.
> - */
> -int lio_process_ordered_list(struct lio_device *lio_dev);
> -
> -#define LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, field, count)    \
> -       (((lio_dev)->instr_queue[iq_no]->stats.field) += count)
> -
> -static inline void
> -lio_swap_8B_data(uint64_t *data, uint32_t blocks)
> -{
> -       while (blocks) {
> -               *data = rte_cpu_to_be_64(*data);
> -               blocks--;
> -               data++;
> -       }
> -}
> -
> -static inline uint64_t
> -lio_map_ring(void *buf)
> -{
> -       rte_iova_t dma_addr;
> -
> -       dma_addr = rte_mbuf_data_iova_default(((struct rte_mbuf *)buf));
> -
> -       return (uint64_t)dma_addr;
> -}
> -
> -static inline uint64_t
> -lio_map_ring_info(struct lio_droq *droq, uint32_t i)
> -{
> -       rte_iova_t dma_addr;
> -
> -       dma_addr = droq->info_list_dma + (i * LIO_DROQ_INFO_SIZE);
> -
> -       return (uint64_t)dma_addr;
> -}
> -
> -static inline int
> -lio_opcode_slow_path(union octeon_rh *rh)
> -{
> -       uint16_t subcode1, subcode2;
> -
> -       subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
> -       subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
> -
> -       return subcode2 != subcode1;
> -}
> -
> -static inline void
> -lio_add_sg_size(struct lio_sg_entry *sg_entry,
> -               uint16_t size, uint32_t pos)
> -{
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -       sg_entry->u.size[pos] = size;
> -#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -       sg_entry->u.size[3 - pos] = size;
> -#endif
> -}
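
The (3 - pos) flip above keeps each fragment length in a fixed bit lane of
the host-order size64 value, whatever the host byte order: lengths for
pos 0, 1, 2, 3 land in bits 63:48, 47:32, 31:16 and 15:0 respectively. A
standalone toy demonstrating that (sg_sizes stands in for the size[4]/size64
union of struct lio_sg_entry):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_byteorder.h>

union sg_sizes {
	uint16_t size[4];
	uint64_t size64;
};

int main(void)
{
	union sg_sizes s = { .size64 = 0 };
	uint16_t len0 = 0x00aa, len1 = 0x00bb;

#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
	s.size[0] = len0;
	s.size[1] = len1;
#else
	s.size[3 - 0] = len0;
	s.size[3 - 1] = len1;
#endif
	/* Prints size64 = 0x00aa00bb00000000 on both byte orders. */
	printf("size64 = 0x%016" PRIx64 "\n", s.size64);
	return 0;
}
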
> -
> -/* Macro to increment index.
> - * Index is incremented by count; if the sum exceeds
> - * max, index is wrapped-around to the start.
> - */
> -static inline uint32_t
> -lio_incr_index(uint32_t index, uint32_t count, uint32_t max)
> -{
> -       if ((index + count) >= max)
> -               index = index + count - max;
> -       else
> -               index += count;
> -
> -       return index;
> -}
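
Small usage example for the wrap-around helper above; incr_index mirrors
lio_incr_index() so the snippet stands alone, and the 4-entry ring size is
made up:

#include <assert.h>
#include <stdint.h>

static inline uint32_t
incr_index(uint32_t index, uint32_t count, uint32_t max)
{
	/* same arithmetic as lio_incr_index() */
	return (index + count) >= max ? (index + count) - max : index + count;
}

int main(void)
{
	assert(incr_index(1, 2, 4) == 3);	/* no wrap */
	assert(incr_index(3, 2, 4) == 1);	/* wraps back past the end */
	return 0;
}
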
> -
> -int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
> -                  int desc_size, struct rte_mempool *mpool,
> -                  unsigned int socket_id);
> -uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> -                          uint16_t budget);
> -void lio_delete_droq_queue(struct lio_device *lio_dev, int oq_no);
> -
> -void lio_delete_sglist(struct lio_instr_queue *txq);
> -int lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
> -                     int fw_mapped_iq, int num_descs, unsigned int socket_id);
> -uint16_t lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts,
> -                          uint16_t nb_pkts);
> -int lio_wait_for_instr_fetch(struct lio_device *lio_dev);
> -int lio_setup_iq(struct lio_device *lio_dev, int q_index,
> -                union octeon_txpciq iq_no, uint32_t num_descs, void *app_ctx,
> -                unsigned int socket_id);
> -int lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq);
> -void lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no);
> -/** Setup instruction queue zero for the device
> - *  @param lio_dev which lio device to setup
> - *
> - *  @return 0 if success. -1 if fails
> - */
> -int lio_setup_instr_queue0(struct lio_device *lio_dev);
> -void lio_free_instr_queue0(struct lio_device *lio_dev);
> -void lio_dev_clear_queues(struct rte_eth_dev *eth_dev);
> -#endif /* _LIO_RXTX_H_ */
> diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h
> deleted file mode 100644
> index 10270c560e..0000000000
> --- a/drivers/net/liquidio/lio_struct.h
> +++ /dev/null
> @@ -1,661 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2017 Cavium, Inc
> - */
> -
> -#ifndef _LIO_STRUCT_H_
> -#define _LIO_STRUCT_H_
> -
> -#include <stdio.h>
> -#include <stdint.h>
> -#include <sys/queue.h>
> -
> -#include <rte_spinlock.h>
> -#include <rte_atomic.h>
> -
> -#include "lio_hw_defs.h"
> -
> -struct lio_stailq_node {
> -       STAILQ_ENTRY(lio_stailq_node) entries;
> -};
> -
> -STAILQ_HEAD(lio_stailq_head, lio_stailq_node);
> -
> -struct lio_version {
> -       uint16_t major;
> -       uint16_t minor;
> -       uint16_t micro;
> -       uint16_t reserved;
> -};
> -
> -/** Input Queue statistics. Each input queue has a set of these counters. */
> -struct lio_iq_stats {
> -       uint64_t instr_posted; /**< Instructions posted to this queue. */
> -       uint64_t instr_processed; /**< Instructions processed in this queue. */
> -       uint64_t instr_dropped; /**< Instructions that could not be processed */
> -       uint64_t bytes_sent; /**< Bytes sent through this queue. */
> -       uint64_t tx_done; /**< Num of packets sent to network. */
> -       uint64_t tx_iq_busy; /**< Num of times this iq was found to be full. */
> -       uint64_t tx_dropped; /**< Num of pkts dropped due to xmitpath errors. */
> -       uint64_t tx_tot_bytes; /**< Total count of bytes sent to network. */
> -};
> -
> -/** Output Queue statistics. Each output queue has a set of these counters. */
> -struct lio_droq_stats {
> -       /** Number of packets received in this queue. */
> -       uint64_t pkts_received;
> -
> -       /** Bytes received by this queue. */
> -       uint64_t bytes_received;
> -
> -       /** Packets dropped due to no memory available. */
> -       uint64_t dropped_nomem;
> -
> -       /** Packets dropped due to large number of pkts to process. */
> -       uint64_t dropped_toomany;
> -
> -       /** Number of packets  sent to stack from this queue. */
> -       uint64_t rx_pkts_received;
> -
> -       /** Number of Bytes sent to stack from this queue. */
> -       uint64_t rx_bytes_received;
> -
> -       /** Num of Packets dropped due to receive path failures. */
> -       uint64_t rx_dropped;
> -
> -       /** Num of vxlan packets received; */
> -       uint64_t rx_vxlan;
> -
> -       /** Num of failures of rte_pktmbuf_alloc() */
> -       uint64_t rx_alloc_failure;
> -
> -};
> -
> -/** The Descriptor Ring Output Queue structure.
> - *  This structure has all the information required to implement a
> - *  DROQ.
> - */
> -struct lio_droq {
> -       /** A spinlock to protect access to this ring. */
> -       rte_spinlock_t lock;
> -
> -       uint32_t q_no;
> -
> -       uint32_t pkt_count;
> -
> -       struct lio_device *lio_dev;
> -
> -       /** The 8B aligned descriptor ring starts at this address. */
> -       struct lio_droq_desc *desc_ring;
> -
> -       /** Index in the ring where the driver should read the next packet */
> -       uint32_t read_idx;
> -
> -       /** Index in the ring where Octeon will write the next packet */
> -       uint32_t write_idx;
> -
> -       /** Index in the ring where the driver will refill the descriptor's
> -        * buffer
> -        */
> -       uint32_t refill_idx;
> -
> -       /** Packets pending to be processed */
> -       rte_atomic64_t pkts_pending;
> -
> -       /** Number of  descriptors in this ring. */
> -       uint32_t nb_desc;
> -
> -       /** The number of descriptors pending refill. */
> -       uint32_t refill_count;
> -
> -       uint32_t refill_threshold;
> -
> -       /** The 8B aligned info ptrs begin from this address. */
> -       struct lio_droq_info *info_list;
> -
> -       /** The receive buffer list. This list has the virtual addresses of the
> -        *  buffers.
> -        */
> -       struct lio_recv_buffer *recv_buf_list;
> -
> -       /** The size of each buffer pointed by the buffer pointer. */
> -       uint32_t buffer_size;
> -
> -       /** Pointer to the mapped packet credit register.
> -        *  Host writes number of info/buffer ptrs available to this register
> -        */
> -       void *pkts_credit_reg;
> -
> -       /** Pointer to the mapped packet sent register.
> -        *  Octeon writes the number of packets DMA'ed to host memory
> -        *  in this register.
> -        */
> -       void *pkts_sent_reg;
> -
> -       /** Statistics for this DROQ. */
> -       struct lio_droq_stats stats;
> -
> -       /** DMA mapped address of the DROQ descriptor ring. */
> -       size_t desc_ring_dma;
> -
> -       /** Info ptr list are allocated at this virtual address. */
> -       size_t info_base_addr;
> -
> -       /** DMA mapped address of the info list */
> -       size_t info_list_dma;
> -
> -       /** Allocated size of info list. */
> -       uint32_t info_alloc_size;
> -
> -       /** Memory zone **/
> -       const struct rte_memzone *desc_ring_mz;
> -       const struct rte_memzone *info_mz;
> -       struct rte_mempool *mpool;
> -};
> -
> -/** Receive Header */
> -union octeon_rh {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -       uint64_t rh64;
> -       struct  {
> -               uint64_t opcode : 4;
> -               uint64_t subcode : 8;
> -               uint64_t len : 3; /** additional 64-bit words */
> -               uint64_t reserved : 17;
> -               uint64_t ossp : 32; /** opcode/subcode specific parameters */
> -       } r;
> -       struct  {
> -               uint64_t opcode : 4;
> -               uint64_t subcode : 8;
> -               uint64_t len : 3; /** additional 64-bit words */
> -               uint64_t extra : 28;
> -               uint64_t vlan : 12;
> -               uint64_t priority : 3;
> -               uint64_t csum_verified : 3; /** checksum verified. */
> -               uint64_t has_hwtstamp : 1; /** Has hardware timestamp.1 = yes.*/
> -               uint64_t encap_on : 1;
> -               uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
> -       } r_dh;
> -       struct {
> -               uint64_t opcode : 4;
> -               uint64_t subcode : 8;
> -               uint64_t len : 3; /** additional 64-bit words */
> -               uint64_t reserved : 8;
> -               uint64_t extra : 25;
> -               uint64_t gmxport : 16;
> -       } r_nic_info;
> -#else
> -       uint64_t rh64;
> -       struct {
> -               uint64_t ossp : 32; /** opcode/subcode specific parameters */
> -               uint64_t reserved : 17;
> -               uint64_t len : 3; /** additional 64-bit words */
> -               uint64_t subcode : 8;
> -               uint64_t opcode : 4;
> -       } r;
> -       struct {
> -               uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
> -               uint64_t encap_on : 1;
> -               uint64_t has_hwtstamp : 1;  /** 1 = has hwtstamp */
> -               uint64_t csum_verified : 3; /** checksum verified. */
> -               uint64_t priority : 3;
> -               uint64_t vlan : 12;
> -               uint64_t extra : 28;
> -               uint64_t len : 3; /** additional 64-bit words */
> -               uint64_t subcode : 8;
> -               uint64_t opcode : 4;
> -       } r_dh;
> -       struct {
> -               uint64_t gmxport : 16;
> -               uint64_t extra : 25;
> -               uint64_t reserved : 8;
> -               uint64_t len : 3; /** additional 64-bit words */
> -               uint64_t subcode : 8;
> -               uint64_t opcode : 4;
> -       } r_nic_info;
> -#endif
> -};
> -
> -#define OCTEON_RH_SIZE (sizeof(union octeon_rh))
> -
> -/** The txpciq info passed to host from the firmware */
> -union octeon_txpciq {
> -       uint64_t txpciq64;
> -
> -       struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               uint64_t q_no : 8;
> -               uint64_t port : 8;
> -               uint64_t pkind : 6;
> -               uint64_t use_qpg : 1;
> -               uint64_t qpg : 11;
> -               uint64_t aura_num : 10;
> -               uint64_t reserved : 20;
> -#else
> -               uint64_t reserved : 20;
> -               uint64_t aura_num : 10;
> -               uint64_t qpg : 11;
> -               uint64_t use_qpg : 1;
> -               uint64_t pkind : 6;
> -               uint64_t port : 8;
> -               uint64_t q_no : 8;
> -#endif
> -       } s;
> -};
> -
> -/** The instruction (input) queue.
> - *  The input queue is used to post raw (instruction) mode data or packet
> - *  data to Octeon device from the host. Each input queue for
> - *  a LIO device has one such structure to represent it.
> - */
> -struct lio_instr_queue {
> -       /** A spinlock to protect access to the input ring.  */
> -       rte_spinlock_t lock;
> -
> -       rte_spinlock_t post_lock;
> -
> -       struct lio_device *lio_dev;
> -
> -       uint32_t pkt_in_done;
> -
> -       rte_atomic64_t iq_flush_running;
> -
> -       /** Flag that indicates if the queue uses 64 byte commands. */
> -       uint32_t iqcmd_64B:1;
> -
> -       /** Queue info. */
> -       union octeon_txpciq txpciq;
> -
> -       uint32_t rsvd:17;
> -
> -       uint32_t status:8;
> -
> -       /** Number of  descriptors in this ring. */
> -       uint32_t nb_desc;
> -
> -       /** Index in input ring where the driver should write the next packet */
> -       uint32_t host_write_index;
> -
> -       /** Index in input ring where Octeon is expected to read the next
> -        *  packet.
> -        */
> -       uint32_t lio_read_index;
> -
> -       /** This index aids in finding the window in the queue where Octeon
> -        *  has read the commands.
> -        */
> -       uint32_t flush_index;
> -
> -       /** This field keeps track of the instructions pending in this queue. */
> -       rte_atomic64_t instr_pending;
> -
> -       /** Pointer to the Virtual Base addr of the input ring. */
> -       uint8_t *base_addr;
> -
> -       struct lio_request_list *request_list;
> -
> -       /** Octeon doorbell register for the ring. */
> -       void *doorbell_reg;
> -
> -       /** Octeon instruction count register for this ring. */
> -       void *inst_cnt_reg;
> -
> -       /** Number of instructions pending to be posted to Octeon. */
> -       uint32_t fill_cnt;
> -
> -       /** Statistics for this input queue. */
> -       struct lio_iq_stats stats;
> -
> -       /** DMA mapped base address of the input descriptor ring. */
> -       uint64_t base_addr_dma;
> -
> -       /** Application context */
> -       void *app_ctx;
> -
> -       /* network stack queue index */
> -       int q_index;
> -
> -       /* Memory zone */
> -       const struct rte_memzone *iq_mz;
> -};
> -
> -/** This structure is used by the driver to store information required
> - *  to free the mbuf when the packet has been fetched by Octeon.
> - *  Byte offsets below assume the worst case of a 64-bit system.
> - */
> -struct lio_buf_free_info {
> -       /** Bytes 1-8. Pointer to network device private structure. */
> -       struct lio_device *lio_dev;
> -
> -       /** Bytes 9-16. Pointer to mbuf. */
> -       struct rte_mbuf *mbuf;
> -
> -       /** Bytes 17-24. Pointer to gather list. */
> -       struct lio_gather *g;
> -
> -       /** Bytes 25-32. Physical address of mbuf->data or gather list. */
> -       uint64_t dptr;
> -
> -       /** Bytes 33-47. Piggybacked soft command, if any */
> -       struct lio_soft_command *sc;
> -
> -       /** Bytes 48-63. iq no */
> -       uint64_t iq_no;
> -};
> -
> -/* The Scatter-Gather List Entry. The scatter or gather component used with
> - * input instruction has this format.
> - */
> -struct lio_sg_entry {
> -       /** The first 64 bit gives the size of data in each dptr. */
> -       union {
> -               uint16_t size[4];
> -               uint64_t size64;
> -       } u;
> -
> -       /** The 4 dptr pointers for this entry. */
> -       uint64_t ptr[4];
> -};
> -
> -#define LIO_SG_ENTRY_SIZE      (sizeof(struct lio_sg_entry))
> -
> -/** Structure of a node in list of gather components maintained by
> - *  driver for each network device.
> - */
> -struct lio_gather {
> -       /** List manipulation. Next and prev pointers. */
> -       struct lio_stailq_node list;
> -
> -       /** Size of the gather component at sg in bytes. */
> -       int sg_size;
> -
> -       /** Number of bytes that sg was adjusted to make it 8B-aligned. */
> -       int adjust;
> -
> -       /** Gather component that can accommodate max sized fragment list
> -        *  received from the IP layer.
> -        */
> -       struct lio_sg_entry *sg;
> -};
> -
> -struct lio_rss_ctx {
> -       uint16_t hash_key_size;
> -       uint8_t  hash_key[LIO_RSS_MAX_KEY_SZ];
> -       /* Ideally a factor of number of queues */
> -       uint8_t  itable[LIO_RSS_MAX_TABLE_SZ];
> -       uint8_t  itable_size;
> -       uint8_t  ip;
> -       uint8_t  tcp_hash;
> -       uint8_t  ipv6;
> -       uint8_t  ipv6_tcp_hash;
> -       uint8_t  ipv6_ex;
> -       uint8_t  ipv6_tcp_ex_hash;
> -       uint8_t  hash_disable;
> -};
> -
> -struct lio_io_enable {
> -       uint64_t iq;
> -       uint64_t oq;
> -       uint64_t iq64B;
> -};
> -
> -struct lio_fn_list {
> -       void (*setup_iq_regs)(struct lio_device *, uint32_t);
> -       void (*setup_oq_regs)(struct lio_device *, uint32_t);
> -
> -       int (*setup_mbox)(struct lio_device *);
> -       void (*free_mbox)(struct lio_device *);
> -
> -       int (*setup_device_regs)(struct lio_device *);
> -       int (*enable_io_queues)(struct lio_device *);
> -       void (*disable_io_queues)(struct lio_device *);
> -};
> -
> -struct lio_pf_vf_hs_word {
> -#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> -       /** PKIND value assigned for the DPI interface */
> -       uint64_t pkind : 8;
> -
> -       /** OCTEON core clock multiplier */
> -       uint64_t core_tics_per_us : 16;
> -
> -       /** OCTEON coprocessor clock multiplier */
> -       uint64_t coproc_tics_per_us : 16;
> -
> -       /** app that currently running on OCTEON */
> -       uint64_t app_mode : 8;
> -
> -       /** RESERVED */
> -       uint64_t reserved : 16;
> -
> -#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -
> -       /** RESERVED */
> -       uint64_t reserved : 16;
> -
> -       /** app that currently running on OCTEON */
> -       uint64_t app_mode : 8;
> -
> -       /** OCTEON coprocessor clock multiplier */
> -       uint64_t coproc_tics_per_us : 16;
> -
> -       /** OCTEON core clock multiplier */
> -       uint64_t core_tics_per_us : 16;
> -
> -       /** PKIND value assigned for the DPI interface */
> -       uint64_t pkind : 8;
> -#endif
> -};
> -
> -struct lio_sriov_info {
> -       /** Number of rings assigned to VF */
> -       uint32_t rings_per_vf;
> -
> -       /** Number of VF devices enabled */
> -       uint32_t num_vfs;
> -};
> -
> -/* Head of a response list */
> -struct lio_response_list {
> -       /** List structure to add delete pending entries to */
> -       struct lio_stailq_head head;
> -
> -       /** A lock for this response list */
> -       rte_spinlock_t lock;
> -
> -       rte_atomic64_t pending_req_count;
> -};
> -
> -/* Structure to define the configuration attributes for each Input queue. */
> -struct lio_iq_config {
> -       /* Max number of IQs available */
> -       uint8_t max_iqs;
> -
> -       /** Pending list size (usually set to the sum of the size of all Input
> -        *  queues)
> -        */
> -       uint32_t pending_list_size;
> -
> -       /** Command size - 32 or 64 bytes */
> -       uint32_t instr_type;
> -};
> -
> -/* Structure to define the configuration attributes for each Output queue. */
> -struct lio_oq_config {
> -       /* Max number of OQs available */
> -       uint8_t max_oqs;
> -
> -       /** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
> -       uint32_t info_ptr;
> -
> -       /** The number of buffers that were consumed during packet processing by
> -        *  the driver on this Output queue before the driver attempts to
> -        *  replenish the descriptor ring with new buffers.
> -        */
> -       uint32_t refill_threshold;
> -};
> -
> -/* Structure to define the configuration. */
> -struct lio_config {
> -       uint16_t card_type;
> -       const char *card_name;
> -
> -       /** Input Queue attributes. */
> -       struct lio_iq_config iq;
> -
> -       /** Output Queue attributes. */
> -       struct lio_oq_config oq;
> -
> -       int num_nic_ports;
> -
> -       int num_def_tx_descs;
> -
> -       /* Num of desc for rx rings */
> -       int num_def_rx_descs;
> -
> -       int def_rx_buf_size;
> -};
> -
> -/** Status of a RGMII Link on Octeon as seen by core driver. */
> -union octeon_link_status {
> -       uint64_t link_status64;
> -
> -       struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               uint64_t duplex : 8;
> -               uint64_t mtu : 16;
> -               uint64_t speed : 16;
> -               uint64_t link_up : 1;
> -               uint64_t autoneg : 1;
> -               uint64_t if_mode : 5;
> -               uint64_t pause : 1;
> -               uint64_t flashing : 1;
> -               uint64_t reserved : 15;
> -#else
> -               uint64_t reserved : 15;
> -               uint64_t flashing : 1;
> -               uint64_t pause : 1;
> -               uint64_t if_mode : 5;
> -               uint64_t autoneg : 1;
> -               uint64_t link_up : 1;
> -               uint64_t speed : 16;
> -               uint64_t mtu : 16;
> -               uint64_t duplex : 8;
> -#endif
> -       } s;
> -};
> -
> -/** The rxpciq info passed to host from the firmware */
> -union octeon_rxpciq {
> -       uint64_t rxpciq64;
> -
> -       struct {
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -               uint64_t q_no : 8;
> -               uint64_t reserved : 56;
> -#else
> -               uint64_t reserved : 56;
> -               uint64_t q_no : 8;
> -#endif
> -       } s;
> -};
> -
> -/** Information for a OCTEON ethernet interface shared between core & host. */
> -struct octeon_link_info {
> -       union octeon_link_status link;
> -       uint64_t hw_addr;
> -
> -#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> -       uint64_t gmxport : 16;
> -       uint64_t macaddr_is_admin_assigned : 1;
> -       uint64_t vlan_is_admin_assigned : 1;
> -       uint64_t rsvd : 30;
> -       uint64_t num_txpciq : 8;
> -       uint64_t num_rxpciq : 8;
> -#else
> -       uint64_t num_rxpciq : 8;
> -       uint64_t num_txpciq : 8;
> -       uint64_t rsvd : 30;
> -       uint64_t vlan_is_admin_assigned : 1;
> -       uint64_t macaddr_is_admin_assigned : 1;
> -       uint64_t gmxport : 16;
> -#endif
> -
> -       union octeon_txpciq txpciq[LIO_MAX_IOQS_PER_IF];
> -       union octeon_rxpciq rxpciq[LIO_MAX_IOQS_PER_IF];
> -};
> -
> -/* -----------------------  THE LIO DEVICE  --------------------------- */
> -/** The lio device.
> - *  Each lio device has this structure to represent all its
> - *  components.
> - */
> -struct lio_device {
> -       /** PCI device pointer */
> -       struct rte_pci_device *pci_dev;
> -
> -       /** Octeon Chip type */
> -       uint16_t chip_id;
> -       uint16_t pf_num;
> -       uint16_t vf_num;
> -
> -       /** This device's PCIe port used for traffic. */
> -       uint16_t pcie_port;
> -
> -       /** The state of this device */
> -       rte_atomic64_t status;
> -
> -       uint8_t intf_open;
> -
> -       struct octeon_link_info linfo;
> -
> -       uint8_t *hw_addr;
> -
> -       struct lio_fn_list fn_list;
> -
> -       uint32_t num_iqs;
> -
> -       /** Guards each glist */
> -       rte_spinlock_t *glist_lock;
> -       /** Array of gather component linked lists */
> -       struct lio_stailq_head *glist_head;
> -
> -       /* The pool containing pre allocated buffers used for soft commands */
> -       struct rte_mempool *sc_buf_pool;
> -
> -       /** The input instruction queues */
> -       struct lio_instr_queue *instr_queue[LIO_MAX_POSSIBLE_INSTR_QUEUES];
> -
> -       /** The singly-linked tail queues of instruction response */
> -       struct lio_response_list response_list;
> -
> -       uint32_t num_oqs;
> -
> -       /** The DROQ output queues  */
> -       struct lio_droq *droq[LIO_MAX_POSSIBLE_OUTPUT_QUEUES];
> -
> -       struct lio_io_enable io_qmask;
> -
> -       struct lio_sriov_info sriov_info;
> -
> -       struct lio_pf_vf_hs_word pfvf_hsword;
> -
> -       /** Mail Box details of each lio queue. */
> -       struct lio_mbox **mbox;
> -
> -       char dev_string[LIO_DEVICE_NAME_LEN]; /* Device print string */
> -
> -       const struct lio_config *default_config;
> -
> -       struct rte_eth_dev      *eth_dev;
> -
> -       uint64_t ifflags;
> -       uint8_t max_rx_queues;
> -       uint8_t max_tx_queues;
> -       uint8_t nb_rx_queues;
> -       uint8_t nb_tx_queues;
> -       uint8_t port_configured;
> -       struct lio_rss_ctx rss_state;
> -       uint16_t port_id;
> -       char firmware_version[LIO_FW_VERSION_LENGTH];
> -};
> -#endif /* _LIO_STRUCT_H_ */
> diff --git a/drivers/net/liquidio/meson.build b/drivers/net/liquidio/meson.build
> deleted file mode 100644
> index ebadbf3dea..0000000000
> --- a/drivers/net/liquidio/meson.build
> +++ /dev/null
> @@ -1,16 +0,0 @@
> -# SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2018 Intel Corporation
> -
> -if is_windows
> -    build = false
> -    reason = 'not supported on Windows'
> -    subdir_done()
> -endif
> -
> -sources = files(
> -        'base/lio_23xx_vf.c',
> -        'base/lio_mbox.c',
> -        'lio_ethdev.c',
> -        'lio_rxtx.c',
> -)
> -includes += include_directories('base')
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index b1df17ce8c..f68bbc27a7 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -36,7 +36,6 @@ drivers = [
>          'ipn3ke',
>          'ixgbe',
>          'kni',
> -        'liquidio',
>          'mana',
>          'memif',
>          'mlx4',
> --
> 2.40.1
>

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] net/bonding: replace master/slave to main/member
  @ 2023-05-17 14:52  1% ` Stephen Hemminger
  2023-05-18  6:32  1% ` [PATCH v2] " Chaoyong He
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-17 14:52 UTC (permalink / raw)
  To: Chaoyong He; +Cc: dev, oss-drivers, niklas.soderlund, Long Wu, James Hershaw

[-- Attachment #1: Type: text/plain, Size: 2214 bytes --]

On Wed, 17 May 2023 14:59:05 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:

> This patch replaces the usage of the word 'master/slave' with more
> appropriate word 'main/member' in bonding PMD as well as in its docs
> and examples. Also the test app and testpmd were modified to use the
> new wording.
> 
> The bonding PMD's public API was modified according to the changes
> in word:
> rte_eth_bond_8023ad_slave_info is now called
> rte_eth_bond_8023ad_member_info,
> rte_eth_bond_active_slaves_get is now called
> rte_eth_bond_active_members_get,
> rte_eth_bond_slave_add is now called
> rte_eth_bond_member_add,
> rte_eth_bond_slave_remove is now called
> rte_eth_bond_member_remove,
> rte_eth_bond_slaves_get is now called
> rte_eth_bond_members_get.
> 
> Also the macro RTE_ETH_DEV_BONDED_SLAVE was renamed to
> RTE_ETH_DEV_BONDED_MEMBER.
> 
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> Reviewed-by: James Hershaw <james.hershaw@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> ---

This looks great.

I had started on this and chose the new names as parent and child,
but that choice was arbitrary.  Did some background research and

============ ================== ============== ===============
Origin       Feature Name       Aggregate Name Device Name
============ ================== ============== ===============
IEEE 802.1AX Link Aggregation   aggregator     port
Linux        Bonding            master         slave
FreeBSD      Link Aggregate     lagg           laggport
Windows      Teaming            team
OpenVswitch  Bonding            bond           members
Solaris      Link Aggregate     aggregation    datalink
Cisco        EtherChannel       group          channel
Juniper      Aggregate Ethernet lag interface  lag link
Arista       Port Channel       group          channel
SONiC        LAG                portchannel    member
============ ================== ============== ===============


You also need to modify how this is done since it ends up
being an API change.
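
One common way to handle such a rename without breaking existing
applications is to keep the old name as a deprecated static inline
wrapper around the new symbol for a deprecation period, so only the new
name stays in the shared-library ABI. A minimal sketch of that approach
(prototypes and names here are illustrative assumptions, not the final
bonding API):

	#include <stdint.h>
	#include <rte_common.h>		/* __rte_deprecated */

	/* Assumed new prototype exported by the bonding PMD after the rename. */
	int rte_eth_bond_members_get(uint16_t bonding_port_id,
				     uint16_t members[], uint16_t len);

	/*
	 * Old name kept as a deprecated inline wrapper so existing
	 * applications keep compiling (with a build-time warning), while
	 * the shared library only exports the new symbol.
	 */
	__rte_deprecated
	static inline int
	rte_eth_bond_slaves_get(uint16_t bonding_port_id,
				uint16_t slaves[], uint16_t len)
	{
		return rte_eth_bond_members_get(bonding_port_id, slaves, len);
	}

How long such wrappers stay around is a separate question for the
deprecation notice.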

My version of the patch had some of that, if you want here it is.

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0005-bonding-replace-use-of-slave-device-with-child-devic.patch --]
[-- Type: text/x-patch, Size: 580804 bytes --]

From 25aea59871533585bbaa18bdf7757e48aecb5380 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 30 Mar 2023 10:24:03 -0700
Subject: [PATCH 05/12] bonding: replace use of slave device with child device

The term slave is inherited from the Linux bonding device and does not
conform to the Linux Foundation Non-Inclusive Naming policy.
Other networking products, operating systems, and 802 standards
do not use the terms master or slave.

For DPDK, change to using the terms parent and child when
referring to devices that are managed by a bond device.

Mark the old visible APIs as deprecated and remove them
from the ABI.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 app/test-pmd/testpmd.c                        |  112 +-
 app/test-pmd/testpmd.h                        |    8 +-
 app/test/test_link_bonding.c                  | 2724 ++++++++---------
 app/test/test_link_bonding_mode4.c            |  584 ++--
 app/test/test_link_bonding_rssconf.c          |  166 +-
 doc/guides/howto/lm_bond_virtio_sriov.rst     |   24 +-
 doc/guides/nics/bnxt.rst                      |    4 +-
 .../link_bonding_poll_mode_drv_lib.rst        |  222 +-
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |    4 +-
 drivers/net/bonding/bonding_testpmd.c         |  178 +-
 drivers/net/bonding/eth_bond_8023ad_private.h |   40 +-
 drivers/net/bonding/eth_bond_private.h        |  116 +-
 drivers/net/bonding/rte_eth_bond.h            |  102 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  370 +--
 drivers/net/bonding/rte_eth_bond_8023ad.h     |   66 +-
 drivers/net/bonding/rte_eth_bond_alb.c        |   44 +-
 drivers/net/bonding/rte_eth_bond_alb.h        |   20 +-
 drivers/net/bonding/rte_eth_bond_api.c        |  464 +--
 drivers/net/bonding/rte_eth_bond_args.c       |   32 +-
 drivers/net/bonding/rte_eth_bond_flow.c       |   54 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        | 1368 ++++-----
 drivers/net/bonding/version.map               |   10 +-
 examples/bond/main.c                          |   40 +-
 lib/ethdev/rte_ethdev.h                       |    6 +-
 24 files changed, 3391 insertions(+), 3367 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 5cb6f9252395..64465d0f151d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -588,27 +588,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 }
 
 static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_child_port_status(portid_t bond_pid, bool is_stop)
 {
 #ifdef RTE_NET_BOND
 
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
+	portid_t child_pids[RTE_MAX_ETHPORTS];
 	struct rte_port *port;
-	int num_slaves;
-	portid_t slave_pid;
+	int num_children;
+	portid_t child_pid;
 	int i;
 
-	num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+	num_children = rte_eth_bond_children_get(bond_pid, child_pids,
 						RTE_MAX_ETHPORTS);
-	if (num_slaves < 0) {
-		fprintf(stderr, "Failed to get slave list for port = %u\n",
+	if (num_children < 0) {
+		fprintf(stderr, "Failed to get child list for port = %u\n",
 			bond_pid);
-		return num_slaves;
+		return num_children;
 	}
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		port = &ports[slave_pid];
+	for (i = 0; i < num_children; i++) {
+		child_pid = child_pids[i];
+		port = &ports[child_pid];
 		port->port_status =
 			is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
 	}
@@ -632,12 +632,12 @@ eth_dev_start_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Starting a bonded port also starts all slaves under the bonded
+		 * Starting a bonded port also starts all children under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these children.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, false);
+			return change_bonding_child_port_status(port_id, false);
 	}
 
 	return 0;
@@ -656,12 +656,12 @@ eth_dev_stop_mp(uint16_t port_id)
 		struct rte_port *port = &ports[port_id];
 
 		/*
-		 * Stopping a bonded port also stops all slaves under the bonded
+		 * Stopping a bonded port also stops all children under the bonded
 		 * device. So if this port is bond device, we need to modify the
-		 * port status of these slaves.
+		 * port status of these children.
 		 */
 		if (port->bond_flag == 1)
-			return change_bonding_slave_port_status(port_id, true);
+			return change_bonding_child_port_status(port_id, true);
 	}
 
 	return 0;
@@ -2610,7 +2610,7 @@ all_ports_started(void)
 		port = &ports[pi];
 		/* Check if there is a port which is not started */
 		if ((port->port_status != RTE_PORT_STARTED) &&
-			(port->slave_flag == 0))
+			(port->child_flag == 0))
 			return 0;
 	}
 
@@ -2624,7 +2624,7 @@ port_is_stopped(portid_t port_id)
 	struct rte_port *port = &ports[port_id];
 
 	if ((port->port_status != RTE_PORT_STOPPED) &&
-	    (port->slave_flag == 0))
+	    (port->child_flag == 0))
 		return 0;
 	return 1;
 }
@@ -2970,8 +2970,8 @@ fill_xstats_display_info(void)
 
 /*
  * Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no child is added. And its capability
+ * will be updated when add a new child device. So adding a child device need
  * to update the port configurations of bonding device.
  */
 static void
@@ -3028,7 +3028,7 @@ start_port(portid_t pid)
 		if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
 			continue;
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_child(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3350,7 +3350,7 @@ stop_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_child(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3439,28 +3439,28 @@ flush_port_owned_resources(portid_t pi)
 }
 
 static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_child_device(portid_t *child_pids, uint16_t num_children)
 {
 	struct rte_port *port;
-	portid_t slave_pid;
+	portid_t child_pid;
 	uint16_t i;
 
-	for (i = 0; i < num_slaves; i++) {
-		slave_pid = slave_pids[i];
-		if (port_is_started(slave_pid) == 1) {
-			if (rte_eth_dev_stop(slave_pid) != 0)
+	for (i = 0; i < num_children; i++) {
+		child_pid = child_pids[i];
+		if (port_is_started(child_pid) == 1) {
+			if (rte_eth_dev_stop(child_pid) != 0)
 				fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
-					slave_pid);
+					child_pid);
 
-			port = &ports[slave_pid];
+			port = &ports[child_pid];
 			port->port_status = RTE_PORT_STOPPED;
 		}
 
-		clear_port_slave_flag(slave_pid);
+		clear_port_child_flag(child_pid);
 
-		/* Close slave device when testpmd quit or is killed. */
+		/* Close child device when testpmd quit or is killed. */
 		if (cl_quit == 1 || f_quit == 1)
-			rte_eth_dev_close(slave_pid);
+			rte_eth_dev_close(child_pid);
 	}
 }
 
@@ -3469,8 +3469,8 @@ close_port(portid_t pid)
 {
 	portid_t pi;
 	struct rte_port *port;
-	portid_t slave_pids[RTE_MAX_ETHPORTS];
-	int num_slaves = 0;
+	portid_t child_pids[RTE_MAX_ETHPORTS];
+	int num_children = 0;
 
 	if (port_id_is_invalid(pid, ENABLED_WARN))
 		return;
@@ -3488,7 +3488,7 @@ close_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_child(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -3505,17 +3505,17 @@ close_port(portid_t pid)
 			flush_port_owned_resources(pi);
 #ifdef RTE_NET_BOND
 			if (port->bond_flag == 1)
-				num_slaves = rte_eth_bond_slaves_get(pi,
-						slave_pids, RTE_MAX_ETHPORTS);
+				num_children = rte_eth_bond_children_get(pi,
+						child_pids, RTE_MAX_ETHPORTS);
 #endif
 			rte_eth_dev_close(pi);
 			/*
-			 * If this port is bonded device, all slaves under the
+			 * If this port is bonded device, all children under the
 			 * device need to be removed or closed.
 			 */
-			if (port->bond_flag == 1 && num_slaves > 0)
-				clear_bonding_slave_device(slave_pids,
-							num_slaves);
+			if (port->bond_flag == 1 && num_children > 0)
+				clear_bonding_child_device(child_pids,
+							num_children);
 		}
 
 		free_xstats_display_info(pi);
@@ -3555,7 +3555,7 @@ reset_port(portid_t pid)
 			continue;
 		}
 
-		if (port_is_bonding_slave(pi)) {
+		if (port_is_bonding_child(pi)) {
 			fprintf(stderr,
 				"Please remove port %d from bonded device.\n",
 				pi);
@@ -4203,38 +4203,38 @@ init_port_config(void)
 	}
 }
 
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_child_flag(portid_t child_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 1;
+	port = &ports[child_pid];
+	port->child_flag = 1;
 }
 
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_child_flag(portid_t child_pid)
 {
 	struct rte_port *port;
 
-	port = &ports[slave_pid];
-	port->slave_flag = 0;
+	port = &ports[child_pid];
+	port->child_flag = 0;
 }
 
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_child(portid_t child_pid)
 {
 	struct rte_port *port;
 	struct rte_eth_dev_info dev_info;
 	int ret;
 
-	port = &ports[slave_pid];
-	ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+	port = &ports[child_pid];
+	ret = eth_dev_info_get_print_err(child_pid, &dev_info);
 	if (ret != 0) {
 		TESTPMD_LOG(ERR,
 			"Failed to get device info for port id %d,"
-			"cannot determine if the port is a bonded slave",
-			slave_pid);
+			"cannot determine if the port is a bonded child",
+			child_pid);
 		return 0;
 	}
-	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_SLAVE) || (port->slave_flag == 1))
+	if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDED_CHILD) || (port->child_flag == 1))
 		return 1;
 	return 0;
 }
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bdfbfd36d3c5..51cf600dc49e 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -321,7 +321,7 @@ struct rte_port {
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
-	uint8_t                 slave_flag : 1, /**< bonding slave port */
+	uint8_t                 child_flag : 1, /**< bonding child port */
 				bond_flag : 1, /**< port is bond device */
 				fwd_mac_swap : 1, /**< swap packet MAC before forward */
 				update_conf : 1; /**< need to update bonding device configuration */
@@ -1082,9 +1082,9 @@ void stop_packet_forwarding(void);
 void dev_set_link_up(portid_t pid);
 void dev_set_link_down(portid_t pid);
 void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_child_flag(portid_t child_pid);
+void clear_port_child_flag(portid_t child_pid);
+uint8_t port_is_bonding_child(portid_t child_pid);
 
 int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
 		     enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 5c496352c2b3..a0e1e8e833fe 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
 #define INVALID_BONDING_MODE	(-1)
 
 
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t child_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
 uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
 
 struct link_bonding_unittest_params {
 	int16_t bonded_port_id;
-	int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
-	uint16_t bonded_slave_count;
+	int16_t child_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+	uint16_t bonded_child_count;
 	uint8_t bonding_mode;
 
 	uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
 
 	struct rte_mempool *mbuf_pool;
 
-	struct rte_ether_addr *default_slave_mac;
+	struct rte_ether_addr *default_child_mac;
 	struct rte_ether_addr *default_bonded_mac;
 
 	/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
 
 static struct link_bonding_unittest_params default_params  = {
 	.bonded_port_id = -1,
-	.slave_port_ids = { -1 },
-	.bonded_slave_count = 0,
+	.child_port_ids = { -1 },
+	.bonded_child_count = 0,
 	.bonding_mode = BONDING_MODE_ROUND_ROBIN,
 
 	.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params  = {
 
 	.mbuf_pool = NULL,
 
-	.default_slave_mac = (struct rte_ether_addr *)slave_mac,
+	.default_child_mac = (struct rte_ether_addr *)child_mac,
 	.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
 
 	.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
 	return 0;
 }
 
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int children_initialized;
+static int mac_children_initialized;
 
 static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
 test_setup(void)
 {
 	int i, nb_mbuf_per_pool;
-	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+	struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)child_mac;
 
 	/* Allocate ethernet packet header with space for VLAN header */
 	if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
 	}
 
 	/* Create / Initialize virtual eth devs */
-	if (!slaves_initialized) {
+	if (!children_initialized) {
 		for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
@@ -243,16 +243,16 @@ test_setup(void)
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
 
-			test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+			test_params->child_port_ids[i] = virtual_ethdev_create(pmd_name,
 					mac_addr, rte_socket_id(), 1);
-			TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+			TEST_ASSERT(test_params->child_port_ids[i] >= 0,
 					"Failed to create virtual virtual ethdev %s", pmd_name);
 
 			TEST_ASSERT_SUCCESS(configure_ethdev(
-					test_params->slave_port_ids[i], 1, 0),
+					test_params->child_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s", pmd_name);
 		}
-		slaves_initialized = 1;
+		children_initialized = 1;
 	}
 
 	return 0;
@@ -261,9 +261,9 @@ test_setup(void)
 static int
 test_create_bonded_device(void)
 {
-	int current_slave_count;
+	int current_child_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
 	/* Don't try to recreate bonded device if re-running test suite*/
 	if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
 			test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
 			test_params->bonded_port_id, test_params->bonding_mode);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_child_count, 0,
+			"Number of children %d is great than expected %d.",
+			current_child_count, 0);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+	current_child_count = rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves %d is great than expected %d.",
-			current_slave_count, 0);
+	TEST_ASSERT_EQUAL(current_child_count, 0,
+			"Number of active children %d is great than expected %d.",
+			current_child_count, 0);
 
 	return 0;
 }
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
 }
 
 static int
-test_add_slave_to_bonded_device(void)
+test_add_child_to_bonded_device(void)
 {
-	int current_slave_count;
+	int current_child_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave (%d) to bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params->bonded_port_id,
+			test_params->child_port_ids[test_params->bonded_child_count]),
+			"Failed to add child (%d) to bonded port (%d).",
+			test_params->child_port_ids[test_params->bonded_child_count],
 			test_params->bonded_port_id);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
-			"Number of slaves (%d) is greater than expected (%d).",
-			current_slave_count, test_params->bonded_slave_count + 1);
+	current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count + 1,
+			"Number of children (%d) is greater than expected (%d).",
+			current_child_count, test_params->bonded_child_count + 1);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-					"Number of active slaves (%d) is not as expected (%d).\n",
-					current_slave_count, 0);
+	current_child_count = rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_child_count, 0,
+					"Number of active children (%d) is not as expected (%d).\n",
+					current_child_count, 0);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_child_count++;
 
 	return 0;
 }
 
 static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_child_to_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_child_add(test_params->bonded_port_id + 5,
+			test_params->child_port_ids[test_params->bonded_child_count]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
+	TEST_ASSERT_FAIL(rte_eth_bond_child_add(test_params->child_port_ids[0],
+			test_params->child_port_ids[test_params->bonded_child_count]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
 
 
 static int
-test_remove_slave_from_bonded_device(void)
+test_remove_child_from_bonded_device(void)
 {
-	int current_slave_count;
+	int current_child_count;
 	struct rte_ether_addr read_mac_addr, *mac_addr;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]),
-			"Failed to remove slave %d from bonded port (%d).",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+	TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(test_params->bonded_port_id,
+			test_params->child_port_ids[test_params->bonded_child_count-1]),
+			"Failed to remove child %d from bonded port (%d).",
+			test_params->child_port_ids[test_params->bonded_child_count-1],
 			test_params->bonded_port_id);
 
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
-			"Number of slaves (%d) is great than expected (%d).\n",
-			current_slave_count, test_params->bonded_slave_count - 1);
+	TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count - 1,
+			"Number of children (%d) is great than expected (%d).\n",
+			current_child_count, test_params->bonded_child_count - 1);
 
 
-	mac_addr = (struct rte_ether_addr *)slave_mac;
+	mac_addr = (struct rte_ether_addr *)child_mac;
 	mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
-			test_params->bonded_slave_count-1;
+			test_params->bonded_child_count-1;
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			test_params->child_port_ids[test_params->bonded_child_count-1],
 			&read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->child_port_ids[test_params->bonded_child_count-1]);
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+			test_params->child_port_ids[test_params->bonded_child_count-1]);
 
 	virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
 			0);
 
-	test_params->bonded_slave_count--;
+	test_params->bonded_child_count--;
 
 	return 0;
 }
 
 static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_child_from_invalid_bonded_device(void)
 {
 	/* Invalid port ID */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+	TEST_ASSERT_FAIL(rte_eth_bond_child_remove(
 			test_params->bonded_port_id + 5,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+			test_params->child_port_ids[test_params->bonded_child_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
-			test_params->slave_port_ids[0],
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+	TEST_ASSERT_FAIL(rte_eth_bond_child_remove(
+			test_params->child_port_ids[0],
+			test_params->child_port_ids[test_params->bonded_child_count - 1]),
 			"Expected call to failed as invalid port specified.");
 
 	return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
 static int bonded_id = 2;
 
 static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_child_to_bonded_device(void)
 {
-	int port_id, current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int port_id, current_child_count;
+	uint16_t children[RTE_MAX_ETHPORTS];
 	char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-	test_add_slave_to_bonded_device();
+	test_add_child_to_bonded_device();
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 1,
-			"Number of slaves (%d) is not that expected (%d).",
-			current_slave_count, 1);
+	current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_child_count, 1,
+			"Number of children (%d) is not that expected (%d).",
+			current_child_count, 1);
 
 	snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
 
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
 			rte_socket_id());
 	TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
 
-	TEST_ASSERT(rte_eth_bond_slave_add(port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+	TEST_ASSERT(rte_eth_bond_child_add(port_id,
+			test_params->child_port_ids[test_params->bonded_child_count - 1])
 			< 0,
-			"Added slave (%d) to bonded port (%d) unexpectedly.",
-			test_params->slave_port_ids[test_params->bonded_slave_count-1],
+			"Added child (%d) to bonded port (%d) unexpectedly.",
+			test_params->child_port_ids[test_params->bonded_child_count-1],
 			port_id);
 
-	return test_remove_slave_from_bonded_device();
+	return test_remove_child_from_bonded_device();
 }
 
 
 static int
-test_get_slaves_from_bonded_device(void)
+test_get_children_from_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_child_count;
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+			"Failed to add child to bonded device");
 
 	/* Invalid port id */
-	current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+	current_child_count = rte_eth_bond_children_get(INVALID_PORT_ID, children,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	TEST_ASSERT(current_child_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_child_count = rte_eth_bond_active_children_get(INVALID_PORT_ID,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_child_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	/* Invalid slaves pointer */
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+	/* Invalid children pointer */
+	current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
 			NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_child_count < 0,
+			"Invalid child array unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
+	current_child_count = rte_eth_bond_active_children_get(
 			test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
-			"Invalid slave array unexpectedly succeeded");
+	TEST_ASSERT(current_child_count < 0,
+			"Invalid child array unexpectedly succeeded");
 
 	/* non bonded device*/
-	current_slave_count = rte_eth_bond_slaves_get(
-			test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_child_count = rte_eth_bond_children_get(
+			test_params->child_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_child_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->slave_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
-	TEST_ASSERT(current_slave_count < 0,
+	current_child_count = rte_eth_bond_active_children_get(
+			test_params->child_port_ids[0],	NULL, RTE_MAX_ETHPORTS);
+	TEST_ASSERT(current_child_count < 0,
 			"Invalid port id unexpectedly succeeded");
 
-	TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-			"Failed to remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(test_remove_child_from_bonded_device(),
+			"Failed to remove children from bonded device");
 
 	return 0;
 }
 
 
 static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_children_to_from_bonded_device(void)
 {
 	int i;
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device");
+		TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+				"Failed to add child to bonded device");
 
 	for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"Failed to remove slaves from bonded device");
+		TEST_ASSERT_SUCCESS(test_remove_child_from_bonded_device(),
+				"Failed to remove children from bonded device");
 
 	return 0;
 }
 
 static void
-enable_bonded_slaves(void)
+enable_bonded_children(void)
 {
 	int i;
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		virtual_ethdev_tx_burst_fn_set_success(test_params->child_port_ids[i],
 				1);
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->child_port_ids[i], 1);
 	}
 }
 
@@ -556,34 +556,34 @@ test_start_bonded_device(void)
 {
 	struct rte_eth_link link_status;
 
-	int current_slave_count, current_bonding_mode, primary_port;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_child_count, current_bonding_mode, primary_port;
+	uint16_t children[RTE_MAX_ETHPORTS];
 	int retval;
 
-	/* Add slave to bonded device*/
-	TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-			"Failed to add slave to bonded device");
+	/* Add child to bonded device*/
+	TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+			"Failed to add child to bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
 	/* Change link status of virtual pmd so it will be added to the active
-	 * slave list of the bonded device*/
+	 * child list of the bonded device*/
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+			test_params->child_port_ids[test_params->bonded_child_count-1], 1);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count,
+			"Number of children (%d) is not expected value (%d).",
+			current_child_count, test_params->bonded_child_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_child_count = rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count,
+			"Number of active children (%d) is not expected value (%d).",
+			current_child_count, test_params->bonded_child_count);
 
 	current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
 	TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +591,9 @@ test_start_bonded_device(void)
 			current_bonding_mode, test_params->bonding_mode);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[0],
 			"Primary port (%d) is not expected value (%d).",
-			primary_port, test_params->slave_port_ids[0]);
+			primary_port, test_params->child_port_ids[0]);
 
 	retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
 	TEST_ASSERT(retval >= 0,
@@ -609,8 +609,8 @@ test_start_bonded_device(void)
 static int
 test_stop_bonded_device(void)
 {
-	int current_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int current_child_count;
+	uint16_t children[RTE_MAX_ETHPORTS];
 
 	struct rte_eth_link link_status;
 	int retval;
@@ -627,29 +627,29 @@ test_stop_bonded_device(void)
 			"Bonded port (%d) status (%d) is not expected value (%d).",
 			test_params->bonded_port_id, link_status.link_status, 0);
 
-	current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
-			"Number of slaves (%d) is not expected value (%d).",
-			current_slave_count, test_params->bonded_slave_count);
+	current_child_count = rte_eth_bond_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_child_count, test_params->bonded_child_count,
+			"Number of children (%d) is not expected value (%d).",
+			current_child_count, test_params->bonded_child_count);
 
-	current_slave_count = rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(current_slave_count, 0,
-			"Number of active slaves (%d) is not expected value (%d).",
-			current_slave_count, 0);
+	current_child_count = rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(current_child_count, 0,
+			"Number of active children (%d) is not expected value (%d).",
+			current_child_count, 0);
 
 	return 0;
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_children_and_stop_bonded_device(void)
 {
-	/* Clean up and remove slaves from bonded device */
+	/* Clean up and remove children from bonded device */
 	free_virtualpmd_tx_queue();
-	while (test_params->bonded_slave_count > 0)
-		TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
-				"test_remove_slave_from_bonded_device failed");
+	while (test_params->bonded_child_count > 0)
+		TEST_ASSERT_SUCCESS(test_remove_child_from_bonded_device(),
+				"test_remove_child_from_bonded_device failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -681,10 +681,10 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+		TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->child_port_ids[0],
 				bonding_modes[i]),
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->child_port_ids[0]);
 
 		TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 				bonding_modes[i]),
@@ -704,26 +704,26 @@ test_set_bonding_mode(void)
 				INVALID_PORT_ID);
 
 		/* Non bonded device */
-		bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+		bonding_mode = rte_eth_bond_mode_get(test_params->child_port_ids[0]);
 		TEST_ASSERT(bonding_mode < 0,
 				"Expected call to failed as invalid port (%d) specified.",
-				test_params->slave_port_ids[0]);
+				test_params->child_port_ids[0]);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
-test_set_primary_slave(void)
+test_set_primary_child(void)
 {
 	int i, j, retval;
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr *expected_mac_addr;
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.");
+	/* Add 4 children to bonded device */
+	for (i = test_params->bonded_child_count; i < 4; i++)
+		TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+				"Failed to add child to bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
 			BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +732,34 @@ test_set_primary_slave(void)
 
 	/* Invalid port ID */
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
-			test_params->slave_port_ids[i]),
+			test_params->child_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
 	/* Non bonded device */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
-			test_params->slave_port_ids[i]),
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->child_port_ids[i],
+			test_params->child_port_ids[i]),
 			"Expected call to failed as invalid port specified.");
 
-	/* Set slave as primary
-	 * Verify slave it is now primary slave
-	 * Verify that MAC address of bonded device is that of primary slave
-	 * Verify that MAC address of all bonded slaves are that of primary slave
+	/* Set child as primary
+	 * Verify child it is now primary child
+	 * Verify that MAC address of bonded device is that of primary child
+	 * Verify that MAC address of all bonded children are that of primary child
 	 */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-				test_params->slave_port_ids[i]),
+				test_params->child_port_ids[i]),
 				"Failed to set bonded port (%d) primary port to (%d)",
-				test_params->bonded_port_id, test_params->slave_port_ids[i]);
+				test_params->bonded_port_id, test_params->child_port_ids[i]);
 
 		retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
 		TEST_ASSERT(retval >= 0,
 				"Failed to read primary port from bonded port (%d)\n",
 					test_params->bonded_port_id);
 
-		TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+		TEST_ASSERT_EQUAL(retval, test_params->child_port_ids[i],
 				"Bonded port (%d) primary port (%d) not expected value (%d)\n",
 				test_params->bonded_port_id, retval,
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 
 		/* stop/start bonded eth dev to apply new MAC */
 		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +770,13 @@ test_set_primary_slave(void)
 				"Failed to start bonded port %d",
 				test_params->bonded_port_id);
 
-		expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+		expected_mac_addr = (struct rte_ether_addr *)&child_mac;
 		expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Check primary slave MAC */
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		/* Check primary child MAC */
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
@@ -789,16 +789,16 @@ test_set_primary_slave(void)
 				sizeof(read_mac_addr)),
 				"bonded port mac address not set to that of primary port\n");
 
-		/* Check other slaves MACs */
+		/* Check other children MACs */
 		for (j = 0; j < 4; j++) {
 			if (j != i) {
-				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+				TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[j],
 						&read_mac_addr),
 						"Failed to get mac address (port %d)",
-						test_params->slave_port_ids[j]);
+						test_params->child_port_ids[j]);
 				TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
 						sizeof(read_mac_addr)),
-						"slave port mac address not set to that of primary "
+						"child port mac address not set to that of primary "
 						"port");
 			}
 		}
@@ -809,14 +809,14 @@ test_set_primary_slave(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
 			"read primary port from expectedly");
 
-	/* Test with slave port */
-	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+	/* Test with child port */
+	TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->child_port_ids[0]),
 			"read primary port from expectedly\n");
 
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to stop and remove slaves from bonded device");
+	TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
+			"Failed to stop and remove children from bonded device");
 
-	/* No slaves  */
+	/* No children  */
 	TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id)  < 0,
 			"read primary port from expectedly\n");
 
@@ -840,7 +840,7 @@ test_set_explicit_bonded_mac(void)
 
 	/* Non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
-			test_params->slave_port_ids[0],	mac_addr),
+			test_params->child_port_ids[0],	mac_addr),
 			"Expected call to failed as invalid port specified.");
 
 	/* NULL MAC address */
@@ -853,10 +853,10 @@ test_set_explicit_bonded_mac(void)
 			"Failed to set MAC address on bonded port (%d)",
 			test_params->bonded_port_id);
 
-	/* Add 4 slaves to bonded device */
-	for (i = test_params->bonded_slave_count; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave to bonded device.\n");
+	/* Add 4 children to bonded device */
+	for (i = test_params->bonded_child_count; i < 4; i++) {
+		TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+				"Failed to add child to bonded device.\n");
 	}
 
 	/* Check bonded MAC */
@@ -866,14 +866,14 @@ test_set_explicit_bonded_mac(void)
 	TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port mac address not set to that of primary port");
 
-	/* Check other slaves MACs */
+	/* Check other children MACs */
 	for (i = 0; i < 4; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port mac address not set to that of primary port");
+				"child port mac address not set to that of primary port");
 	}
 
 	/* test resetting mac address on bonded device */
@@ -883,13 +883,13 @@ test_set_explicit_bonded_mac(void)
 			test_params->bonded_port_id);
 
 	TEST_ASSERT_FAIL(
-			rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+			rte_eth_bond_mac_address_reset(test_params->child_port_ids[0]),
 			"Reset MAC address on bonded port (%d) unexpectedly",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 
-	/* test resetting mac address on bonded device with no slaves */
-	TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
-			"Failed to remove slaves and stop bonded device");
+	/* test resetting mac address on bonded device with no children */
+	TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
+			"Failed to remove children and stop bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
 			"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +898,25 @@ test_set_explicit_bonded_mac(void)
 	return 0;
 }
 
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT (3)
 
 static int
 test_set_bonded_port_initialization_mac_assignment(void)
 {
-	int i, slave_count;
+	int i, child_count;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 	static int bonded_port_id = -1;
-	static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+	static int child_port_ids[BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT];
 
-	struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+	struct rte_ether_addr child_mac_addr, bonded_mac_addr, read_mac_addr;
 
 	/* Initialize default values for MAC addresses */
-	memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
-	memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+	memcpy(&child_mac_addr, child_mac, sizeof(struct rte_ether_addr));
+	memcpy(&bonded_mac_addr, child_mac, sizeof(struct rte_ether_addr));
 
 	/*
-	 * 1. a - Create / configure  bonded / slave ethdevs
+	 * 1. a - Create / configure  bonded / child ethdevs
 	 */
 	if (bonded_port_id == -1) {
 		bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +927,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
 					"Failed to configure bonded ethdev");
 	}
 
-	if (!mac_slaves_initialized) {
-		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	if (!mac_children_initialized) {
+		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
 			char pmd_name[RTE_ETH_NAME_MAX_LEN];
 
-			slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+			child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
 				i + 100;
 
 			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
-				"eth_slave_%d", i);
+				"eth_child_%d", i);
 
-			slave_port_ids[i] = virtual_ethdev_create(pmd_name,
-					&slave_mac_addr, rte_socket_id(), 1);
+			child_port_ids[i] = virtual_ethdev_create(pmd_name,
+					&child_mac_addr, rte_socket_id(), 1);
 
-			TEST_ASSERT(slave_port_ids[i] >= 0,
-					"Failed to create slave ethdev %s",
+			TEST_ASSERT(child_port_ids[i] >= 0,
+					"Failed to create child ethdev %s",
 					pmd_name);
 
-			TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+			TEST_ASSERT_SUCCESS(configure_ethdev(child_port_ids[i], 1, 0),
 					"Failed to configure virtual ethdev %s",
 					pmd_name);
 		}
-		mac_slaves_initialized = 1;
+		mac_children_initialized = 1;
 	}
 
 
 	/*
-	 * 2. Add slave ethdevs to bonded device
+	 * 2. Add child ethdevs to bonded device
 	 */
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to add slave (%d) to bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(bonded_port_id,
+				child_port_ids[i]),
+				"Failed to add child (%d) to bonded port (%d).",
+				child_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	child_count = rte_eth_bond_children_get(bonded_port_id, children,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
-			"Number of slaves (%d) is not as expected (%d)",
-			slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT, child_count,
+			"Number of children (%d) is not as expected (%d)",
+			child_count, BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT);
 
 
 	/*
@@ -982,16 +982,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
 
 
 	/* 4. a - Start bonded ethdev
-	 *    b - Enable slave devices
-	 *    c - Verify bonded/slaves ethdev MAC addresses
+	 *    b - Enable child devices
+	 *    c - Verify bonded/children ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
 			"Failed to start bonded pmd eth device %d.",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				slave_port_ids[i], 1);
+				child_port_ids[i], 1);
 	}
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1001,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
+			child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"child port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			child_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"child port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			child_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"child port 2 mac address not as expected");
 
 
 	/* 7. a - Change primary port
 	 *    b - Stop / Start bonded port
-	 *    d - Verify slave ethdev MAC addresses
+	 *    d - Verify child ethdev MAC addresses
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
-			slave_port_ids[2]),
+			child_port_ids[2]),
 			"failed to set primary port on bonded device.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1048,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
 			sizeof(read_mac_addr)),
 			"bonded port mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			child_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"child port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			child_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"child port 1 mac address not as expected");
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
+			child_port_ids[2]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"child port 2 mac address not as expected");
 
 	/* 6. a - Stop bonded ethdev
-	 *    b - remove slave ethdevs
-	 *    c - Verify slave ethdevs MACs are restored
+	 *    b - remove child ethdevs
+	 *    c - Verify child ethdevs MACs are restored
 	 */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
 			"Failed to stop bonded port %u",
 			bonded_port_id);
 
-	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
-				slave_port_ids[i]),
-				"Failed to remove slave %d from bonded port (%d).",
-				slave_port_ids[i], bonded_port_id);
+	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_CHILD_COUNT; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(bonded_port_id,
+				child_port_ids[i]),
+				"Failed to remove child %d from bonded port (%d).",
+				child_port_ids[i], bonded_port_id);
 	}
 
-	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+	child_count = rte_eth_bond_children_get(bonded_port_id, children,
 			RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of slaves (%d) is great than expected (%d).",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(child_count, 0,
+			"Number of children (%d) is greater than expected (%d).",
+			child_count, 0);
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+	child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			child_port_ids[0]);
+	TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 0 mac address not as expected");
+			"child port 0 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+	child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[1]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			child_port_ids[1]);
+	TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 1 mac address not as expected");
+			"child port 1 mac address not as expected");
 
-	slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+	child_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(child_port_ids[2], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			slave_port_ids[2]);
-	TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+			child_port_ids[2]);
+	TEST_ASSERT_SUCCESS(memcmp(&child_mac_addr, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port 2 mac address not as expected");
+			"child port 2 mac address not as expected");
 
 	return 0;
 }
 
 
 static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
-		uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_children(uint8_t bonding_mode, uint8_t bond_en_isr,
+		uint16_t number_of_children, uint8_t enable_child)
 {
 	/* Configure bonded device */
 	TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
 			bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
-			"with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
-			number_of_slaves);
-
-	/* Add slaves to bonded device */
-	while (number_of_slaves > test_params->bonded_slave_count)
-		TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
-				"Failed to add slave (%d to  bonding port (%d).",
-				test_params->bonded_slave_count - 1,
+			"with (%d) children.", test_params->bonded_port_id, bonding_mode,
+			number_of_children);
+
+	/* Add children to bonded device */
+	while (number_of_children > test_params->bonded_child_count)
+		TEST_ASSERT_SUCCESS(test_add_child_to_bonded_device(),
+				"Failed to add child (%d) to bonding port (%d).",
+				test_params->bonded_child_count - 1,
 				test_params->bonded_port_id);
 
 	/* Set link bonding mode  */
@@ -1148,40 +1148,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
 		"Failed to start bonded pmd eth device %d.",
 		test_params->bonded_port_id);
 
-	if (enable_slave)
-		enable_bonded_slaves();
+	if (enable_child)
+		enable_bonded_children();
 
 	return 0;
 }
 
 static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_child_after_bonded_device_started(void)
 {
 	int i;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
-			"Failed to add slaves to bonded device");
+			"Failed to add children to bonded device");
 
-	/* Enabled slave devices */
-	for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+	/* Enable child devices */
+	for (i = 0; i < test_params->bonded_child_count + 1; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 1);
+				test_params->child_port_ids[i], 1);
 	}
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-			test_params->slave_port_ids[test_params->bonded_slave_count]),
-			"Failed to add slave to bonded port.\n");
+	TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params->bonded_port_id,
+			test_params->child_port_ids[test_params->bonded_child_count]),
+			"Failed to add child to bonded port.\n");
 
 	rte_eth_stats_reset(
-			test_params->slave_port_ids[test_params->bonded_slave_count]);
+			test_params->child_port_ids[test_params->bonded_child_count]);
 
-	test_params->bonded_slave_count++;
+	test_params->bonded_child_count++;
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT	4
+#define TEST_STATUS_INTERRUPT_CHILD_COUNT	4
 #define TEST_LSC_WAIT_TIMEOUT_US	500000
 
 int test_lsc_interrupt_count;
@@ -1237,13 +1237,13 @@ lsc_timeout(int wait_us)
 static int
 test_status_interrupt(void)
 {
-	int slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	int child_count;
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	/* initialized bonding device with T slaves */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with TEST_STATUS_INTERRUPT_CHILD_COUNT children */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ROUND_ROBIN, 1,
-			TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+			TEST_STATUS_INTERRUPT_CHILD_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	test_lsc_interrupt_count = 0;
@@ -1253,27 +1253,27 @@ test_status_interrupt(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(child_count, TEST_STATUS_INTERRUPT_CHILD_COUNT,
+			"Number of active children (%d) is not as expected (%d)",
+			child_count, TEST_STATUS_INTERRUPT_CHILD_COUNT);
 
-	/* Bring all 4 slaves link status to down and test that we have received a
+	/* Bring link status of all 4 children down and test that we have received
 	 * lsc interrupts */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->child_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->child_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->child_port_ids[2], 0);
 
 	TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
 			"Received a link status change interrupt unexpectedly");
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->child_port_ids[3], 0);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1281,18 +1281,18 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
 
-	TEST_ASSERT_EQUAL(slave_count, 0,
-			"Number of active slaves (%d) is not as expected (%d)",
-			slave_count, 0);
+	TEST_ASSERT_EQUAL(child_count, 0,
+			"Number of active children (%d) is not as expected (%d)",
+			child_count, 0);
 
-	/* bring one slave port up so link status will change */
+	/* bring one child port up so link status will change */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->child_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
 			"timed out waiting for interrupt");
@@ -1301,12 +1301,12 @@ test_status_interrupt(void)
 	TEST_ASSERT(test_lsc_interrupt_count > 0,
 			"Did not receive link status change interrupt");
 
-	/* Verify that calling the same slave lsc interrupt doesn't cause another
+	/* Verify that raising the same child lsc interrupt again doesn't cause another
 	 * lsc interrupt from bonded device */
 	test_lsc_interrupt_count = 0;
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 1);
+			test_params->child_port_ids[0], 1);
 
 	TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
 			"received unexpected interrupt");
@@ -1320,8 +1320,8 @@ test_status_interrupt(void)
 				RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 				&test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -1398,11 +1398,11 @@ test_roundrobin_tx_burst(void)
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_child_count;
 
 	TEST_ASSERT(burst_size <= MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -1423,20 +1423,20 @@ test_roundrobin_tx_burst(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify child ports tx stats */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)burst_size / test_params->bonded_slave_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				(uint64_t)burst_size / test_params->bonded_child_count,
+				"Child Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-				burst_size / test_params->bonded_slave_count);
+				burst_size / test_params->bonded_child_count);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all children down and try to transmit */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->child_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -1444,8 +1444,8 @@ test_roundrobin_tx_burst(void)
 			pkt_burst, burst_size), 0,
 			"tx burst return unexpected value");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -1471,13 +1471,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
 		rte_pktmbuf_free(mbufs[i]);
 }
 
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT		(2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE		(64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT		(22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(1)
+#define TEST_RR_CHILD_TX_FAIL_CHILD_COUNT		(2)
+#define TEST_RR_CHILD_TX_FAIL_BURST_SIZE		(64)
+#define TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT		(22)
+#define TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX	(1)
 
 static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_child_tx_fail(void)
 {
 	struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
 	struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1486,49 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 
 	int i, first_fail_idx, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ROUND_ROBIN, 0,
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_RR_CHILD_TX_FAIL_CHILD_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_RR_CHILD_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+			TEST_RR_CHILD_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
 	/* Copy references to packets which we expect not to be transmitted */
-	first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			(TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
-			TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
-			TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+	first_fail_idx = (TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+			(TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT *
+			TEST_RR_CHILD_TX_FAIL_CHILD_COUNT)) +
+			TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX;
 
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT; i++) {
 		expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
-				(i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+				(i * TEST_RR_CHILD_TX_FAIL_CHILD_COUNT)];
 	}
 
-	/* Set virtual slave to only fail transmission of
-	 * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+	/* Set virtual child to only fail transmission of
+	 * TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT packets in burst */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->child_port_ids[TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->child_port_ids[TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX],
+			TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
 
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_RR_CHILD_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1538,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+			(uint64_t)TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_RR_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		int slave_expected_tx_count;
+	/* Verify child ports tx stats */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		int child_expected_tx_count;
 
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+		rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
 
-		slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
-				test_params->bonded_slave_count;
+		child_expected_tx_count = TEST_RR_CHILD_TX_FAIL_BURST_SIZE /
+				test_params->bonded_child_count;
 
-		if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
-			slave_expected_tx_count = slave_expected_tx_count -
-					TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+		if (i == TEST_RR_CHILD_TX_FAIL_FAILING_CHILD_IDX)
+			child_expected_tx_count = child_expected_tx_count -
+					TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT;
 
 		TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)slave_expected_tx_count,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[i],
-				(unsigned int)port_stats.opackets, slave_expected_tx_count);
+				(uint64_t)child_expected_tx_count,
+				"Child Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->child_port_ids[i],
+				(unsigned int)port_stats.opackets, child_expected_tx_count);
 	}
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
-			TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
-	free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+	free_mbufs(&pkt_burst[tx_count], TEST_RR_CHILD_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_child(void)
 {
 	struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1585,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 	int i, j, burst_size = 25;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with children");
 
 	/* Generate test bursts of packets to transmit */
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
 			"burst generation failed");
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		/* Add rx data to child */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -1616,25 +1616,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
 
 
 
-		/* Verify bonded slave devices rx count */
-		/* Verify slave ports tx stats */
-		for (j = 0; j < test_params->bonded_slave_count; j++) {
-			rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+		/* Verify bonded child devices rx count */
+		/* Verify child ports tx stats */
+		for (j = 0; j < test_params->bonded_child_count; j++) {
+			rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
 
 			if (i == j) {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Child Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->child_port_ids[i],
 						(unsigned int)port_stats.ipackets, burst_size);
 			} else {
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected"
-						" (%d)", test_params->slave_port_ids[i],
+						"Child Port (%d) ipackets value (%u) not as expected"
+						" (%d)", test_params->child_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 
-			/* Reset bonded slaves stats */
-			rte_eth_stats_reset(test_params->slave_port_ids[j]);
+			/* Reset bonded children stats */
+			rte_eth_stats_reset(test_params->child_port_ids[j]);
 		}
 		/* reset bonded device stats */
 		rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1646,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
 	}
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT (3)
 
 static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_children(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+	int burst_size[TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT] = { 15, 13, 36 };
 	int i, nb_rx;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with children");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
 				burst_size[i], "burst generation failed");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to children */
+	for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_CHILD_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -1697,29 +1697,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 			test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded child devices rx counts */
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0],
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[0],
 			(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[2],
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->child_port_ids[2],
 				(unsigned int)port_stats.ipackets, burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[3],
 			(unsigned int)port_stats.ipackets, 0);
 
 	/* free mbufs */
@@ -1727,8 +1727,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -1739,48 +1739,48 @@ test_roundrobin_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+			test_params->child_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[2], &expected_mac_addr_2),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->child_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 				BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-				"Failed to initialize bonded device with slaves");
+				"Failed to initialize bonded device with children");
 
-	/* Verify that all MACs are the same as first slave added to bonded dev */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	/* Verify that all MACs are the same as first child added to bonded dev */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"child port (%d) mac address not set to that of primary port",
+				test_params->child_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->child_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->child_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary"
+				"child port (%d) mac address has changed to that of primary"
 				" port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 	}
 
 	/* stop / start bonded device and verify that primary MAC address is
-	 * propagate to bonded device and slaves */
+	 * propagated to bonded device and children */
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params->bonded_port_id);
@@ -1794,16 +1794,16 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(
 			memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->child_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary"
-				" port", test_params->slave_port_ids[i]);
+				"child port (%d) mac address not set to that of new primary"
+				" port", test_params->child_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -1818,19 +1818,19 @@ test_roundrobin_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
-				sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
-				" that of new primary port\n", test_params->slave_port_ids[i]);
+				sizeof(read_mac_addr)), "child port (%d) mac address not set to"
+				" that of new primary port\n", test_params->child_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -1839,10 +1839,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 	int i, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in round robin mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with children");
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
 	TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1854,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 1,
-				"slave port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				"child port (%d) promiscuous mode not enabled",
+				test_params->child_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1872,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
 				"Port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_CHILD_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT (2)
 
 static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_child_link_status_change_behaviour(void)
 {
 	struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
-	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_CHILD_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 
 	struct rte_eth_stats port_stats;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, child_count;
 
 	/* NULL all pointers in array to simplify cleanup */
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+	/* Initialize bonded device with TEST_RR_LINK_STATUS_CHILD_COUNT children
 	 * in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+			BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_CHILD_COUNT, 1),
+			"Failed to initialize bonded device with children");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current child count / active child count is as expected */
+	child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(child_count, TEST_RR_LINK_STATUS_CHILD_COUNT,
+			"Number of children (%d) is not as expected (%d).",
+			child_count, TEST_RR_LINK_STATUS_CHILD_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(child_count, TEST_RR_LINK_STATUS_CHILD_COUNT,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, TEST_RR_LINK_STATUS_CHILD_COUNT);
 
-	/* Set 2 slaves eth_devs link status to down */
+	/* Set 2 children eth_devs link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->child_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->child_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count,
-			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).\n",
-			slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(child_count,
+			TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT,
+			"Number of active children (%d) is not as expected (%d).\n",
+			child_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT);
 
 	burst_size = 20;
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not sent on children with link status down:
 	 *
 	 * 1. Generate test burst of traffic
 	 * 2. Transmit burst on bonded eth_dev
 	 * 3. Verify stats for bonded eth_dev (opackets = burst_size)
-	 * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 4. Verify stats for child eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
 	TEST_ASSERT_EQUAL(
 			generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1960,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+			test_params->child_port_ids[0], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+			test_params->child_port_ids[1], (int)port_stats.opackets, 0);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+			test_params->child_port_ids[2], (int)port_stats.opackets, 10);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
 			"Port (%d) opackets stats (%d) not expected (%d) value",
-			test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+			test_params->child_port_ids[3], (int)port_stats.opackets, 0);
 
-	/* Verify that pkts are not sent on slaves with link status down:
+	/* Verify that pkts are not received on children with link status down:
 	 *
 	 * 1. Generate test bursts of traffic
 	 * 2. Add bursts on to virtual eth_devs
 	 * 3. Rx burst on bonded eth_dev, expected (burst_ size *
-	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+	 *    TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_CHILD_COUNT) received
 	 * 4. Verify stats for bonded eth_dev
-	 * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+	 * 6. Verify stats for child eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
 	 */
-	for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_RR_LINK_STATUS_CHILD_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size);
 	}
 
@@ -2014,49 +2014,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
 		rte_pktmbuf_free(rx_pkt_burst[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT (2)
 
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_child_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
 
 
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_children[TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT] = { -1, -1 };
 
 static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verfiy_polling_child_link_status_change(void)
 {
 	struct rte_ether_addr *mac_addr =
-		(struct rte_ether_addr *)polling_slave_mac;
-	char slave_name[RTE_ETH_NAME_MAX_LEN];
+		(struct rte_ether_addr *)polling_child_mac;
+	char child_name[RTE_ETH_NAME_MAX_LEN];
 
 	int i;
 
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
-		/* Generate slave name / MAC address */
-		snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT; i++) {
+		/* Generate child name / MAC address */
+		snprintf(child_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
 		mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
 
-		/* Create slave devices with no ISR Support */
-		if (polling_test_slaves[i] == -1) {
-			polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+		/* Create child devices with no ISR Support */
+		if (polling_test_children[i] == -1) {
+			polling_test_children[i] = virtual_ethdev_create(child_name, mac_addr,
 					rte_socket_id(), 0);
-			TEST_ASSERT(polling_test_slaves[i] >= 0,
-					"Failed to create virtual virtual ethdev %s\n", slave_name);
+			TEST_ASSERT(polling_test_children[i] >= 0,
+					"Failed to create virtual ethdev %s\n", child_name);
 
-			/* Configure slave */
-			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
-					"Failed to configure virtual ethdev %s(%d)", slave_name,
-					polling_test_slaves[i]);
+			/* Configure child */
+			TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_children[i], 0, 0),
+					"Failed to configure virtual ethdev %s(%d)", child_name,
+					polling_test_children[i]);
 		}
 
-		/* Add slave to bonded device */
-		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
-				polling_test_slaves[i]),
-				"Failed to add slave %s(%d) to bonded device %d",
-				slave_name, polling_test_slaves[i],
+		/* Add child to bonded device */
+		TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params->bonded_port_id,
+				polling_test_children[i]),
+				"Failed to add child %s(%d) to bonded device %d",
+				child_name, polling_test_children[i],
 				test_params->bonded_port_id);
 	}
 
@@ -2071,26 +2071,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
 			&test_params->bonded_port_id);
 
-	/* link status change callback for first slave link up */
+	/* link status change callback for first child link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+	virtual_ethdev_set_link_status(polling_test_children[0], 1);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
 
 
-	/* no link status change callback for second slave link up */
+	/* no link status change callback for second child link up */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+	virtual_ethdev_set_link_status(polling_test_children[1], 1);
 
 	TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
 
-	/* link status change callback for both slave links down */
+	/* link status change callback for both child links down */
 	test_lsc_interrupt_count = 0;
 
-	virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
-	virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+	virtual_ethdev_set_link_status(polling_test_children[0], 0);
+	virtual_ethdev_set_link_status(polling_test_children[1], 0);
 
 	TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
 
@@ -2100,17 +2100,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
 			&test_params->bonded_port_id);
 
 
-	/* Clean up and remove slaves from bonded device */
-	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+	/* Clean up and remove children from bonded device */
+	for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_CHILD_COUNT; i++) {
 
 		TEST_ASSERT_SUCCESS(
-				rte_eth_bond_slave_remove(test_params->bonded_port_id,
-						polling_test_slaves[i]),
-				"Failed to remove slave %d from bonded port (%d)",
-				polling_test_slaves[i], test_params->bonded_port_id);
+				rte_eth_bond_child_remove(test_params->bonded_port_id,
+						polling_test_children[i]),
+				"Failed to remove child %d from bonded port (%d)",
+				polling_test_children[i], test_params->bonded_port_id);
 	}
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_children_and_stop_bonded_device();
 }
 
 
@@ -2123,9 +2123,9 @@ test_activebackup_tx_burst(void)
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with children");
 
 	initialize_eth_header(test_params->pkt_eth_hdr,
 			(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2136,7 @@ test_activebackup_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_child_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -2160,38 +2160,38 @@ test_activebackup_tx_burst(void)
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
-		if (test_params->slave_port_ids[i] == primary_port) {
+	/* Verify child ports tx stats */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
+		if (test_params->child_port_ids[i] == primary_port) {
 			TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Child Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets,
-					burst_size / test_params->bonded_slave_count);
+					burst_size / test_params->bonded_child_count);
 		} else {
 			TEST_ASSERT_EQUAL(port_stats.opackets, 0,
-					"Slave Port (%d) opackets value (%u) not as expected (%d)",
+					"Child Port (%d) opackets value (%u) not as expected (%d)",
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.opackets, 0);
 		}
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all children down and try to transmit */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->child_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 			pkts_burst, burst_size), 0, "Sending empty burst failed");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT (4)
 
 static int
 test_activebackup_rx_burst(void)
@@ -2205,24 +2205,24 @@ test_activebackup_rx_burst(void)
 
 	int i, j, burst_size = 17;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT, 1),
+			"Failed to initialize bonded device with children");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary child for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
 				burst_size, "burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to child */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -2230,7 +2230,7 @@ test_activebackup_rx_burst(void)
 				&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
 				"rte_eth_rx_burst failed");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->child_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2238,27 @@ test_activebackup_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded child devices rx count */
+			for (j = 0; j < test_params->bonded_child_count; j++) {
+				rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)", test_params->slave_port_ids[i],
+							"Child Port (%d) ipackets value (%u) not as "
+							"expected (%d)", test_params->child_port_ids[i],
 							(unsigned int)port_stats.ipackets, burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-							"Slave Port (%d) ipackets value (%u) not as "
-							"expected (%d)\n", test_params->slave_port_ids[i],
+							"Child Port (%d) ipackets value (%u) not as "
+							"expected (%d)\n", test_params->child_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_child_count; j++) {
+				rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-						"Slave Port (%d) ipackets value (%u) not as expected "
-						"(%d)", test_params->slave_port_ids[i],
+						"Child Port (%d) ipackets value (%u) not as expected "
+						"(%d)", test_params->child_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -2275,8 +2275,8 @@ test_activebackup_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -2285,14 +2285,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with children");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary child for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2304,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->child_port_ids[i]);
+		if (primary_port == test_params->child_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, 1,
-					"slave port (%d) promiscuous mode not enabled",
-					test_params->slave_port_ids[i]);
+					"child port (%d) promiscuous mode not enabled",
+					test_params->child_port_ids[i]);
 		} else {
 			TEST_ASSERT_EQUAL(promiscuous_en, 0,
-					"slave port (%d) promiscuous mode enabled",
-					test_params->slave_port_ids[i]);
+					"child port (%d) promiscuous mode enabled",
+					test_params->child_port_ids[i]);
 		}
 
 	}
@@ -2328,16 +2328,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, 0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"child port (%d) promiscuous mode not disabled\n",
+				test_params->child_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -2346,19 +2346,19 @@ test_activebackup_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->child_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 children in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize bonded device with slaves");
+			"Failed to initialize bonded device with children");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that the bonded MAC is that of the first child and that the other child
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2368,27 +2368,27 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->child_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->child_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -2398,24 +2398,24 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[1]);
 
 	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	 * propagated to bonded device and children */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -2432,21 +2432,21 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2462,36 @@ test_activebackup_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of bonded port",
+			test_params->child_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_child_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, child_count, primary_port;
 
 	burst_size = 21;
 
@@ -2502,96 +2502,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 			"generate_test_burst failed");
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in active backup mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ACTIVE_BACKUP, 0,
-			TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT, 1),
+			"Failed to initialize bonded device with children");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current child count / active child count */
+	child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(child_count, 4,
+			"Number of children (%d) is not as expected (%d).",
+			child_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(child_count, 4,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 children down and verify active child count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->child_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->child_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 2,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->child_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->child_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
+	/* Bring primary port down, verify that active child count is 3 and primary
 	 *  has changed */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->child_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS),
 			3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[2],
 			"Primary port not as expected");
 
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary child */
 
 	TEST_ASSERT_EQUAL(rte_eth_tx_burst(
 			test_params->bonded_port_id, 0, &pkt_burst[0][0],
 			burst_size), burst_size, "rte_eth_tx_burst failed");
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->child_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->child_port_ids[3]);
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_CHILD_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"generate_test_burst failed");
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-			test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+			test_params->child_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2604,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected",
 			test_params->bonded_port_id);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[2]);
+			test_params->child_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->child_port_ids[3]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 /** Balance Mode Tests */
@@ -2633,9 +2633,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
 static int
 test_balance_xmit_policy_configuration(void)
 {
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_children.");
 
 	/* Invalid port id */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2644,7 @@ test_balance_xmit_policy_configuration(void)
 
 	/* Set xmit policy on non bonded device */
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
-			test_params->slave_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
+			test_params->child_port_ids[0],	BALANCE_XMIT_POLICY_LAYER2),
 			"Expected call to failed as invalid port specified.");
 
 
@@ -2677,25 +2677,25 @@ test_balance_xmit_policy_configuration(void)
 	TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
 			"Expected call to failed as invalid port specified.");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_CHILD_COUNT (2)
 
 static int
 test_balance_l2_tx_burst(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
-	int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+	struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_CHILD_COUNT][MAX_PKT_BURST];
+	int burst_size[TEST_BALANCE_L2_TX_BURST_CHILD_COUNT] = { 10, 15 };
 
 	uint16_t pktlen;
 	int i;
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_CHILD_COUNT, 1),
+			"Failed to initialize_bonded_device_with_children.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2730,7 @@ test_balance_l2_tx_burst(void)
 			"failed to generate packet burst");
 
 	/* Send burst 1 on bonded port */
-	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_L2_TX_BURST_CHILD_COUNT; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
 				&pkts_burst[i][0], burst_size[i]),
 				burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2745,24 @@ test_balance_l2_tx_burst(void)
 			burst_size[0] + burst_size[1]);
 
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify child ports tx stats */
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Child Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[0], (unsigned int)port_stats.opackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Child Port (%d) opackets value (%u) not as expected (%d)\n",
+			test_params->child_port_ids[1], (unsigned int)port_stats.opackets,
 			burst_size[1]);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all children down and try to transmit */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->child_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2770,8 +2770,8 @@ test_balance_l2_tx_burst(void)
 			test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
 			0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -2785,9 +2785,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_children.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2825,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify child ports tx stats */
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Child Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Child Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all children down and try to transmit */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->child_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2851,8 +2851,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			burst_size_1), 0, "Expected zero packet");
 
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -2897,9 +2897,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BALANCE, 0, 2, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			"Failed to initialize_bonded_device_with_children.");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
 			test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2938,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			nb_tx_1 + nb_tx_2);
 
-	/* Verify slave ports tx stats */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify child ports tx stats */
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+			"Child Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[0], (unsigned int)port_stats.opackets,
 			nb_tx_1);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
-			"Slave Port (%d) opackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+			"Child Port (%d) opackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[1], (unsigned int)port_stats.opackets,
 			nb_tx_2);
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all children down and try to transmit */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->child_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -2963,8 +2963,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
 			test_params->bonded_port_id, 0, pkts_burst_1,
 			burst_size_1), 0, "Expected zero packet");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -3003,27 +3003,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
 	return balance_l34_tx_burst(0, 0, 0, 0, 1);
 }
 
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT			(2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1			(40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2			(20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT		(25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX	(0)
+#define TEST_BAL_CHILD_TX_FAIL_CHILD_COUNT			(2)
+#define TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1			(40)
+#define TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2			(20)
+#define TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT		(25)
+#define TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX	(0)
 
 static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_child_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
-	struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+	struct rte_mbuf *pkts_burst_1[TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1];
+	struct rte_mbuf *pkts_burst_2[TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2];
 
-	struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+	struct rte_mbuf *expected_fail_pkts[TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, first_tx_fail_idx, tx_count_1, tx_count_2;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BALANCE, 0,
-			TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BAL_CHILD_TX_FAIL_CHILD_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3033,46 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1,
 			"Failed to generate test packet burst 1");
 
-	first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+	first_tx_fail_idx = TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT;
 
 	/* copy mbuf references for expected transmission failures */
-	for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+	for (i = 0; i < TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT; i++)
 		expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
 
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2,
 			"Failed to generate test packet burst 2");
 
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/* Set virtual child TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX to only fail
+	 * transmission of TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT packets of burst */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+			test_params->child_port_ids[TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			test_params->child_port_ids[TEST_BAL_CHILD_TX_FAIL_FAILING_CHILD_IDX],
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
 
 
 	/* Transmit burst 1 */
 	tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1);
 
-	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			tx_count_1, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3080,94 @@ test_balance_tx_burst_slave_tx_fail(void)
 
 	/* Transmit burst 2 */
 	tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
 
-	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+	TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			tx_count_2, TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
 
 
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+			(uint64_t)((TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2),
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			(TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
-			TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+			(TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT) +
+			TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
 
-	/* Verify slave ports tx stats */
+	/* Verify child ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+				TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT,
+				"Child Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->child_port_ids[0],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
-				TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+				TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_1 -
+				TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
 
 
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-				(uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[1],
+				(uint64_t)TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2,
+				"Child Port (%d) opackets value (%u) not as expected (%d)",
+				test_params->child_port_ids[1],
 				(unsigned int)port_stats.opackets,
-				TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+				TEST_BAL_CHILD_TX_FAIL_BURST_SIZE_2);
 
 	/* Verify that all mbufs have a ref value of zero */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst_1[tx_count_1],
-			TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+			TEST_BAL_CHILD_TX_FAIL_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_CHILD_COUNT (3)
 
 static int
 test_balance_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_CHILD_COUNT][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+	int burst_size[TEST_BALANCE_RX_BURST_CHILD_COUNT] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 children in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BALANCE, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_CHILD_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
 				0, 0), burst_size[i],
 				"failed to generate packet burst");
 	}
 
-	/* Add rx data to slaves */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to children */
+	for (i = 0; i < TEST_BALANCE_RX_BURST_CHILD_COUNT; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3187,33 +3187,33 @@ test_balance_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded child devices rx counts */
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-				test_params->slave_port_ids[0],
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+				test_params->child_port_ids[0],
 				(unsigned int)port_stats.ipackets, burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[1], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3],	(unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[3],	(unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs */
-	for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_RX_BURST_CHILD_COUNT; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3222,8 @@ test_balance_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -3232,8 +3232,8 @@ test_balance_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BALANCE, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3246,11 +3246,11 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->child_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3262,15 @@ test_balance_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->child_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -3279,19 +3279,19 @@ test_balance_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->child_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 children in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BALANCE, 0, 2, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that the bonded MAC is that of the first child and that the other child
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3301,27 +3301,27 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]),
+			test_params->child_port_ids[1]),
 			"Failed to set bonded port (%d) primary port to (%d)\n",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->child_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -3331,24 +3331,24 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[1]);
 
 	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	 * propagated to bonded device and children */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3365,21 +3365,21 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[1]);
 
 	/* Set explicit MAC address */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3395,44 @@ test_balance_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected\n",
-				test_params->slave_port_ids[0]);
+			"child port (%d) mac address not as expected\n",
+				test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of bonded port",
+			test_params->child_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_CHILD_COUNT (4)
 
 static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_child_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_CHILD_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, child_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+	/* Initialize bonded device with 4 children in balance mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+			BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_CHILD_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3440,32 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			"Failed to set balance xmit policy.");
 
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current child count / active child count */
+	child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	TEST_ASSERT_EQUAL(child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT,
+			"Number of children (%d) is not as expected (%d).",
+			child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, TEST_BALANCE_LINK_STATUS_CHILD_COUNT);
 
-	/* Set 2 slaves link status to down */
+	/* Set 2 children link status to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->child_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->child_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 2,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 2);
 
 	/* Send to sets of packet burst and verify that they are balanced across
-	 *  slaves */
+	 *  children */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3491,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->child_port_ids[0], (int)port_stats.opackets,
 			burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[2], (int)port_stats.opackets,
+			test_params->child_port_ids[2], (int)port_stats.opackets,
 			burst_size);
 
-	/* verify that all packets get send on primary slave when no other slaves
+	/* verify that all packets are sent on the primary child when no other children
 	 * are available */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 0);
+			test_params->child_port_ids[2], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 1);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 1,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 1);
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
 			&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3528,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.opackets,
 			burst_size + burst_size + burst_size);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
 			"(%d) port_stats.opackets (%d) not as expected (%d).",
-			test_params->slave_port_ids[0], (int)port_stats.opackets,
+			test_params->child_port_ids[0], (int)port_stats.opackets,
 			burst_size + burst_size);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->child_port_ids[0], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->child_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[2], 1);
+			test_params->child_port_ids[2], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->child_port_ids[3], 1);
 
-	for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_BALANCE_LINK_STATUS_CHILD_COUNT; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"Failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on children with link status down */
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
 			MAX_PKT_BURST);
@@ -3564,8 +3564,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
 			test_params->bonded_port_id, (int)port_stats.ipackets,
 			burst_size * 3);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -3576,7 +3576,7 @@ test_broadcast_tx_burst(void)
 
 	struct rte_eth_stats port_stats;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BROADCAST, 0, 2, 1),
 			"Failed to initialise bonded device");
 
@@ -3590,7 +3590,7 @@ test_broadcast_tx_burst(void)
 	pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
 			dst_addr_0, pktlen);
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_child_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.");
@@ -3611,25 +3611,25 @@ test_broadcast_tx_burst(void)
 	/* Verify bonded port tx stats */
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)burst_size * test_params->bonded_slave_count,
+			(uint64_t)burst_size * test_params->bonded_child_count,
 			"Bonded Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
 			burst_size);
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+	/* Verify child ports tx stats */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		rte_eth_stats_get(test_params->child_port_ids[i], &port_stats);
 		TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
-				"Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+				"Child Port (%d) opackets value (%u) not as expected (%d)\n",
 				test_params->bonded_port_id,
 				(unsigned int)port_stats.opackets, burst_size);
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all children down and try to transmit */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->child_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -3637,159 +3637,159 @@ test_broadcast_tx_burst(void)
 			test_params->bonded_port_id, 0, pkts_burst, burst_size),  0,
 			"transmitted an unexpected number of packets");
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT		(3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE			(40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT	(15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT	(10)
+#define TEST_BCAST_CHILD_TX_FAIL_CHILD_COUNT		(3)
+#define TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE			(40)
+#define TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT	(15)
+#define TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT	(10)
 
 static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_child_tx_fail(void)
 {
-	struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
-	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+	struct rte_mbuf *pkts_burst[TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE];
+	struct rte_mbuf *expected_fail_pkts[TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT];
 
 	struct rte_eth_stats port_stats;
 
 	int i, tx_count;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BROADCAST, 0,
-			TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+			TEST_BCAST_CHILD_TX_FAIL_CHILD_COUNT, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts for transmission */
 	TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+			TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+			TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE,
 			"Failed to generate test packet burst");
 
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
-		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+	for (i = 0; i < TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+		expected_fail_pkts[i] = pkts_burst[TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT + i];
 	}
 
-	/* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
-	 * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+	/* Set the virtual children to fail transmission of the last packets of
+	 * each burst (TEST_BCAST_CHILD_TX_FAIL_MIN/MAX_PACKETS_COUNT packets) */
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[0],
+			test_params->child_port_ids[0],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[1],
+			test_params->child_port_ids[1],
 			0);
 	virtual_ethdev_tx_burst_fn_set_success(
-			test_params->slave_port_ids[2],
+			test_params->child_port_ids[2],
 			0);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[0],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->child_port_ids[0],
+			TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[1],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			test_params->child_port_ids[1],
+			TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
 
 	virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
-			test_params->slave_port_ids[2],
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			test_params->child_port_ids[2],
+			TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
 
 	/* Transmit burst */
 	tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+			TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE);
 
-	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+	TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT,
 			"Transmitted (%d) packets, expected to transmit (%d) packets",
-			tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			tx_count, TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
 
 	/* Verify that failed packet are expected failed packets */
-	for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+	for (i = 0; i < TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT; i++) {
 		TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
 				"expected mbuf (%d) pointer %p not expected pointer %p",
 				i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
 	}
 
-	/* Verify slave ports tx stats */
+	/* Verify child ports tx stats */
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+			TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 
 	TEST_ASSERT_EQUAL(port_stats.opackets,
-			(uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+			(uint64_t)TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT,
 			"Port (%d) opackets value (%u) not as expected (%d)",
 			test_params->bonded_port_id, (unsigned int)port_stats.opackets,
-			TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
-			TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+			TEST_BCAST_CHILD_TX_FAIL_BURST_SIZE -
+			TEST_BCAST_CHILD_TX_FAIL_MAX_PACKETS_COUNT);
 
 
 	/* Verify that all mbufs who transmission failed have a ref value of one */
 	TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
-			TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+			TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT, 1),
 			"mbufs refcnts not as expected");
 
 	free_mbufs(&pkts_burst[tx_count],
-		TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+		TEST_BCAST_CHILD_TX_FAIL_MIN_PACKETS_COUNT);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_CHILDS (3)
 
 static int
 test_broadcast_rx_burst(void)
 {
-	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_CHILDS][MAX_PKT_BURST];
 
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+	int burst_size[BROADCAST_RX_BURST_NUM_OF_CHILDS] = { 10, 5, 30 };
 	int i, j;
 
 	memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 3 children in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BROADCAST, 0, 3, 1),
 			"Failed to initialise bonded device");
 
 	/* Generate test bursts of packets to transmit */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_CHILDS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
 				burst_size[i], "failed to generate packet burst");
 	}
 
-	/* Add rx data to slave 0 */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+	/* Add rx data to each child */
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_CHILDS; i++) {
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&gen_pkt_burst[i][0], burst_size[i]);
 	}
 
@@ -3810,33 +3810,33 @@ test_broadcast_rx_burst(void)
 			burst_size[0] + burst_size[1] + burst_size[2]);
 
 
-	/* Verify bonded slave devices rx counts */
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	/* Verify bonded child devices rx counts */
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[0], (unsigned int)port_stats.ipackets,
 			burst_size[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[0], (unsigned int)port_stats.ipackets,
 			burst_size[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[2], (unsigned int)port_stats.ipackets,
 			burst_size[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
-			"Slave Port (%d) ipackets value (%u) not as expected (%d)",
-			test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+			"Child Port (%d) ipackets value (%u) not as expected (%d)",
+			test_params->child_port_ids[3], (unsigned int)port_stats.ipackets,
 			0);
 
 	/* free mbufs allocate for rx testing */
-	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_CHILDS; i++) {
 		for (j = 0; j < MAX_PKT_BURST; j++) {
 			if (gen_pkt_burst[i][j] != NULL) {
 				rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3845,8 @@ test_broadcast_rx_burst(void)
 		}
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -3855,8 +3855,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 	int i;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
@@ -3870,11 +3870,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not enabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 1,
+				test_params->child_port_ids[i]), 1,
 				"Port (%d) promiscuous mode not enabled",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 	}
 
 	ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3886,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]), 0,
+				test_params->child_port_ids[i]), 0,
 				"Port (%d) promiscuous mode not disabled",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -3905,49 +3905,49 @@ test_broadcast_verify_mac_assignment(void)
 
 	int i;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+			test_params->child_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[2], &expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[2]);
+			test_params->child_port_ids[2]);
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_BROADCAST, 0, 4, 1),
 			"Failed to initialise bonded device");
 
-	/* Verify that all MACs are the same as first slave added to bonded
+	/* Verify that all MACs are the same as first child added to bonded
 	 * device */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of primary port",
-				test_params->slave_port_ids[i]);
+				"child port (%d) mac address not set to that of primary port",
+				test_params->child_port_ids[i]);
 	}
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[2]),
+			test_params->child_port_ids[2]),
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[i]);
+			test_params->bonded_port_id, test_params->child_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address has changed to that of primary "
+				"child port (%d) mac address has changed to that of primary "
 				"port without stop/start toggle of bonded device",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 	}
 
 	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	 * propagated to bonded device and children */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -3962,16 +3962,16 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary  port",
-			test_params->slave_port_ids[i]);
+			test_params->child_port_ids[i]);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"child port (%d) mac address not set to that of new primary "
+				"port", test_params->child_port_ids[i]);
 	}
 
 	/* Set explicit MAC address */
@@ -3986,71 +3986,71 @@ test_broadcast_verify_mac_assignment(void)
 	TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
 			"bonded port (%d) mac address not set to that of new primary port",
-			test_params->slave_port_ids[i]);
+			test_params->child_port_ids[i]);
 
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[i], &read_mac_addr),
 				"Failed to get mac address (port %d)",
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
 				sizeof(read_mac_addr)),
-				"slave port (%d) mac address not set to that of new primary "
-				"port", test_params->slave_port_ids[i]);
+				"child port (%d) mac address not set to that of new primary "
+				"port", test_params->child_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_CHILDS (4)
 static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_child_link_status_change_behaviour(void)
 {
-	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_CHILDS][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count;
+	int i, burst_size, child_count;
 
 	memset(pkt_burst, 0, sizeof(pkt_burst));
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
-				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+	/* Initialize bonded device with 4 children in broadcast mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
+				BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_CHILDS,
 				1), "Failed to initialise bonded device");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current children count / active children count */
+	child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(child_count, 4,
+			"Number of children (%d) is not as expected (%d).",
+			child_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 4);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(child_count, 4,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 4);
 
-	/* Set 2 slaves link status to down */
+	/* Set the link status of 2 children to down */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->child_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->child_port_ids[3], 0);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(child_count, 2,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 2);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++)
-		rte_eth_stats_reset(test_params->slave_port_ids[i]);
+	for (i = 0; i < test_params->bonded_child_count; i++)
+		rte_eth_stats_reset(test_params->child_port_ids[i]);
 
-	/* Verify that pkts are not sent on slaves with link status down */
+	/* Verify that pkts are not sent on children with link status down */
 	burst_size = 21;
 
 	TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4062,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"rte_eth_tx_burst failed\n");
 
 	rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
-	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * child_count),
 			"(%d) port_stats.opackets (%d) not as expected (%d)\n",
 			test_params->bonded_port_id, (int)port_stats.opackets,
-			burst_size * slave_count);
+			burst_size * child_count);
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[1]);
+				test_params->child_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
 			"(%d) port_stats.opackets not as expected",
-				test_params->slave_port_ids[2]);
+				test_params->child_port_ids[2]);
 
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, 0,
 			"(%d) port_stats.opackets not as expected",
-			test_params->slave_port_ids[3]);
+			test_params->child_port_ids[3]);
 
 
-	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+	for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_CHILDS; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
 				burst_size, "failed to generate packet burst");
 
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&pkt_burst[i][0], burst_size);
 	}
 
-	/* Verify that pkts are not received on slaves with link status down */
+	/* Verify that pkts are not received on children with link status down */
 	TEST_ASSERT_EQUAL(rte_eth_rx_burst(
 			test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
 			burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4110,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -4146,21 +4146,21 @@ testsuite_teardown(void)
 	free(test_params->pkt_eth_hdr);
 	test_params->pkt_eth_hdr = NULL;
 
-	/* Clean up and remove slaves from bonded device */
-	remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	remove_children_and_stop_bonded_device();
 }
 
 static void
 free_virtualpmd_tx_queue(void)
 {
-	int i, slave_port, to_free_cnt;
+	int i, child_port, to_free_cnt;
 	struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
 
 	/* Free tx queue of virtual pmd */
-	for (slave_port = 0; slave_port < test_params->bonded_slave_count;
-			slave_port++) {
+	for (child_port = 0; child_port < test_params->bonded_child_count;
+			child_port++) {
 		to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_port],
+				test_params->child_port_ids[child_port],
 				pkts_to_free, MAX_PKT_BURST);
 		for (i = 0; i < to_free_cnt; i++)
 			rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4177,11 @@ test_tlb_tx_burst(void)
 	uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
 	uint16_t pktlen;
 
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children
 			(BONDING_MODE_TLB, 1, 3, 1),
 			"Failed to initialise bonded device");
 
-	burst_size = 20 * test_params->bonded_slave_count;
+	burst_size = 20 * test_params->bonded_child_count;
 
 	TEST_ASSERT(burst_size < MAX_PKT_BURST,
 			"Burst size specified is greater than supported.\n");
@@ -4197,7 +4197,7 @@ test_tlb_tx_burst(void)
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		} else {
 			initialize_eth_header(test_params->pkt_eth_hdr,
-					(struct rte_ether_addr *)test_params->default_slave_mac,
+					(struct rte_ether_addr *)test_params->default_child_mac,
 					(struct rte_ether_addr *)dst_mac_0,
 					RTE_ETHER_TYPE_IPV4, 0, 0);
 		}
@@ -4234,26 +4234,26 @@ test_tlb_tx_burst(void)
 			burst_size);
 
 
-	/* Verify slave ports tx stats */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
-		rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+	/* Verify child ports tx stats */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
+		rte_eth_stats_get(test_params->child_port_ids[i], &port_stats[i]);
 		sum_ports_opackets += port_stats[i].opackets;
 	}
 
 	TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
-			"Total packets sent by slaves is not equal to packets sent by bond interface");
+			"Total packets sent by children is not equal to packets sent by bond interface");
 
-	/* checking if distribution of packets is balanced over slaves */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* checking if distribution of packets is balanced over children */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		TEST_ASSERT(port_stats[i].obytes > 0 &&
 				port_stats[i].obytes < all_bond_obytes,
-						"Packets are not balanced over slaves");
+						"Packets are not balanced over children");
 	}
 
-	/* Put all slaves down and try and transmit */
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	/* Put all children down and try to transmit */
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		virtual_ethdev_simulate_link_status_interrupt(
-				test_params->slave_port_ids[i], 0);
+				test_params->child_port_ids[i], 0);
 	}
 
 	/* Send burst on bonded port */
@@ -4261,11 +4261,11 @@ test_tlb_tx_burst(void)
 			burst_size);
 	TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
 
-	/* Clean ugit checkout masterp and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT (4)
 
 static int
 test_tlb_rx_burst(void)
@@ -4279,26 +4279,26 @@ test_tlb_rx_burst(void)
 
 	uint16_t i, j, nb_rx, burst_size = 17;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_TLB,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT, 1, 1),
 			"Failed to initialize bonded device");
 
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary child for bonded port (%d)",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		/* Generate test bursts of packets to transmit */
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
 				"burst generation failed");
 
-		/* Add rx data to slave */
-		virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+		/* Add rx data to child */
+		virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[i],
 				&gen_pkt_burst[0], burst_size);
 
 		/* Call rx burst on bonded device */
@@ -4307,7 +4307,7 @@ test_tlb_rx_burst(void)
 
 		TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
 
-		if (test_params->slave_port_ids[i] == primary_port) {
+		if (test_params->child_port_ids[i] == primary_port) {
 			/* Verify bonded device rx count */
 			rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
 			TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4315,27 @@ test_tlb_rx_burst(void)
 					test_params->bonded_port_id,
 					(unsigned int)port_stats.ipackets, burst_size);
 
-			/* Verify bonded slave devices rx count */
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			/* Verify bonded child devices rx count */
+			for (j = 0; j < test_params->bonded_child_count; j++) {
+				rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
 				if (i == j) {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Child Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->child_port_ids[i],
 							(unsigned int)port_stats.ipackets, burst_size);
 				} else {
 					TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-							"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-							test_params->slave_port_ids[i],
+							"Child Port (%d) ipackets value (%u) not as expected (%d)\n",
+							test_params->child_port_ids[i],
 							(unsigned int)port_stats.ipackets, 0);
 				}
 			}
 		} else {
-			for (j = 0; j < test_params->bonded_slave_count; j++) {
-				rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+			for (j = 0; j < test_params->bonded_child_count; j++) {
+				rte_eth_stats_get(test_params->child_port_ids[j], &port_stats);
 				TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
-						"Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
-						test_params->slave_port_ids[i],
+						"Child Port (%d) ipackets value (%u) not as expected (%d)\n",
+						test_params->child_port_ids[i],
 						(unsigned int)port_stats.ipackets, 0);
 			}
 		}
@@ -4348,8 +4348,8 @@ test_tlb_rx_burst(void)
 		rte_eth_stats_reset(test_params->bonded_port_id);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -4358,14 +4358,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	int i, primary_port, promiscuous_en;
 	int ret;
 
-	/* Initialize bonded device with 4 slaves in transmit load balancing mode */
-	TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_TLB, 0, 4, 1),
 			"Failed to initialize bonded device");
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
 	TEST_ASSERT(primary_port >= 0,
-			"failed to get primary slave for bonded port (%d)",
+			"failed to get primary child for bonded port (%d)",
 			test_params->bonded_port_id);
 
 	ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4377,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
 	TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 			"Port (%d) promiscuous mode not enabled\n",
 			test_params->bonded_port_id);
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
-		if (primary_port == test_params->slave_port_ids[i]) {
+				test_params->child_port_ids[i]);
+		if (primary_port == test_params->child_port_ids[i]) {
 			TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
 					"Port (%d) promiscuous mode not enabled\n",
 					test_params->bonded_port_id);
@@ -4402,16 +4402,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
 			"Port (%d) promiscuous mode not disabled\n",
 			test_params->bonded_port_id);
 
-	for (i = 0; i < test_params->bonded_slave_count; i++) {
+	for (i = 0; i < test_params->bonded_child_count; i++) {
 		promiscuous_en = rte_eth_promiscuous_get(
-				test_params->slave_port_ids[i]);
+				test_params->child_port_ids[i]);
 		TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
-				"slave port (%d) promiscuous mode not disabled\n",
-				test_params->slave_port_ids[i]);
+				"child port (%d) promiscuous mode not disabled\n",
+				test_params->child_port_ids[i]);
 	}
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
@@ -4420,19 +4420,19 @@ test_tlb_verify_mac_assignment(void)
 	struct rte_ether_addr read_mac_addr;
 	struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &expected_mac_addr_0),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+			test_params->child_port_ids[0]);
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &expected_mac_addr_1),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 
-	/* Initialize bonded device with 2 slaves in active backup mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 2 children in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_TLB, 0, 2, 1),
 			"Failed to initialize bonded device");
 
-	/* Verify that bonded MACs is that of first slave and that the other slave
+	/* Verify that bonded MAC is that of first child and that the other child
 	 * MAC hasn't been changed */
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -4442,27 +4442,27 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[1]);
 
 	/* change primary and verify that MAC addresses haven't changed */
 	TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
-			test_params->slave_port_ids[1]), 0,
+			test_params->child_port_ids[1]), 0,
 			"Failed to set bonded port (%d) primary port to (%d)",
-			test_params->bonded_port_id, test_params->slave_port_ids[1]);
+			test_params->bonded_port_id, test_params->child_port_ids[1]);
 
 	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
 			"Failed to get mac address (port %d)",
@@ -4472,24 +4472,24 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[1]);
 
 	/* stop / start bonded device and verify that primary MAC address is
-	 * propagated to bonded device and slaves */
+	 * propagated to bonded device and children */
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
 			"Failed to stop bonded port %u",
@@ -4506,21 +4506,21 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of primary port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of primary port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of primary port",
+			test_params->child_port_ids[1]);
 
 
 	/* Set explicit MAC address */
@@ -4537,36 +4537,36 @@ test_tlb_verify_mac_assignment(void)
 			"bonded port (%d) mac address not set to that of bonded port",
 			test_params->bonded_port_id);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[0], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 	TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not as expected",
-			test_params->slave_port_ids[0]);
+			"child port (%d) mac address not as expected",
+			test_params->child_port_ids[0]);
 
-	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+	TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->child_port_ids[1], &read_mac_addr),
 			"Failed to get mac address (port %d)",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 	TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
 			sizeof(read_mac_addr)),
-			"slave port (%d) mac address not set to that of bonded port",
-			test_params->slave_port_ids[1]);
+			"child port (%d) mac address not set to that of bonded port",
+			test_params->child_port_ids[1]);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
 static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_child_link_status_change_failover(void)
 {
-	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+	struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT][MAX_PKT_BURST];
 	struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
 	struct rte_eth_stats port_stats;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	int i, burst_size, slave_count, primary_port;
+	int i, burst_size, child_count, primary_port;
 
 	burst_size = 21;
 
@@ -4574,61 +4574,61 @@ test_tlb_verify_slave_link_status_change_failover(void)
 
 
 
-	/* Initialize bonded device with 4 slaves in round robin mode */
-	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+	/* Initialize bonded device with 4 children in transmit load balancing mode */
+	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_children(
 			BONDING_MODE_TLB, 0,
-			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
-			"Failed to initialize bonded device with slaves");
+			TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT, 1),
+			"Failed to initialize bonded device with children");
 
-	/* Verify Current Slaves Count /Active Slave Count is */
-	slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+	/* Verify current children count / active children count */
+	child_count = rte_eth_bond_children_get(test_params->bonded_port_id, children,
 			RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, 4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	TEST_ASSERT_EQUAL(child_count, 4,
+			"Number of children (%d) is not as expected (%d).\n",
+			child_count, 4);
 
-	slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
-			slaves, RTE_MAX_ETHPORTS);
-	TEST_ASSERT_EQUAL(slave_count, (int)4,
-			"Number of slaves (%d) is not as expected (%d).\n",
-			slave_count, 4);
+	child_count = rte_eth_bond_active_children_get(test_params->bonded_port_id,
+			children, RTE_MAX_ETHPORTS);
+	TEST_ASSERT_EQUAL(child_count, (int)4,
+			"Number of children (%d) is not as expected (%d).\n",
+			child_count, 4);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+	TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[0],
 			"Primary port not as expected");
 
-	/* Bring 2 slaves down and verify active slave count */
+	/* Bring 2 children down and verify active child count */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 0);
+			test_params->child_port_ids[1], 0);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 0);
+			test_params->child_port_ids[3], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 2);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 2,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 2);
 
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[1], 1);
+			test_params->child_port_ids[1], 1);
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[3], 1);
+			test_params->child_port_ids[3], 1);
 
 
-	/* Bring primary port down, verify that active slave count is 3 and primary
+	/* Bring primary port down, verify that active child count is 3 and primary
 	 *  has changed */
 	virtual_ethdev_simulate_link_status_interrupt(
-			test_params->slave_port_ids[0], 0);
+			test_params->child_port_ids[0], 0);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
-			test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
-			"Number of active slaves (%d) is not as expected (%d).",
-			slave_count, 3);
+	TEST_ASSERT_EQUAL(rte_eth_bond_active_children_get(
+			test_params->bonded_port_id, children, RTE_MAX_ETHPORTS), 3,
+			"Number of active children (%d) is not as expected (%d).",
+			child_count, 3);
 
 	primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
-	TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+	TEST_ASSERT_EQUAL(primary_port, test_params->child_port_ids[2],
 			"Primary port not as expected");
 	rte_delay_us(500000);
-	/* Verify that pkts are sent on new primary slave */
+	/* Verify that pkts are sent on new primary child */
 	for (i = 0; i < 4; i++) {
 		TEST_ASSERT_EQUAL(generate_test_burst(
 				&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4639,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
 		rte_delay_us(11000);
 	}
 
-	rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[0], &port_stats);
 	TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[0]);
+			test_params->child_port_ids[0]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[1], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[1]);
+			test_params->child_port_ids[1]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[2], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[2]);
+			test_params->child_port_ids[2]);
 
-	rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+	rte_eth_stats_get(test_params->child_port_ids[3], &port_stats);
 	TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
 			"(%d) port_stats.opackets not as expected\n",
-			test_params->slave_port_ids[3]);
+			test_params->child_port_ids[3]);
 
 
 	/* Generate packet burst for testing */
 
-	for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+	for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_CHILD_COUNT; i++) {
 		if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
 				burst_size)
 			return -1;
 
 		virtual_ethdev_add_mbufs_to_rx_queue(
-				test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+				test_params->child_port_ids[i], &pkt_burst[i][0], burst_size);
 	}
 
 	if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4684,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
 			"(%d) port_stats.ipackets not as expected\n",
 			test_params->bonded_port_id);
 
-	/* Clean up and remove slaves from bonded device */
-	return remove_slaves_and_stop_bonded_device();
+	/* Clean up and remove children from bonded device */
+	return remove_children_and_stop_bonded_device();
 }
 
-#define TEST_ALB_SLAVE_COUNT	2
+#define TEST_ALB_CHILD_COUNT	2
 
 static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
 static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4710,23 @@ test_alb_change_mac_in_reply_sent(void)
 	struct rte_ether_hdr *eth_pkt;
 	struct rte_arp_hdr *arp_pkt;
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int child_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *child_mac1, *child_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_children(BONDING_MODE_ALB,
+					0, TEST_ALB_CHILD_COUNT, 1),
+			"Failed to initialize_bonded_device_with_children.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
-			slave_idx++) {
+	for (child_idx = 0; child_idx < test_params->bonded_child_count;
+			child_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->child_port_ids[child_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4782,18 +4782,18 @@ test_alb_change_mac_in_reply_sent(void)
 			RTE_ARP_OP_REPLY);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
 
-	slave_mac1 =
-			rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 =
-			rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	child_mac1 =
+			rte_eth_devices[test_params->child_port_ids[0]].data->mac_addrs;
+	child_mac2 =
+			rte_eth_devices[test_params->child_port_ids[1]].data->mac_addrs;
 
 	/*
 	 * Checking if packets are properly distributed on bonding ports. Packets
 	 * 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->child_port_ids[child_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4802,14 @@ test_alb_change_mac_in_reply_sent(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (child_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(child_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(child_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4819,7 +4819,7 @@ test_alb_change_mac_in_reply_sent(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_children_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4832,22 +4832,22 @@ test_alb_reply_from_client(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+	int child_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
-	struct rte_ether_addr *slave_mac1, *slave_mac2;
+	struct rte_ether_addr *child_mac1, *child_mac2;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_children(BONDING_MODE_ALB,
+					0, TEST_ALB_CHILD_COUNT, 1),
+			"Failed to initialize_bonded_device_with_children.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->child_port_ids[child_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -4868,7 +4868,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4880,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4892,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
 			1);
 
 	pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4904,7 @@ test_alb_reply_from_client(void)
 					sizeof(struct rte_ether_hdr));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
 			1);
 
 	/*
@@ -4914,15 +4914,15 @@ test_alb_reply_from_client(void)
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
 
-	slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
-	slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+	child_mac1 = rte_eth_devices[test_params->child_port_ids[0]].data->mac_addrs;
+	child_mac2 = rte_eth_devices[test_params->child_port_ids[1]].data->mac_addrs;
 
 	/*
-	 * Checking if update ARP packets were properly send on slave ports.
+	 * Checking if update ARP packets were properly sent on child ports.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+				test_params->child_port_ids[child_idx], pkts_sent, MAX_PKT_BURST);
 		nb_pkts_sum += nb_pkts;
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4931,14 @@ test_alb_reply_from_client(void)
 			arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
 						sizeof(struct rte_ether_hdr));
 
-			if (slave_idx%2 == 0) {
-				if (!rte_is_same_ether_addr(slave_mac1,
+			if (child_idx%2 == 0) {
+				if (!rte_is_same_ether_addr(child_mac1,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
 				}
 			} else {
-				if (!rte_is_same_ether_addr(slave_mac2,
+				if (!rte_is_same_ether_addr(child_mac2,
 						&arp_pkt->arp_data.arp_sha)) {
 					retval = -1;
 					goto test_end;
@@ -4954,7 +4954,7 @@ test_alb_reply_from_client(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_children_and_stop_bonded_device();
 	return retval;
 }
 
@@ -4968,21 +4968,21 @@ test_alb_receive_vlan_reply(void)
 	struct rte_mbuf *pkt;
 	struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
 
-	int slave_idx, nb_pkts, pkt_idx;
+	int child_idx, nb_pkts, pkt_idx;
 	int retval = 0;
 
 	struct rte_ether_addr bond_mac, client_mac;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_children(BONDING_MODE_ALB,
+					0, TEST_ALB_CHILD_COUNT, 1),
+			"Failed to initialize_bonded_device_with_children.");
 
 	/* Flush tx queue */
 	rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->child_port_ids[child_idx], pkts_sent,
 				MAX_PKT_BURST);
 	}
 
@@ -5007,7 +5007,7 @@ test_alb_receive_vlan_reply(void)
 	arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
 	initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
 			RTE_ARP_OP_REPLY);
-	virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+	virtual_ethdev_add_mbufs_to_rx_queue(test_params->child_port_ids[0], &pkt,
 			1);
 
 	rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5016,9 @@ test_alb_receive_vlan_reply(void)
 	/*
 	 * Checking if VLAN headers in generated ARP Update packet are correct.
 	 */
-	for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+	for (child_idx = 0; child_idx < test_params->bonded_child_count; child_idx++) {
 		nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
-				test_params->slave_port_ids[slave_idx], pkts_sent,
+				test_params->child_port_ids[child_idx], pkts_sent,
 				MAX_PKT_BURST);
 
 		for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5049,7 @@ test_alb_receive_vlan_reply(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_children_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5062,9 +5062,9 @@ test_alb_ipv4_tx(void)
 	retval = 0;
 
 	TEST_ASSERT_SUCCESS(
-			initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
-					0, TEST_ALB_SLAVE_COUNT, 1),
-			"Failed to initialize_bonded_device_with_slaves.");
+			initialize_bonded_device_with_children(BONDING_MODE_ALB,
+					0, TEST_ALB_CHILD_COUNT, 1),
+			"Failed to initialize_bonded_device_with_children.");
 
 	burst_size = 32;
 
@@ -5085,7 +5085,7 @@ test_alb_ipv4_tx(void)
 	}
 
 test_end:
-	retval += remove_slaves_and_stop_bonded_device();
+	retval += remove_children_and_stop_bonded_device();
 	return retval;
 }
 
@@ -5096,34 +5096,34 @@ static struct unit_test_suite link_bonding_test_suite  = {
 	.unit_test_cases = {
 		TEST_CASE(test_create_bonded_device),
 		TEST_CASE(test_create_bonded_device_with_invalid_params),
-		TEST_CASE(test_add_slave_to_bonded_device),
-		TEST_CASE(test_add_slave_to_invalid_bonded_device),
-		TEST_CASE(test_remove_slave_from_bonded_device),
-		TEST_CASE(test_remove_slave_from_invalid_bonded_device),
-		TEST_CASE(test_get_slaves_from_bonded_device),
-		TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
-		TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+		TEST_CASE(test_add_child_to_bonded_device),
+		TEST_CASE(test_add_child_to_invalid_bonded_device),
+		TEST_CASE(test_remove_child_from_bonded_device),
+		TEST_CASE(test_remove_child_from_invalid_bonded_device),
+		TEST_CASE(test_get_children_from_bonded_device),
+		TEST_CASE(test_add_already_bonded_child_to_bonded_device),
+		TEST_CASE(test_add_remove_multiple_children_to_from_bonded_device),
 		TEST_CASE(test_start_bonded_device),
 		TEST_CASE(test_stop_bonded_device),
 		TEST_CASE(test_set_bonding_mode),
-		TEST_CASE(test_set_primary_slave),
+		TEST_CASE(test_set_primary_child),
 		TEST_CASE(test_set_explicit_bonded_mac),
 		TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
 		TEST_CASE(test_status_interrupt),
-		TEST_CASE(test_adding_slave_after_bonded_device_started),
+		TEST_CASE(test_adding_child_after_bonded_device_started),
 		TEST_CASE(test_roundrobin_tx_burst),
-		TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
-		TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
-		TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+		TEST_CASE(test_roundrobin_tx_burst_child_tx_fail),
+		TEST_CASE(test_roundrobin_rx_burst_on_single_child),
+		TEST_CASE(test_roundrobin_rx_burst_on_multiple_children),
 		TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
 		TEST_CASE(test_roundrobin_verify_mac_assignment),
-		TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
-		TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+		TEST_CASE(test_roundrobin_verify_child_link_status_change_behaviour),
+		TEST_CASE(test_roundrobin_verfiy_polling_child_link_status_change),
 		TEST_CASE(test_activebackup_tx_burst),
 		TEST_CASE(test_activebackup_rx_burst),
 		TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
 		TEST_CASE(test_activebackup_verify_mac_assignment),
-		TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+		TEST_CASE(test_activebackup_verify_child_link_status_change_failover),
 		TEST_CASE(test_balance_xmit_policy_configuration),
 		TEST_CASE(test_balance_l2_tx_burst),
 		TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5137,26 @@ static struct unit_test_suite link_bonding_test_suite  = {
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
 		TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
-		TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+		TEST_CASE(test_balance_tx_burst_child_tx_fail),
 		TEST_CASE(test_balance_rx_burst),
 		TEST_CASE(test_balance_verify_promiscuous_enable_disable),
 		TEST_CASE(test_balance_verify_mac_assignment),
-		TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_balance_verify_child_link_status_change_behaviour),
 		TEST_CASE(test_tlb_tx_burst),
 		TEST_CASE(test_tlb_rx_burst),
 		TEST_CASE(test_tlb_verify_mac_assignment),
 		TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
-		TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+		TEST_CASE(test_tlb_verify_child_link_status_change_failover),
 		TEST_CASE(test_alb_change_mac_in_reply_sent),
 		TEST_CASE(test_alb_reply_from_client),
 		TEST_CASE(test_alb_receive_vlan_reply),
 		TEST_CASE(test_alb_ipv4_tx),
 		TEST_CASE(test_broadcast_tx_burst),
-		TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+		TEST_CASE(test_broadcast_tx_burst_child_tx_fail),
 		TEST_CASE(test_broadcast_rx_burst),
 		TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
 		TEST_CASE(test_broadcast_verify_mac_assignment),
-		TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+		TEST_CASE(test_broadcast_verify_child_link_status_change_behaviour),
 		TEST_CASE(test_reconfigure_bonded_device),
 		TEST_CASE(test_close_bonded_device),
 
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b89..b20ad9c4000d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define CHILD_COUNT (4)
 
 #define RX_RING_SIZE 1024
 #define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
 
 #define BONDED_DEV_NAME         ("net_bonding_m4_bond_dev")
 
-#define SLAVE_DEV_NAME_FMT      ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT      ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT      ("net_virt_%d_tx")
+#define CHILD_DEV_NAME_FMT      ("net_virt_%d")
+#define CHILD_RX_QUEUE_FMT      ("net_virt_%d_rx")
+#define CHILD_TX_QUEUE_FMT      ("net_virt_%d_tx")
 
 #define INVALID_SOCKET_ID       (-1)
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr child_mac_default = {
 	{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
 };
 
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
 	{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
 };
 
-struct slave_conf {
+struct child_conf {
 	struct rte_ring *rx_queue;
 	struct rte_ring *tx_queue;
 	uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
 
 struct link_bonding_unittest_params {
 	uint8_t bonded_port_id;
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct child_conf child_ports[CHILD_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
-#define TEST_DEFAULT_SLAVE_COUNT     RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT           TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT          TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT       TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT     TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_CHILD_COUNT     RTE_DIM(test_params.child_ports)
+#define TEST_RX_CHILD_COUT           TEST_DEFAULT_CHILD_COUNT
+#define TEST_TX_CHILD_COUNT          TEST_DEFAULT_CHILD_COUNT
+#define TEST_MARKER_CHILD_COUT       TEST_DEFAULT_CHILD_COUNT
+#define TEST_EXPIRED_CHILD_COUNT     TEST_DEFAULT_CHILD_COUNT
+#define TEST_PROMISC_CHILD_COUNT     TEST_DEFAULT_CHILD_COUNT
 
 static struct link_bonding_unittest_params test_params  = {
 	.bonded_port_id = INVALID_PORT_ID,
-	.slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+	.child_ports = { [0 ... CHILD_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
 
 	.mbuf_pool = NULL,
 };
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a child
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.child_ports, \
+		RTE_DIM(test_params.child_ports))
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a child
  * in this test and satisfy given condition.
  *
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
  * _condition condition that need to be checked
  */
 #define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
 	if (!!(_condition))
 
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a child of a bonded
  * device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
  * */
-#define FOR_EACH_SLAVE(_i, _slave) \
-	FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_CHILD(_i, _child) \
+	FOR_EACH_PORT_IF(_i, _child, (_child)->bonded != 0)
 
 /*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from the child's TX queue.
+ * child child port
  * buffer for packets
  * size size of buffer
  * return number of packets or negative error number
  */
 static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+child_get_pkts(struct child_conf *child, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+	return rte_ring_dequeue_burst(child->tx_queue, (void **)buf,
 			size, NULL);
 }
 
 /*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into the child's RX queue.
+ * child child port
  * buffer for packets
  * size number of packets to be injected
  * return number of queued packets or negative error number
  */
 static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+child_put_pkts(struct child_conf *child, struct rte_mbuf **buf, uint16_t size)
 {
-	return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+	return rte_ring_enqueue_burst(child->rx_queue, (void **)buf,
 			size, NULL);
 }
 
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
 }
 
 static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_child(struct child_conf *child, uint8_t start)
 {
 	struct rte_ether_addr addr, addr_check;
 	int retval;
 
 	/* Some sanity check */
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
-	RTE_VERIFY(slave->bonded == 0);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(test_params.child_ports <= child &&
+		child - test_params.child_ports < (int)RTE_DIM(test_params.child_ports));
+	RTE_VERIFY(child->bonded == 0);
+	RTE_VERIFY(child->port_id != INVALID_PORT_ID);
 
-	rte_ether_addr_copy(&slave_mac_default, &addr);
-	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+	rte_ether_addr_copy(&child_mac_default, &addr);
+	addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = child->port_id;
 
-	rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+	rte_eth_dev_mac_addr_remove(child->port_id, &addr);
 
-	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
-		"Failed to set slave MAC address");
+	TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(child->port_id, &addr, 0),
+		"Failed to set child MAC address");
 
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
-		slave->port_id),
-			"Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
-			(uint8_t)(slave - test_params.slave_ports), slave->port_id,
+	TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params.bonded_port_id,
+		child->port_id),
+			"Failed to add child (idx=%u, id=%u) to bonding (id=%u)",
+			(uint8_t)(child - test_params.child_ports), child->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 1;
+	child->bonded = 1;
 	if (start) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
-			"Failed to start slave %u", slave->port_id);
+		TEST_ASSERT_SUCCESS(rte_eth_dev_start(child->port_id),
+			"Failed to start child %u", child->port_id);
 	}
 
-	retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
-	TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+	retval = rte_eth_macaddr_get(child->port_id, &addr_check);
+	TEST_ASSERT_SUCCESS(retval, "Failed to get child mac address: %s",
 			    strerror(-retval));
 	TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
-			"Slave MAC address is not as expected");
+			"Child MAC address is not as expected");
 
-	RTE_VERIFY(slave->lacp_parnter_state == 0);
+	RTE_VERIFY(child->lacp_parnter_state == 0);
 	return 0;
 }
 
 static int
-remove_slave(struct slave_conf *slave)
+remove_child(struct child_conf *child)
 {
-	ptrdiff_t slave_idx = slave - test_params.slave_ports;
+	ptrdiff_t child_idx = child - test_params.child_ports;
 
-	RTE_VERIFY(test_params.slave_ports <= slave &&
-		slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+	RTE_VERIFY(test_params.child_ports <= child &&
+		child_idx < (ptrdiff_t)RTE_DIM(test_params.child_ports));
 
-	RTE_VERIFY(slave->bonded == 1);
-	RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+	RTE_VERIFY(child->bonded == 1);
+	RTE_VERIFY(child->port_id != INVALID_PORT_ID);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(child->rx_queue), 0,
+		"Child %u rx queue not empty while removing from bonding.",
+		child->port_id);
 
-	TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
-		"Slave %u tx queue not empty while removing from bonding.",
-		slave->port_id);
+	TEST_ASSERT_EQUAL(rte_ring_count(child->tx_queue), 0,
+		"Child %u tx queue not empty while removing from bonding.",
+		child->port_id);
 
-	TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
-			slave->port_id), 0,
-			"Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
-			(uint8_t)slave_idx, slave->port_id,
+	TEST_ASSERT_EQUAL(rte_eth_bond_child_remove(test_params.bonded_port_id,
+			child->port_id), 0,
+			"Failed to remove child (idx=%u, id=%u) from bonding (id=%u)",
+			(uint8_t)child_idx, child->port_id,
 			test_params.bonded_port_id);
 
-	slave->bonded = 0;
-	slave->lacp_parnter_state = 0;
+	child->bonded = 0;
+	child->lacp_parnter_state = 0;
 	return 0;
 }
 
 static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t child_id, struct rte_mbuf *lacp_pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
 	slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
 	RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
 
-	lacpdu_rx_count[slave_id]++;
+	lacpdu_rx_count[child_id]++;
 	rte_pktmbuf_free(lacp_pkt);
 }
 
 static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_children(uint16_t child_count, uint8_t external_sm)
 {
 	uint8_t i;
 	int ret;
 
 	RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
 
-	for (i = 0; i < slave_count; i++) {
-		TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+	for (i = 0; i < child_count; i++) {
+		TEST_ASSERT_SUCCESS(add_child(&test_params.child_ports[i], 1),
 			"Failed to add port %u to bonded device.\n",
-			test_params.slave_ports[i].port_id);
+			test_params.child_ports[i].port_id);
 	}
 
 	/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_children_and_stop_bonded_device(void)
 {
-	struct slave_conf *slave;
+	struct child_conf *child;
 	int retval;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 	uint16_t i;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
 			"Failed to stop bonded port %u",
 			test_params.bonded_port_id);
 
-	FOR_EACH_SLAVE(i, slave)
-		remove_slave(slave);
+	FOR_EACH_CHILD(i, child)
+		remove_child(child);
 
-	retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
-		RTE_DIM(slaves));
+	retval = rte_eth_bond_children_get(test_params.bonded_port_id, children,
+		RTE_DIM(children));
 
 	TEST_ASSERT_EQUAL(retval, 0,
-		"Expected bonded device %u have 0 slaves but returned %d.",
+		"Expected bonded device %u to have 0 children but returned %d.",
 			test_params.bonded_port_id, retval);
 
-	FOR_EACH_PORT(i, slave) {
-		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+	FOR_EACH_PORT(i, child) {
+		TEST_ASSERT_SUCCESS(rte_eth_dev_stop(child->port_id),
 				"Failed to stop bonded port %u",
-				slave->port_id);
+				child->port_id);
 
-		TEST_ASSERT(slave->bonded == 0,
-			"Port id=%u is still marked as enslaved.", slave->port_id);
+		TEST_ASSERT(child->bonded == 0,
+			"Port id=%u is still marked as bonded.", child->port_id);
 	}
 
 	return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
 {
 	int retval, nb_mbuf_per_pool;
 	char name[RTE_ETH_NAME_MAX_LEN];
-	struct slave_conf *port;
+	struct child_conf *port;
 	const uint8_t socket_id = rte_socket_id();
 	uint16_t i;
 
@@ -400,10 +400,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(i, port) {
-		port = &test_params.slave_ports[i];
+		port = &test_params.child_ports[i];
 
 		if (port->rx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), CHILD_RX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
 		}
 
 		if (port->tx_queue == NULL) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), CHILD_TX_QUEUE_FMT, i);
 			TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
 			port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
 			TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
 		}
 
 		if (port->port_id == INVALID_PORT_ID) {
-			retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+			retval = snprintf(name, RTE_DIM(name), CHILD_DEV_NAME_FMT, i);
 			TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
 			retval = rte_eth_from_rings(name, &port->rx_queue, 1,
 					&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct child_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
  * frame but not LACP
  */
 static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct child_conf *child, struct rte_mbuf *pkt)
 {
 	struct rte_ether_hdr *hdr;
 	struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 	/* Change source address to partner address */
 	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
 	slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		child->port_id;
 
 	lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
 	/* Save last received state */
-	slave->lacp_parnter_state = lacp->actor.state;
+	child->lacp_parnter_state = lacp->actor.state;
 	/* Change it into LACP replay by matching parameters. */
 	memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
 		sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 }
 
 /*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from the given child, searches for LACP packets and replies to them.
  *
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives burst of packets from child. Looks for LACP packet. Drops
  * all other packets. Prepares response LACP and sends it back.
  *
  * return number of LACP received and replied, -1 on error.
  */
 static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct child_conf *child)
 {
 	int retval;
 	struct rte_mbuf *rx_buf[MAX_PKT_BURST];
 	struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
 	uint16_t lacp_tx_buf_cnt = 0, i;
 
-	retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
-	TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
-			slave->port_id);
+	retval = child_get_pkts(child, rx_buf, RTE_DIM(rx_buf));
+	TEST_ASSERT(retval >= 0, "Getting child %u packets failed.",
+			child->port_id);
 
 	for (i = 0; i < (uint16_t)retval; i++) {
-		if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+		if (make_lacp_reply(child, rx_buf[i]) == 0) {
 			/* reply with actor's LACP */
 			lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
 		} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
 	if (lacp_tx_buf_cnt == 0)
 		return 0;
 
-	retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+	retval = child_put_pkts(child, lacp_tx_buf, lacp_tx_buf_cnt);
 	if (retval <= lacp_tx_buf_cnt) {
 		/* retval might be negative */
 		for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
 	}
 
 	TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
-		"Failed to equeue lacp packets into slave %u tx queue.",
-		slave->port_id);
+		"Failed to enqueue lacp packets into child %u tx queue.",
+		child->port_id);
 
 	return lacp_tx_buf_cnt;
 }
 
 /*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if the given child tx queue contains packets that make mode 4
+ * handshake complete. It will drain the child queue.
  * return 0 if handshake not completed, 1 if handshake was complete,
  */
 static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct child_conf *child)
 {
 	const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
 			STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
 
-	return slave->lacp_parnter_state == expected_state;
+	return child->lacp_parnter_state == expected_state;
 }
 
 static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
 static int
 bond_handshake(void)
 {
-	struct slave_conf *slave;
+	struct child_conf *child;
 	struct rte_mbuf *buf[MAX_PKT_BURST];
 	uint16_t nb_pkts;
-	uint8_t all_slaves_done, i, j;
-	uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+	uint8_t all_children_done, i, j;
+	uint8_t status[RTE_DIM(test_params.child_ports)] = { 0 };
 	const unsigned delay = bond_get_update_timeout_ms();
 
 	/* Exchange LACP frames */
-	all_slaves_done = 0;
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	all_children_done = 0;
+	for (i = 0; i < 30 && all_children_done == 0; ++i) {
 		rte_delay_ms(delay);
 
-		all_slaves_done = 1;
-		FOR_EACH_SLAVE(j, slave) {
-			/* If response already send, skip slave */
+		all_children_done = 1;
+		FOR_EACH_CHILD(j, child) {
+			/* If response already sent, skip child */
 			if (status[j] != 0)
 				continue;
 
-			if (bond_handshake_reply(slave) < 0) {
-				all_slaves_done = 0;
+			if (bond_handshake_reply(child) < 0) {
+				all_children_done = 0;
 				break;
 			}
 
-			status[j] = bond_handshake_done(slave);
+			status[j] = bond_handshake_done(child);
 			if (status[j] == 0)
-				all_slaves_done = 0;
+				all_children_done = 0;
 		}
 
 		nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
 		TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 	}
 	/* If response didn't send - report failure */
-	TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+	TEST_ASSERT_EQUAL(all_children_done, 1, "Bond handshake failed\n");
 
 	/* If flags doesn't match - report failure */
-	return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+	return all_children_done == 1 ? TEST_SUCCESS : TEST_FAILED;
 }
 
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_CHILD_COUT RTE_DIM(test_params.child_ports)
 static int
 test_mode4_lacp(void)
 {
 	int retval;
 
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	/* Test LACP handshake function */
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
 {
 	int retval;
 	/* Test and verify for Stable mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_STABLE,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 
 	/* test and verify for Bandwidth mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	/* test and verify selection for count mode */
-	retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+	retval = initialize_bonded_device_with_children(TEST_LACP_CHILD_COUT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
 	TEST_ASSERT_EQUAL(retval, AGG_COUNT,
 			"Wrong agg mode received from bonding device");
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
 }
 
 static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct child_conf *child,
 			struct rte_ether_addr *src_mac,
 			struct rte_ether_addr *dst_mac, uint16_t count)
 {
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
 	if (retval != (int)count)
 		return retval;
 
-	retval = slave_put_pkts(slave, pkts, count);
+	retval = child_put_pkts(child, pkts, count);
 	if (retval > 0 && retval != count)
 		free_pkts(&pkts[retval], count - retval);
 
 	TEST_ASSERT_EQUAL(retval, count,
-		"Failed to enqueue packets into slave %u RX queue", slave->port_id);
+		"Failed to enqueue packets into child %u RX queue", child->port_id);
 
 	return TEST_SUCCESS;
 }
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
 static int
 test_mode4_rx(void)
 {
-	struct slave_conf *slave;
+	struct child_conf *child;
 	uint16_t i, j;
 
 	uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
 	struct rte_ether_addr dst_mac;
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_children(TEST_PROMISC_CHILD_COUNT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -838,7 +838,7 @@ test_mode4_rx(void)
 	dst_mac.addr_bytes[0] += 2;
 
 	/* First try with promiscuous mode enabled.
-	 * Add 2 packets to each slave. First with bonding MAC address, second with
+	 * Add 2 packets to each child. First with bonding MAC address, second with
 	 * different. Check if we received all of them. */
 	retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
 	TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
 			test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_CHILD(i, child) {
+		retval = generate_and_put_packets(child, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+			child->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(child, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+			child->port_id);
 
-		/* Expect 2 packets per slave */
+		/* Expect 2 packets per child */
 		expected_pkts_cnt += 2;
 	}
 
@@ -894,16 +894,16 @@ test_mode4_rx(void)
 		test_params.bonded_port_id, rte_strerror(-retval));
 
 	expected_pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+	FOR_EACH_CHILD(i, child) {
+		retval = generate_and_put_packets(child, &src_mac, &bonded_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+			child->port_id);
 
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
-		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
-			slave->port_id);
+		retval = generate_and_put_packets(child, &src_mac, &dst_mac, 1);
+		TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to child %u",
+			child->port_id);
 
-		/* Expect only one packet per slave */
+		/* Expect only one packet per child */
 		expected_pkts_cnt += 1;
 	}
 
@@ -927,19 +927,19 @@ test_mode4_rx(void)
 	TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
 		"Expected %u packets but received only %d", expected_pkts_cnt, retval);
 
-	/* Link down test: simulate link down for first slave. */
+	/* Link down test: simulate link down for first child. */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t child_down_id = INVALID_PORT_ID;
 
-	/* Find first slave and make link down on it*/
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	/* Find first child and make link down on it */
+	FOR_EACH_CHILD(i, child) {
+		rte_eth_dev_set_link_down(child->port_id);
+		child_down_id = child->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(child_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding */
 	for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
 
 	TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
 
-	/* Put packet to each slave */
-	FOR_EACH_SLAVE(i, slave) {
+	/* Put packet to each child */
+	FOR_EACH_CHILD(i, child) {
 		void *pkt = NULL;
 
-		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+		dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = child->port_id;
+		retval = generate_and_put_packets(child, &src_mac, &dst_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
-		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
-		retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+		src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = child->port_id;
+		retval = generate_and_put_packets(child, &src_mac, &bonded_mac, 1);
 		TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
 
 		retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
 		if (retval > 0)
 			free_pkts(pkts, retval);
 
-		while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+		while (rte_ring_dequeue(child->rx_queue, (void **)&pkt) == 0)
 			rte_pktmbuf_free(pkt);
 
-		if (slave_down_id == slave->port_id)
+		if (child_down_id == child->port_id)
 			TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
 		else
 			TEST_ASSERT_NOT_EQUAL(retval, 0,
-				"Expected to receive some packets on slave %u.",
-				slave->port_id);
-		rte_eth_dev_start(slave->port_id);
+				"Expected to receive some packets on child %u.",
+				child->port_id);
+		rte_eth_dev_start(child->port_id);
 
 		for (j = 0; j < 5; j++) {
-			TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+			TEST_ASSERT(bond_handshake_reply(child) >= 0,
 				"Handshake after link up");
 
-			if (bond_handshake_done(slave) == 1)
+			if (bond_handshake_done(child) == 1)
 				break;
 		}
 
-		TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+		TEST_ASSERT(j < 5, "Failed to aggregate child after link up");
 	}
 
-	remove_slaves_and_stop_bonded_device();
+	remove_children_and_stop_bonded_device();
 	return TEST_SUCCESS;
 }
 
 static int
 test_mode4_tx_burst(void)
 {
-	struct slave_conf *slave;
+	struct child_conf *child;
 	uint16_t i, j;
 
 	uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
 		{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
 	struct rte_ether_addr bonded_mac;
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_children(TEST_TX_CHILD_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets were transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every child should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_CHILD(i, child) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = child_get_pkts(child, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(child, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 		TEST_ASSERT_EQUAL(slow_cnt, 0,
-			"slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+			"child %u unexpectedly transmitted %d SLOW packets", child->port_id,
 			slow_cnt);
 
 		TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-			"slave %u did not transmitted any packets", slave->port_id);
+			"child %u did not transmitted any packets", child->port_id);
 
 		pkts_cnt += normal_cnt;
 	}
@@ -1069,18 +1069,18 @@ test_mode4_tx_burst(void)
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
 	/* Link down test:
-	 * simulate link down for first slave. */
+	 * simulate link down for first child. */
 	delay = bond_get_update_timeout_ms();
 
-	uint8_t slave_down_id = INVALID_PORT_ID;
+	uint8_t child_down_id = INVALID_PORT_ID;
 
-	FOR_EACH_SLAVE(i, slave) {
-		rte_eth_dev_set_link_down(slave->port_id);
-		slave_down_id = slave->port_id;
+	FOR_EACH_CHILD(i, child) {
+		rte_eth_dev_set_link_down(child->port_id);
+		child_down_id = child->port_id;
 		break;
 	}
 
-	RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+	RTE_VERIFY(child_down_id != INVALID_PORT_ID);
 
 	/* Give some time to rearrange bonding. */
 	for (i = 0; i < 3; i++) {
@@ -1110,19 +1110,19 @@ test_mode4_tx_burst(void)
 
 	TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
 
-	/* Check if packets was transmitted properly. Every slave should have
+	/* Check if packets were transmitted properly. Every child should have
 	 * at least one packet, and sum must match. Under normal operation
 	 * there should be no LACP nor MARKER frames. */
 	pkts_cnt = 0;
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_CHILD(i, child) {
 		uint16_t normal_cnt, slow_cnt;
 
-		retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+		retval = child_get_pkts(child, pkts, RTE_DIM(pkts));
 		normal_cnt = 0;
 		slow_cnt = 0;
 
 		for (j = 0; j < retval; j++) {
-			if (make_lacp_reply(slave, pkts[j]) == 1)
+			if (make_lacp_reply(child, pkts[j]) == 1)
 				normal_cnt++;
 			else
 				slow_cnt++;
@@ -1130,17 +1130,17 @@ test_mode4_tx_burst(void)
 
 		free_pkts(pkts, normal_cnt + slow_cnt);
 
-		if (slave_down_id == slave->port_id) {
+		if (child_down_id == child->port_id) {
 			TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
-				"slave %u enexpectedly transmitted %u packets",
-				normal_cnt + slow_cnt, slave->port_id);
+				"child %u unexpectedly transmitted %u packets",
+				child->port_id, normal_cnt + slow_cnt);
 		} else {
 			TEST_ASSERT_EQUAL(slow_cnt, 0,
-				"slave %u unexpectedly transmitted %d SLOW packets",
-				slave->port_id, slow_cnt);
+				"child %u unexpectedly transmitted %d SLOW packets",
+				child->port_id, slow_cnt);
 
 			TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
-				"slave %u did not transmitted any packets", slave->port_id);
+				"child %u did not transmit any packets", child->port_id);
 		}
 
 		pkts_cnt += normal_cnt;
@@ -1149,11 +1149,11 @@ test_mode4_tx_burst(void)
 	TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
 		"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
 
-	return remove_slaves_and_stop_bonded_device();
+	return remove_children_and_stop_bonded_device();
 }
 
 static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct child_conf *child)
 {
 	struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
 			struct marker_header *);
@@ -1166,7 +1166,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 	rte_ether_addr_copy(&parnter_mac_default,
 			&marker_hdr->eth_hdr.src_addr);
 	marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
-		slave->port_id;
+		child->port_id;
 
 	marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
@@ -1177,7 +1177,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 			offsetof(struct marker, reserved_90) -
 			offsetof(struct marker, requester_port);
 	RTE_VERIFY(marker_hdr->marker.info_length == 16);
-	marker_hdr->marker.requester_port = slave->port_id + 1;
+	marker_hdr->marker.requester_port = child->port_id + 1;
 	marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
 	marker_hdr->marker.terminator_length = 0;
 }
@@ -1185,7 +1185,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
 static int
 test_mode4_marker(void)
 {
-	struct slave_conf *slave;
+	struct child_conf *child;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	struct rte_mbuf *marker_pkt;
 	struct marker_header *marker_hdr;
@@ -1196,7 +1196,7 @@ test_mode4_marker(void)
 	uint8_t i, j;
 	const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 
-	retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+	retval = initialize_bonded_device_with_children(TEST_MARKER_CHILD_COUT,
 						      0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
@@ -1205,17 +1205,17 @@ test_mode4_marker(void)
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
 	delay = bond_get_update_timeout_ms();
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_CHILD(i, child) {
 		marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
-		init_marker(marker_pkt, slave);
+		init_marker(marker_pkt, child);
 
-		retval = slave_put_pkts(slave, &marker_pkt, 1);
+		retval = child_put_pkts(child, &marker_pkt, 1);
 		if (retval != 1)
 			rte_pktmbuf_free(marker_pkt);
 
 		TEST_ASSERT_EQUAL(retval, 1,
-			"Failed to send marker packet to slave %u", slave->port_id);
+			"Failed to send marker packet to child %u", child->port_id);
 
 		for (j = 0; j < 20; ++j) {
 			rte_delay_ms(delay);
@@ -1233,13 +1233,13 @@ test_mode4_marker(void)
 
 			/* Check if LACP packet was send by state machines
 			   First and only packet must be a maker response */
-			retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+			retval = child_get_pkts(child, pkts, MAX_PKT_BURST);
 			if (retval == 0)
 				continue;
 			if (retval > 1)
 				free_pkts(pkts, retval);
 
-			TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+			TEST_ASSERT_EQUAL(retval, 1, "failed to get child packets");
 			nb_pkts = retval;
 
 			marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1263,7 @@ test_mode4_marker(void)
 		TEST_ASSERT(j < 20, "Marker response not found");
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval,	"Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1272,7 +1272,7 @@ test_mode4_marker(void)
 static int
 test_mode4_expired(void)
 {
-	struct slave_conf *slave, *exp_slave = NULL;
+	struct child_conf *child, *exp_child = NULL;
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	int retval;
 	uint32_t old_delay;
@@ -1282,7 +1282,7 @@ test_mode4_expired(void)
 
 	struct rte_eth_bond_8023ad_conf conf;
 
-	retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+	retval = initialize_bonded_device_with_children(TEST_EXPIRED_CHILD_COUNT,
 						      0);
 	/* Set custom timeouts to make test last shorter. */
 	rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1298,8 @@ test_mode4_expired(void)
 
 	/* Wait for new settings to be applied. */
 	for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
-		FOR_EACH_SLAVE(j, slave)
-			bond_handshake_reply(slave);
+		FOR_EACH_CHILD(j, child)
+			bond_handshake_reply(child);
 
 		rte_delay_ms(conf.update_timeout_ms);
 	}
@@ -1307,13 +1307,13 @@ test_mode4_expired(void)
 	retval = bond_handshake();
 	TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
 
-	/* Find first slave */
-	FOR_EACH_SLAVE(i, slave) {
-		exp_slave = slave;
+	/* Find first child */
+	FOR_EACH_CHILD(i, child) {
+		exp_child = child;
 		break;
 	}
 
-	RTE_VERIFY(exp_slave != NULL);
+	RTE_VERIFY(exp_child != NULL);
 
 	/* When one of partners do not send or respond to LACP frame in
 	 * conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1325,16 @@ test_mode4_expired(void)
 		TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
 			retval);
 
-		FOR_EACH_SLAVE(i, slave) {
-			retval = bond_handshake_reply(slave);
+		FOR_EACH_CHILD(i, child) {
+			retval = bond_handshake_reply(child);
 			TEST_ASSERT(retval >= 0, "Handshake failed");
 
-			/* Remove replay for slave that suppose to be expired. */
-			if (slave == exp_slave) {
-				while (rte_ring_count(slave->rx_queue) > 0) {
+			/* Remove the reply for the child that is supposed to be expired. */
+			if (child == exp_child) {
+				while (rte_ring_count(child->rx_queue) > 0) {
 					void *pkt = NULL;
 
-					rte_ring_dequeue(slave->rx_queue, &pkt);
+					rte_ring_dequeue(child->rx_queue, &pkt);
 					rte_pktmbuf_free(pkt);
 				}
 			}
@@ -1348,17 +1348,17 @@ test_mode4_expired(void)
 			retval);
 	}
 
-	/* After test only expected slave should be in EXPIRED state */
-	FOR_EACH_SLAVE(i, slave) {
-		if (slave == exp_slave)
-			TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
-				"Slave %u should be in expired.", slave->port_id);
+	/* After test only expected child should be in EXPIRED state */
+	FOR_EACH_CHILD(i, child) {
+		if (child == exp_child)
+			TEST_ASSERT(child->lacp_parnter_state & STATE_EXPIRED,
+				"Child %u should be in expired.", child->port_id);
 		else
-			TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
-				"Slave %u should be operational.", slave->port_id);
+			TEST_ASSERT_EQUAL(bond_handshake_done(child), 1,
+				"Child %u should be operational.", child->port_id);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1372,17 +1372,17 @@ test_mode4_ext_ctrl(void)
 	 *   . try to transmit lacpdu (should fail)
 	 *   . try to set collecting and distributing flags (should fail)
 	 * reconfigure w/external sm
-	 *   . transmit one lacpdu on each slave using new api
-	 *   . make sure each slave receives one lacpdu using the callback api
-	 *   . transmit one data pdu on each slave (should fail)
+	 *   . transmit one lacpdu on each child using new api
+	 *   . make sure each child receives one lacpdu using the callback api
+	 *   . transmit one data pdu on each child (should fail)
 	 *   . enable distribution and collection, send one data pdu each again
 	 */
 
 	int retval;
-	struct slave_conf *slave = NULL;
+	struct child_conf *child = NULL;
 	uint8_t i;
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[CHILD_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1396,30 +1396,30 @@ test_mode4_ext_ctrl(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < CHILD_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+	retval = initialize_bonded_device_with_children(TEST_TX_CHILD_COUNT, 0);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_CHILD(i, child) {
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]),
-				 "Slave should not allow manual LACP xmit");
+						child->port_id, lacp_tx_buf[i]),
+				 "Child should not allow manual LACP xmit");
 		TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
 						test_params.bonded_port_id,
-						slave->port_id, 1),
-				 "Slave should not allow external state controls");
+						child->port_id, 1),
+				 "Child should not allow external state controls");
 	}
 
 	free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1430,13 +1430,13 @@ static int
 test_mode4_ext_lacp(void)
 {
 	int retval;
-	struct slave_conf *slave = NULL;
-	uint8_t all_slaves_done = 0, i;
+	struct child_conf *child = NULL;
+	uint8_t all_children_done = 0, i;
 	uint16_t nb_pkts;
 	const unsigned int delay = bond_get_update_timeout_ms();
 
-	struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
-	struct rte_mbuf *buf[SLAVE_COUNT];
+	struct rte_mbuf *lacp_tx_buf[CHILD_COUNT];
+	struct rte_mbuf *buf[CHILD_COUNT];
 	struct rte_ether_addr src_mac, dst_mac;
 	struct lacpdu_header lacpdu = {
 		.lacpdu = {
@@ -1450,14 +1450,14 @@ test_mode4_ext_lacp(void)
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
 			      RTE_ETHER_TYPE_SLOW, 0, 0);
 
-	for (i = 0; i < SLAVE_COUNT; i++) {
+	for (i = 0; i < CHILD_COUNT; i++) {
 		lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
 		rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
 			   &lacpdu, sizeof(lacpdu));
 		rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
 	}
 
-	retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+	retval = initialize_bonded_device_with_children(TEST_TX_CHILD_COUNT, 1);
 	TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
 
 	memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1466,22 @@ test_mode4_ext_lacp(void)
 	for (i = 0; i < 30; ++i)
 		rte_delay_ms(delay);
 
-	FOR_EACH_SLAVE(i, slave) {
+	FOR_EACH_CHILD(i, child) {
 		retval = rte_eth_bond_8023ad_ext_slowtx(
 						test_params.bonded_port_id,
-						slave->port_id, lacp_tx_buf[i]);
+						child->port_id, lacp_tx_buf[i]);
 		TEST_ASSERT_SUCCESS(retval,
-				    "Slave should allow manual LACP xmit");
+				    "Child should allow manual LACP xmit");
 	}
 
 	nb_pkts = bond_tx(NULL, 0);
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
 
-	FOR_EACH_SLAVE(i, slave) {
-		nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
-		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+	FOR_EACH_CHILD(i, child) {
+		nb_pkts = child_get_pkts(child, buf, RTE_DIM(buf));
+		TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on child %d\n",
 				  nb_pkts, i);
-		slave_put_pkts(slave, buf, nb_pkts);
+		child_put_pkts(child, buf, nb_pkts);
 	}
 
 	nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1489,26 @@ test_mode4_ext_lacp(void)
 	TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
 
 	/* wait for the periodic callback to run */
-	for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+	for (i = 0; i < 30 && all_children_done == 0; ++i) {
 		uint8_t s, total = 0;
 
 		rte_delay_ms(delay);
-		FOR_EACH_SLAVE(s, slave) {
-			total += lacpdu_rx_count[slave->port_id];
+		FOR_EACH_CHILD(s, child) {
+			total += lacpdu_rx_count[child->port_id];
 		}
 
-		if (total >= SLAVE_COUNT)
-			all_slaves_done = 1;
+		if (total >= CHILD_COUNT)
+			all_children_done = 1;
 	}
 
-	FOR_EACH_SLAVE(i, slave) {
-		TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
-				  "Slave port %u should have received 1 lacpdu (count=%u)",
-				  slave->port_id,
-				  lacpdu_rx_count[slave->port_id]);
+	FOR_EACH_CHILD(i, child) {
+		TEST_ASSERT_EQUAL(lacpdu_rx_count[child->port_id], 1,
+				  "Child port %u should have received 1 lacpdu (count=%u)",
+				  child->port_id,
+				  lacpdu_rx_count[child->port_id]);
 	}
 
-	retval = remove_slaves_and_stop_bonded_device();
+	retval = remove_children_and_stop_bonded_device();
 	TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
 
 	return TEST_SUCCESS;
@@ -1517,10 +1517,10 @@ test_mode4_ext_lacp(void)
 static int
 check_environment(void)
 {
-	struct slave_conf *port;
+	struct child_conf *port;
 	uint8_t i, env_state;
-	uint16_t slaves[RTE_DIM(test_params.slave_ports)];
-	int slaves_count;
+	uint16_t children[RTE_DIM(test_params.child_ports)];
+	int children_count;
 
 	env_state = 0;
 	FOR_EACH_PORT(i, port) {
@@ -1540,20 +1540,20 @@ check_environment(void)
 			break;
 	}
 
-	slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
-			slaves, RTE_DIM(slaves));
+	children_count = rte_eth_bond_children_get(test_params.bonded_port_id,
+			children, RTE_DIM(children));
 
-	if (slaves_count != 0)
+	if (children_count != 0)
 		env_state |= 0x10;
 
 	TEST_ASSERT_EQUAL(env_state, 0,
 		"Environment not clean (port %u):%s%s%s%s%s",
 		port->port_id,
-		env_state & 0x01 ? " slave rx queue not clean" : "",
-		env_state & 0x02 ? " slave tx queue not clean" : "",
-		env_state & 0x04 ? " port marked as enslaved" : "",
-		env_state & 0x80 ? " slave state is not reset" : "",
-		env_state & 0x10 ? " slave count not equal 0" : ".");
+		env_state & 0x01 ? " child rx queue not clean" : "",
+		env_state & 0x02 ? " child tx queue not clean" : "",
+		env_state & 0x04 ? " port marked as child" : "",
+		env_state & 0x80 ? " child state is not reset" : "",
+		env_state & 0x10 ? " child count not equal 0" : ".");
 
 
 	return TEST_SUCCESS;
@@ -1562,7 +1562,7 @@ check_environment(void)
 static int
 test_mode4_executor(int (*test_func)(void))
 {
-	struct slave_conf *port;
+	struct child_conf *port;
 	int test_result;
 	uint8_t i;
 	void *pkt;
@@ -1581,7 +1581,7 @@ test_mode4_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 
 		FOR_EACH_PORT(i, port) {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0bf..b1eee6bd4d5a 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
 
 #include "test.h"
 
-#define SLAVE_COUNT (4)
+#define CHILD_COUNT (4)
 
 #define RXTX_RING_SIZE			1024
 #define RXTX_QUEUE_COUNT		4
 
 #define BONDED_DEV_NAME         ("net_bonding_rss")
 
-#define SLAVE_DEV_NAME_FMT      ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
+#define CHILD_DEV_NAME_FMT      ("net_null%d")
+#define CHILD_RXTX_QUEUE_FMT      ("rssconf_child%d_q%d")
 
 #define NUM_MBUFS 8191
 #define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
 #define INVALID_PORT_ID         (0xFF)
 #define INVALID_BONDING_MODE    (-1)
 
-struct slave_conf {
+struct child_conf {
 	uint16_t port_id;
 	struct rte_eth_dev_info dev_info;
 
@@ -54,7 +54,7 @@ struct slave_conf {
 	uint8_t rss_key[40];
 	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
-	uint8_t is_slave;
+	uint8_t is_child;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
 };
 
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
 	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
-	struct slave_conf slave_ports[SLAVE_COUNT];
+	struct child_conf child_ports[CHILD_COUNT];
 
 	struct rte_mempool *mbuf_pool;
 };
 
 static struct link_bonding_rssconf_unittest_params test_params  = {
 	.bond_port_id = INVALID_PORT_ID,
-	.slave_ports = {
-		[0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+	.child_ports = {
+		[0 ... CHILD_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_child = 0}
 	},
 	.mbuf_pool = NULL,
 };
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
 #define FOR_EACH(_i, _item, _array, _size) \
 	for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
 
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a child
  * in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->child_ports
+ * _child pointer to &test_params->child_ports[_idx]
  */
 #define FOR_EACH_PORT(_i, _port) \
-	FOR_EACH(_i, _port, test_params.slave_ports, \
-		RTE_DIM(test_params.slave_ports))
+	FOR_EACH(_i, _port, test_params.child_ports, \
+		RTE_DIM(test_params.child_ports))
 
 static int
 configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
 }
 
 /**
- * Remove all slaves from bonding
+ * Remove all children from bonding
  */
 static int
-remove_slaves(void)
+remove_children(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct child_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+		port = &test_params.child_ports[n];
+		if (port->is_child) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(
 					test_params.bond_port_id, port->port_id),
-					"Cannot remove slave %d from bonding", port->port_id);
-			port->is_slave = 0;
+					"Cannot remove child %d from bonding", port->port_id);
+			port->is_child = 0;
 		}
 	}
 
@@ -173,30 +173,30 @@ remove_slaves(void)
 }
 
 static int
-remove_slaves_and_stop_bonded_device(void)
+remove_children_and_stop_bonded_device(void)
 {
-	TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+	TEST_ASSERT_SUCCESS(remove_children(), "Removing children");
 	TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
 			"Failed to stop port %u", test_params.bond_port_id);
 	return TEST_SUCCESS;
 }
 
 /**
- * Add all slaves to bonding
+ * Add all children to bonding
  */
 static int
-bond_slaves(void)
+bond_children(void)
 {
 	unsigned n;
-	struct slave_conf *port;
+	struct child_conf *port;
 
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
-		if (!port->is_slave) {
-			TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-					port->port_id), "Cannot attach slave %d to the bonding",
+		port = &test_params.child_ports[n];
+		if (!port->is_child) {
+			TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params.bond_port_id,
+					port->port_id), "Cannot attach child %d to the bonding",
 					port->port_id);
-			port->is_slave = 1;
+			port->is_child = 1;
 		}
 	}
 
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
 }
 
 /**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if children RETA is synchronized with bonding port. Returns 1 if child
  * port is synced with bonding port.
  */
 static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct child_conf *port)
 {
 	unsigned i;
 
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
 }
 
 /**
- * Fetch slaves RETA
+ * Fetch children RETA
  */
 static int
-slave_reta_fetch(struct slave_conf *port) {
+child_reta_fetch(struct child_conf *port) {
 	unsigned j;
 
 	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
 }
 
 /**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add child to check if children configuration is synced with
+ * the bonding ports values after adding new child.
  */
 static int
-slave_remove_and_add(void)
+child_remove_and_add(void)
 {
-	struct slave_conf *port = &(test_params.slave_ports[0]);
+	struct child_conf *port = &(test_params.child_ports[0]);
 
-	/* 1. Remove first slave from bonding */
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
-			port->port_id), "Cannot remove slave #d from bonding");
+	/* 1. Remove first child from bonding */
+	TEST_ASSERT_SUCCESS(rte_eth_bond_child_remove(test_params.bond_port_id,
+			port->port_id), "Cannot remove child #d from bonding");
 
-	/* 2. Change removed (ex-)slave and bonding configuration to different
+	/* 2. Change removed (ex-)child and bonding configuration to different
 	 *    values
 	 */
 	reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
 	bond_reta_fetch();
 
 	reta_set(port->port_id, 2, port->dev_info.reta_size);
-	slave_reta_fetch(port);
+	child_reta_fetch(port);
 
 	TEST_ASSERT(reta_check_synced(port) == 0,
-			"Removed slave didn't should be synchronized with bonding port");
+			"Removed child didn't should be synchronized with bonding port");
 
-	/* 3. Add (ex-)slave and check if configuration changed*/
-	TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
-			port->port_id), "Cannot add slave");
+	/* 3. Add (ex-)child and check if configuration changed*/
+	TEST_ASSERT_SUCCESS(rte_eth_bond_child_add(test_params.bond_port_id,
+			port->port_id), "Cannot add child");
 
 	bond_reta_fetch();
-	slave_reta_fetch(port);
+	child_reta_fetch(port);
 
 	return reta_check_synced(port);
 }
 
 /**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over children.
  */
 static int
 test_propagate(void)
 {
 	unsigned i;
 	uint8_t n;
-	struct slave_conf *port;
+	struct child_conf *port;
 	uint8_t bond_rss_key[40];
 	struct rte_eth_rss_conf bond_rss_conf;
 
@@ -349,18 +349,18 @@ test_propagate(void)
 
 			retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
 					&bond_rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set children hash function");
 
 			FOR_EACH_PORT(n, port) {
-				port = &test_params.slave_ports[n];
+				port = &test_params.child_ports[n];
 
 				retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						&port->rss_conf);
 				TEST_ASSERT_SUCCESS(retval,
-						"Cannot take slaves RSS configuration");
+						"Cannot take children RSS configuration");
 
 				TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
-						"Hash function not propagated for slave %d",
+						"Hash function not propagated for child %d",
 						port->port_id);
 			}
 
@@ -376,11 +376,11 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.child_ports[n];
 			memset(port->rss_conf.rss_key, 0, 40);
 			retval = rte_eth_dev_rss_hash_update(port->port_id,
 					&port->rss_conf);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set children RSS keys");
 		}
 
 		memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
 		TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.child_ports[n];
 
 			retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 					&(port->rss_conf));
 
 			TEST_ASSERT_SUCCESS(retval,
-					"Cannot take slaves RSS configuration");
+					"Cannot take children RSS configuration");
 
 			/* compare keys */
 			retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
 					sizeof(bond_rss_key));
-			TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+			TEST_ASSERT(retval == 0, "Key value not propagated for child %d",
 					port->port_id);
 		}
 	}
@@ -416,10 +416,10 @@ test_propagate(void)
 
 		/* Set all keys to zero */
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.child_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					port->dev_info.reta_size);
-			TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+			TEST_ASSERT_SUCCESS(retval, "Cannot set children RETA");
 		}
 
 		TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
 		bond_reta_fetch();
 
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.child_ports[n];
 
-			slave_reta_fetch(port);
+			child_reta_fetch(port);
 			TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
 		}
 	}
@@ -459,29 +459,29 @@ test_rss(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_children(), "Bonding children failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
 
-	TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+	TEST_ASSERT(child_remove_and_add() == 1, "remove and add children success.");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_children_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
 
 
 /**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and children.
  */
 static int
 test_rss_config_lazy(void)
 {
 	struct rte_eth_rss_conf bond_rss_conf = {0};
-	struct slave_conf *port;
+	struct child_conf *port;
 	uint8_t rss_key[40];
 	uint64_t rss_hf;
 	int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
 		TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
 	}
 
-	/* Set all keys to zero for all slaves */
+	/* Set all keys to zero for all children */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.child_ports[n];
 		retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
 						       &port->rss_conf);
-		TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+		TEST_ASSERT_SUCCESS(retval, "Cannot get children RSS configuration");
 		memset(port->rss_key, 0, sizeof(port->rss_key));
 		port->rss_conf.rss_key = port->rss_key;
 		port->rss_conf.rss_key_len = sizeof(port->rss_key);
 		retval = rte_eth_dev_rss_hash_update(port->port_id,
 						     &port->rss_conf);
-		TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+		TEST_ASSERT(retval != 0, "Succeeded in setting children RSS keys");
 	}
 
 	/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
 	/*  Test RETA propagation */
 	for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
 		FOR_EACH_PORT(n, port) {
-			port = &test_params.slave_ports[n];
+			port = &test_params.child_ports[n];
 			retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
 					  port->dev_info.reta_size);
-			TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+			TEST_ASSERT(retval != 0, "Succeeded in setting children RETA");
 		}
 
 		retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
 			"Error during getting device (port %u) info: %s\n",
 			test_params.bond_port_id, strerror(-ret));
 
-	TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+	TEST_ASSERT_SUCCESS(bond_children(), "Bonding children failed");
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
 			"Failed to start bonding port (%d).", test_params.bond_port_id);
 
 	TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
 
-	remove_slaves_and_stop_bonded_device();
+	remove_children_and_stop_bonded_device();
 
 	return TEST_SUCCESS;
 }
@@ -579,13 +579,13 @@ test_setup(void)
 	int retval;
 	int port_id;
 	char name[256];
-	struct slave_conf *port;
+	struct child_conf *port;
 	struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
 
 	if (test_params.mbuf_pool == NULL) {
 
 		test_params.mbuf_pool = rte_pktmbuf_pool_create(
-			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+			"RSS_MBUF_POOL", NUM_MBUFS * CHILD_COUNT,
 			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
 
 		TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
 
 	/* Create / initialize ring eth devs. */
 	FOR_EACH_PORT(n, port) {
-		port = &test_params.slave_ports[n];
+		port = &test_params.child_ports[n];
 
 		port_id = rte_eth_dev_count_avail();
-		snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+		snprintf(name, sizeof(name), CHILD_DEV_NAME_FMT, port_id);
 
 		retval = rte_vdev_init(name, "size=64,copy=0");
 		TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
 static void
 testsuite_teardown(void)
 {
-	struct slave_conf *port;
+	struct child_conf *port;
 	uint8_t i;
 
 	/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
 
 	/* Reset environment in case test failed to do that. */
 	if (test_result != TEST_SUCCESS) {
-		TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+		TEST_ASSERT_SUCCESS(remove_children_and_stop_bonded_device(),
 			"Failed to stop bonded device");
 	}
 
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214ef9..a64a04247c0e 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
 ----------
 
 A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMDs are added as children to the bonded device.
+The VF is set as the primary child of the bonded device.
 
 A bridge must be set up on the Host connecting the tap device, which is the
 backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
 
    testpmd> create bonded device 1 0
    Created new bonded device net_bond_testpmd_0 on (port 2).
-   testpmd> add bonding slave 0 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding child 0 2
+   testpmd> add bonding child 1 2
    testpmd> show bonding config 2
 
 The syntax of the ``testpmd`` command is:
 
-set bonding primary (slave id) (port id)
+set bonding primary (child id) (port id)
 
 Set primary to P1 before starting bonding port.
 
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
 
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active children.
 
 Use P2 only for forwarding.
 
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
    testpmd> start
    testpmd> show bonding config 2
 
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active children.
 
 .. code-block:: console
 
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
 
    testpmd> clear port stats all
    testpmd> set bonding primary 0 2
-   testpmd> remove bonding slave 1 2
+   testpmd> remove bonding child 1 2
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active child.
 
 .. code-block:: console
 
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
 
    testpmd> show bonding config 2
 
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active child.
 
 .. code-block:: console
 
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
    testpmd> show port stats all.
    testpmd> show config fwd
    testpmd> show bonding config 2
-   testpmd> add bonding slave 1 2
+   testpmd> add bonding child 1 2
    testpmd> set bonding primary 1 2
    testpmd> show bonding config 2
    testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
 
 .. code-block:: console
 
-   testpmd> remove bonding slave 0 2
+   testpmd> remove bonding child 0 2
    testpmd> show bonding config 2
    testpmd> port stop 0
    testpmd> port close 0
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 0b09b0c50a7b..dd91264cd8a2 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
 
 .. code-block:: console
 
-    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
-    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+    dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,child=<PCI B:D.F device 1>,child=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+    (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,child=0000:82:00.0,child=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
 
 Vector Processing
 -----------------
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e356d..f07bb281a727 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
 The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
 ``rte_eth_dev`` ports of the same speed and duplex to provide similar
 capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (child) NICs into a single logical interface between a server
 and a switch. The new bonded PMD will then process these interfaces based on
 the mode of operation specified to provide support for features such as
 redundant links, fault tolerance and/or load balancing.
 
 The librte_net_bond library exports a C API which provides an API for the
 creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its child devices.
 
 .. note::
 
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides load balancing and fault tolerance by transmission of
-    packets in sequential order from the first available slave device through
+    packets in sequential order from the first available child device through
     the last. Packets are bulk dequeued from devices then serviced in a
     round-robin manner. This mode does not guarantee in order reception of
     packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Active Backup (Mode 1)
 
 
-    In this mode only one slave in the bond is active at any time, a different
-    slave becomes active if, and only if, the primary active slave fails,
-    thereby providing fault tolerance to slave failure. The single logical
+    In this mode only one child in the bond is active at any time, a different
+    child becomes active if, and only if, the primary active child fails,
+    thereby providing fault tolerance to child failure. The single logical
     bonded interface's MAC address is externally visible on only one NIC (port)
     to avoid confusing the network switch.
 
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
     This mode provides transmit load balancing (based on the selected
     transmission policy) and fault tolerance. The default policy (layer2) uses
     a simple calculation based on the packet flow source and destination MAC
-    addresses as well as the number of active slaves available to the bonded
-    device to classify the packet to a specific slave to transmit on. Alternate
+    addresses as well as the number of active children available to the bonded
+    device to classify the packet to a specific child to transmit on. Alternate
     transmission policies supported are layer 2+3, this takes the IP source and
-    destination addresses into the calculation of the transmit slave port and
+    destination addresses into the calculation of the transmit child port and
     the final supported policy is layer 3+4, this uses IP source and
     destination addresses as well as the TCP/UDP source and destination port.
 
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
    Broadcast (Mode 3)
 
 
-    This mode provides fault tolerance by transmission of packets on all slave
+    This mode provides fault tolerance by transmission of packets on all child
     ports.
 
 *   **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
        intervals period of less than 100ms.
 
     #. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
-       where N is the number of slaves. This is a space required for LACP
+       where N is the number of children. This is a space required for LACP
        frames. Additionally LACP packets are included in the statistics, but
        they are not returned to the application.
 
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
 
 
     This mode provides an adaptive transmit load balancing. It dynamically
-    changes the transmitting slave, according to the computed load. Statistics
+    changes the transmitting child, according to the computed load. Statistics
     are collected in 100ms intervals and scheduled every 10ms.
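
As a rough illustration of how an application selects one of the modes above at
run time, the following minimal sketch uses the long-standing
``rte_eth_bond_mode_set`` and ``rte_eth_bond_xmit_policy_set`` calls; the port
id and the choice of balance XOR with the layer 3+4 policy are assumptions made
only for this example:

.. code-block:: c

    #include <rte_eth_bond.h>

    /* Sketch: put an existing bonded port into balance (XOR) mode and
     * select the layer 3+4 transmit policy.  Both calls return 0 on success. */
    static int
    configure_balance_mode(uint16_t bonded_port_id)
    {
        int ret;

        ret = rte_eth_bond_mode_set(bonded_port_id, BONDING_MODE_BALANCE);
        if (ret != 0)
            return ret;

        return rte_eth_bond_xmit_policy_set(bonded_port_id,
                                            BALANCE_XMIT_POLICY_LAYER34);
    }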
 
 
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
 startup time during EAL initialization using the ``--vdev`` option as well as
 programmatically via the C API ``rte_eth_bond_create`` function.
 
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamic addition and removal of child devices using
+the ``rte_eth_bond_child_add`` / ``rte_eth_bond_child_remove`` APIs.
 
-After a slave device is added to a bonded device slave is stopped using
+After a child device is added to a bonded device, the child is stopped using
 ``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
 the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
 device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+child and configured as well.
 Any flow which was configured to the bond device also is configured to the added
-slave.
+child.
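
A minimal sketch of the programmatic path described above is shown below; the
``rte_eth_bond_child_add`` spelling assumes the rename proposed in this series
(the in-tree API is ``rte_eth_bond_slave_add``), and the vdev name, mode and
port ids are only examples:

.. code-block:: c

    #include <rte_eth_bond.h>

    /* Sketch: create an active-backup bond on socket 0, attach two children
     * and pick the first one as the primary port. */
    static int
    create_bond_with_children(uint16_t child0, uint16_t child1)
    {
        int bond_port;

        bond_port = rte_eth_bond_create("net_bonding0",
                                        BONDING_MODE_ACTIVE_BACKUP, 0);
        if (bond_port < 0)
            return bond_port;

        if (rte_eth_bond_child_add(bond_port, child0) != 0 ||
            rte_eth_bond_child_add(bond_port, child1) != 0)
            return -1;

        return rte_eth_bond_primary_set(bond_port, child0);
    }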
 
 Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all children are synchronized with its configuration. This mode is
+intended to provide RSS configuration on children transparent for client
 application implementation.
 
 Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its children. That lets us define the meaning
 of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without pointing to any child inside. It is required to ensure
 consistency and made it more error-proof.
 
 RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded children. RETA size is a GCD of all its RETA's sizes, so
+it can be easily used as a pattern providing expected behavior, even if child
 RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the children and default key for device is used.
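
For example, a single RSS key programmed on the bonding port is expected to be
pushed down to every child; the sketch below uses the generic
``rte_eth_dev_rss_hash_update`` ethdev call, with the key contents and hash
type chosen arbitrarily for illustration:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Sketch: set one RSS key on the bonding port; the bonding PMD then
     * propagates it to all children that accept this key size. */
    static int
    set_bond_rss_key(uint16_t bonded_port_id, uint8_t *key, uint8_t key_len)
    {
        struct rte_eth_rss_conf rss_conf = {
            .rss_key = key,
            .rss_key_len = key_len,
            .rss_hf = RTE_ETH_RSS_IP,
        };

        return rte_eth_dev_rss_hash_update(bonded_port_id, &rss_conf);
    }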
 
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with RSS configuration, there is flow consistency in the bonded children for the
 next rte flow operations:
 
 Validate:
-	- Validate flow for each slave, failure at least for one slave causes to
+	- Validate flow for each child, failure for at least one child causes
 	  bond validation failure.
 
 Create:
-	- Create the flow in all slaves.
-	- Save all the slaves created flows objects in bonding internal flow
+	- Create the flow in all children.
+	- Save all the children's created flow objects in the bonding internal flow
 	  structure.
-	- Failure in flow creation for existed slave rejects the flow.
-	- Failure in flow creation for new slaves in slave adding time rejects
-	  the slave.
+	- Failure in flow creation for an existing child rejects the flow.
+	- Failure in flow creation for a new child at child adding time rejects
+	  the child.
 
 Destroy:
-	- Destroy the flow in all slaves and release the bond internal flow
+	- Destroy the flow in all children and release the bond internal flow
 	  memory.
 
 Flush:
-	- Destroy all the bonding PMD flows in all the slaves.
+	- Destroy all the bonding PMD flows in all the children.
 
 .. note::
 
-    Don't call slaves flush directly, It destroys all the slave flows which
+    Don't call a child's flush directly; it destroys all the child flows which
     may include external flows or the bond internal LACP flow.
 
 Query:
-	- Summarize flow counters from all the slaves, relevant only for
+	- Summarize flow counters from all the children, relevant only for
 	  ``RTE_FLOW_ACTION_TYPE_COUNT``.
 
 Isolate:
-	- Call to flow isolate for all slaves.
-	- Failure in flow isolation for existed slave rejects the isolate mode.
-	- Failure in flow isolation for new slaves in slave adding time rejects
-	  the slave.
+	- Call to flow isolate for all children.
+	- Failure in flow isolation for an existing child rejects the isolate mode.
+	- Failure in flow isolation for a new child at child adding time rejects
+	  the child.
 
 All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to children).
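
To make the per-child propagation described above concrete, here is a minimal,
hypothetical rule created through the bonding port; the empty pattern and the
drop action are placeholders chosen only to keep the example short:

.. code-block:: c

    #include <rte_flow.h>

    /* Sketch: create one ingress drop rule on the bonding port; the bonding
     * PMD validates and creates the same rule on every child. */
    static struct rte_flow *
    create_flow_on_bond(uint16_t bonded_port_id, struct rte_flow_error *err)
    {
        const struct rte_flow_attr attr = { .ingress = 1 };
        const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(bonded_port_id, &attr, pattern, actions, err);
    }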
 
 Link Status Change Interrupts / Polling
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
 Link bonding devices support the registration of a link status change callback,
 using the ``rte_eth_dev_callback_register`` API, this will be called when the
 status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 children, the link status will change to up when one child
+becomes active or change to down when all children become inactive. There is no
+callback notification when a single child changes state and the previous
+conditions are not met. If a user wishes to monitor individual children then they
+must register callbacks with that child directly.
 
 The link bonding library also supports devices which do not implement link
 status change interrupts, this is achieved by polling the devices link status at
 a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a child to
 a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
 whether the device supports interrupts or whether the link status should be
 monitored by polling it.
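
A short sketch of both mechanisms follows; the callback body and the 100 ms
polling period are arbitrary example values:

.. code-block:: c

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* Sketch: log aggregate link state changes of the bonded port and poll
     * children without link status interrupts every 100 ms. */
    static int
    bond_lsc_cb(uint16_t port_id, enum rte_eth_event_type event,
                void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        printf("bonded port %u link state changed\n", port_id);
        return 0;
    }

    static int
    watch_bond_link(uint16_t bonded_port_id)
    {
        int ret;

        ret = rte_eth_dev_callback_register(bonded_port_id,
                RTE_ETH_EVENT_INTR_LSC, bond_lsc_cb, NULL);
        if (ret != 0)
            return ret;

        return rte_eth_bond_link_monitoring_set(bonded_port_id, 100);
    }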
@@ -233,30 +233,30 @@ Requirements / Limitations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as children to the same bonded device. The bonded device
+inherits these attributes from the first active child added to the bonded
+device and then all further children added to the bonded device must support
 these parameters.
 
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one child before the bonding device
 itself can be started.
 
 To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required, that all children should be RSS-capable and support, at least one
 common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible, when all child devices support the same key size.
 
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how children process packets, once a device is added
 to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the child.
 
 Like all other PMD, all functions exported by a PMD are lock-free functions
 that are assumed not to be invoked in parallel on different logical cores to
 work on the same target object.
 
 It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on child devices after they have been added to a bonded device since
+packets read directly from the child device will no longer be available to the
 bonded device to read.
 
 Configuration
@@ -265,25 +265,25 @@ Configuration
 Link bonding devices are created using the ``rte_eth_bond_create`` API
 which requires a unique device name, the bonding mode,
 and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its child devices,
+its primary child, a user defined MAC address and transmission policy to use if
 the device is in balance XOR mode.
 
-Slave Devices
+Child Devices
 ^^^^^^^^^^^^^
 
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` child devices
+of the same speed and duplex. Ethernet devices can be added as a child to a
+maximum of one bonded device. Child devices are reconfigured with the
 configuration of the bonded device on being added to a bonded device.
 
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the child device to its
+original value on removal of a child from it.
 
-Primary Slave
+Primary Child
 ^^^^^^^^^^^^^
 
-The primary slave is used to define the default port to use when a bonded
+The primary child is used to define the default port to use when a bonded
 device is in active backup mode. A different port will only be used if, and
 only if, the current primary port goes down. If the user does not specify a
 primary port it will default to being the first port added to the bonded device.
@@ -292,14 +292,14 @@ MAC Address
 ^^^^^^^^^^^
 
 The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all child devices depending on the
 operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC, all other children will retain their
+original MAC address. In modes 0, 2, 3 and 4 all child devices are configured with
 the bonded devices MAC address.
 
 If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary child's MAC address.
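
As an illustration only, the bonded port MAC can be overridden (and later
restored) with the existing ``rte_eth_bond_mac_address_set`` /
``rte_eth_bond_mac_address_reset`` calls; the address below is a placeholder
taken from the examples in this guide:

.. code-block:: c

    #include <rte_eth_bond.h>
    #include <rte_ether.h>

    /* Sketch: give the bonded port a user defined MAC address.  Calling
     * rte_eth_bond_mac_address_reset() would fall back to the primary
     * child's address again. */
    static int
    set_bond_mac(uint16_t bonded_port_id)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = { 0x00, 0x1e, 0x67, 0x1d, 0xfd, 0x1d }
        };

        return rte_eth_bond_mac_address_set(bonded_port_id, &mac);
    }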
 
 Balance XOR Transmit Policies
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
 *   **Layer 2:**   Ethernet MAC address based balancing is the default
     transmission policy for Balance XOR bonding mode. It uses a simple XOR
     calculation on the source MAC address and destination MAC address of the
-    packet and then calculate the modulus of this value to calculate the slave
+    packet and then calculate the modulus of this value to calculate the child
     device to transmit the packet on.
 
 *   **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
     combination of source/destination MAC addresses and the source/destination
-    IP addresses of the data packet to decide which slave port the packet will
+    IP addresses of the data packet to decide which child port the packet will
     be transmitted on.
 
 *   **Layer 3 + 4:**  IP Address & UDP Port based  balancing uses a combination
     of source/destination IP Address and the source/destination UDP ports of
-    the packet of the data packet to decide which slave port the packet will be
+    the packet of the data packet to decide which child port the packet will be
     transmitted on.
 
 All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
 which will be used must be setup using ``rte_eth_tx_queue_setup`` /
 ``rte_eth_rx_queue_setup``.
 
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Child devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_child_add`` / ``rte_eth_bond_child_remove``
+APIs but at least one child device must be added to the link bonding device
 before it can be started using ``rte_eth_dev_start``.
 
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its children: if all
+child device links are down, or if all children are removed from the link
 bonding device then the link status of the bonding device will go down.
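
The aggregate link state can be derived from the children as sketched below;
``rte_eth_bond_children_get`` follows the naming proposed in this series (the
in-tree API is ``rte_eth_bond_slaves_get``):

.. code-block:: c

    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* Sketch: count how many children of a bonded port report link up. */
    static unsigned int
    count_children_up(uint16_t bonded_port_id)
    {
        uint16_t children[RTE_MAX_ETHPORTS];
        unsigned int up = 0;
        int i, nb;

        nb = rte_eth_bond_children_get(bonded_port_id, children,
                                       RTE_DIM(children));
        for (i = 0; i < nb; i++) {
            struct rte_eth_link link;

            if (rte_eth_link_get_nowait(children[i], &link) == 0 &&
                link.link_status == RTE_ETH_LINK_UP)
                up++;
        }
        return up;
    }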
 
 It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
     where X can be any combination of numbers and/or letters,
     and the name is no greater than 32 characters long.
 
-*   A least one slave device is provided with for each bonded device definition.
+*   At least one child device is provided for each bonded device definition.
 
 *   The operation mode of the bonded device being created is provided.
 
@@ -404,20 +404,20 @@ The different options are:
 
         mode=2
 
-*   slave: Defines the PMD device which will be added as slave to the bonded
+*   child: Defines the PMD device which will be added as child to the bonded
     device. This option can be selected multiple times, for each device to be
-    added as a slave. Physical devices should be specified using their PCI
+    added as a child. Physical devices should be specified using their PCI
     address, in the format domain:bus:devid.function
 
 .. code-block:: console
 
-        slave=0000:0a:00.0,slave=0000:0a:00.1
+        child=0000:0a:00.0,child=0000:0a:00.1
 
-*   primary: Optional parameter which defines the primary slave port,
-    is used in active backup mode to select the primary slave for data TX/RX if
+*   primary: Optional parameter which defines the primary child port,
+    is used in active backup mode to select the primary child for data TX/RX if
     it is available. The primary port also is used to select the MAC address to
-    use when it is not defined by the user. This defaults to the first slave
-    added to the device if it is specified. The primary device must be a slave
+    use when it is not defined by the user. This defaults to the first child
+    added to the device if it is specified. The primary device must be a child
     of the bonded device.
 
 .. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
         socket_id=0
 
 *   mac: Optional parameter to select a MAC address for link bonding device,
-    this overrides the value of the primary slave device.
+    this overrides the value of the primary child device.
 
 .. code-block:: console
 
@@ -474,29 +474,29 @@ The different options are:
 Examples of Usage
 ^^^^^^^^^^^^^^^^^
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two children specified by their PCI address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,child=0000:0a:00.01,child=0000:04:00.00' -- --port-topology=chained
 
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two children specified by their PCI address and an overriding MAC address:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,child=0000:0a:00.01,child=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
 
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two children specified, and a primary child specified by their PCI addresses:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,child=0000:0a:00.01,child=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
 
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two children specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
 
 .. code-block:: console
 
-    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+    ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,child=0000:0a:00.01,child=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
 
 .. _bonding_testpmd_commands:
 
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
    testpmd> create bonded device 1 0
    created new bonded device (port X)
 
-add bonding slave
+add bonding child
 ~~~~~~~~~~~~~~~~~
 
 Adds Ethernet device to a Link Bonding device::
 
-   testpmd> add bonding slave (slave id) (port id)
+   testpmd> add bonding child (child id) (port id)
 
 For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
 
-   testpmd> add bonding slave 6 10
+   testpmd> add bonding child 6 10
 
 
-remove bonding slave
+remove bonding child
 ~~~~~~~~~~~~~~~~~~~~
 
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet child device from a Link Bonding device::
 
-   testpmd> remove bonding slave (slave id) (port id)
+   testpmd> remove bonding child (child id) (port id)
 
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove the Ethernet child device (port 6) from a Link Bonding device (port 10)::
 
-   testpmd> remove bonding slave 6 10
+   testpmd> remove bonding child 6 10
 
 set bonding mode
 ~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
 set bonding primary
 ~~~~~~~~~~~~~~~~~~~
 
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet child device as the primary device on a Link Bonding device::
 
-   testpmd> set bonding primary (slave id) (port id)
+   testpmd> set bonding primary (child id) (port id)
 
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet child device (port 6) as the primary port of a Link Bonding device (port 10)::
 
    testpmd> set bonding primary 6 10
 
@@ -590,7 +590,7 @@ set bonding mon_period
 
 Set the link status monitoring polling period in milliseconds for a bonding device.
 
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD child devices which do not support link status interrupts.
 When the mon_period is set to a value greater than 0 then all PMD's which do not support
 link status ISR will be queried every polling interval to check if their link status has changed::
 
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
 set bonding lacp dedicated_queue
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on a bonding device's children to handle LACP control plane traffic
 when in mode 4 (link-aggregation-802.3ad)::
 
    testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
    testpmd> show bonding config (port id)
 
 For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 child devices (1, 3, 4)
 in balance mode with a transmission policy of layer 2+3::
 
    testpmd> show bonding config 9
      - Dev basic:
         Bonding mode: BALANCE(2)
         Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
-        Slaves (3): [1 3 4]
-        Active Slaves (3): [1 3 4]
+        Children (3): [1 3 4]
+        Active Children (3): [1 3 4]
         Primary: [3]
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 8f2384785930..3e3fb772fd62 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1901,11 +1901,11 @@ In this case, identifier is ``net_pcap0``.
 This identifier format is the same as ``--vdev`` format of DPDK applications.
 
 For example, to re-attach a bonded port which has been previously detached,
-the mode and slave parameters must be given.
+the mode and child parameters must be given.
 
 .. code-block:: console
 
-   testpmd> port attach net_bond_0,mode=0,slave=1
+   testpmd> port attach net_bond_0,mode=0,child=1
    Attaching a new port...
    EAL: Initializing pmd_bond for net_bond_0
    EAL: Create bonded device net_bond_0 on port 0 in mode 0 on socket 0.
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada078..c93a2d94883f 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
 	cmdline_fixed_string_t set;
 	cmdline_fixed_string_t bonding;
 	cmdline_fixed_string_t primary;
-	portid_t slave_id;
+	portid_t child_id;
 	portid_t port_id;
 };
 
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
 	struct cmd_set_bonding_primary_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	portid_t parent_port_id = res->port_id;
+	portid_t child_port_id = res->child_id;
 
-	/* Set the primary slave for a bonded device. */
-	if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
-		fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
-			master_port_id);
+	/* Set the primary child for a bonded device. */
+	if (rte_eth_bond_primary_set(parent_port_id, child_port_id) != 0) {
+		fprintf(stderr, "\t Failed to set primary child for port = %d.\n",
+			parent_port_id);
 		return;
 	}
 	init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
 static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
 	TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
 		primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_child =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
-		slave_id, RTE_UINT16);
+		child_id, RTE_UINT16);
 static cmdline_parse_token_num_t cmd_setbonding_primary_port =
 	TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
 		port_id, RTE_UINT16);
 
 static cmdline_parse_inst_t cmd_set_bonding_primary = {
 	.f = cmd_set_bonding_primary_parsed,
-	.help_str = "set bonding primary <slave_id> <port_id>: "
-		"Set the primary slave for port_id",
+	.help_str = "set bonding primary <child_id> <port_id>: "
+		"Set the primary child for port_id",
 	.data = NULL,
 	.tokens = {
 		(void *)&cmd_setbonding_primary_set,
 		(void *)&cmd_setbonding_primary_bonding,
 		(void *)&cmd_setbonding_primary_primary,
-		(void *)&cmd_setbonding_primary_slave,
+		(void *)&cmd_setbonding_primary_child,
 		(void *)&cmd_setbonding_primary_port,
 		NULL
 	}
 };
 
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD CHILD *** */
+struct cmd_add_bonding_child_result {
 	cmdline_fixed_string_t add;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t child;
+	portid_t child_id;
 	portid_t port_id;
 };
 
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_child_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_add_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_add_bonding_child_result *res = parsed_result;
+	portid_t parent_port_id = res->port_id;
+	portid_t child_port_id = res->child_id;
 
-	/* add the slave for a bonded device. */
-	if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+	/* add the child for a bonded device. */
+	if (rte_eth_bond_child_add(parent_port_id, child_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to add slave %d to master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to add child %d to parent port = %d.\n",
+			child_port_id, parent_port_id);
 		return;
 	}
-	ports[master_port_id].update_conf = 1;
+	ports[parent_port_id].update_conf = 1;
 	init_port_config();
-	set_port_slave_flag(slave_port_id);
+	set_port_child_flag(child_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_child_add =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_child_result,
 		add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_child_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_child_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_child_child =
+	TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_child_result,
+		child, "child");
+static cmdline_parse_token_num_t cmd_addbonding_child_childid =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_child_result,
+		child_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_child_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_child_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
-	.f = cmd_add_bonding_slave_parsed,
-	.help_str = "add bonding slave <slave_id> <port_id>: "
-		"Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_child = {
+	.f = cmd_add_bonding_child_parsed,
+	.help_str = "add bonding child <child_id> <port_id>: "
+		"Add a child device to a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_addbonding_slave_add,
-		(void *)&cmd_addbonding_slave_bonding,
-		(void *)&cmd_addbonding_slave_slave,
-		(void *)&cmd_addbonding_slave_slaveid,
-		(void *)&cmd_addbonding_slave_port,
+		(void *)&cmd_addbonding_child_add,
+		(void *)&cmd_addbonding_child_bonding,
+		(void *)&cmd_addbonding_child_child,
+		(void *)&cmd_addbonding_child_childid,
+		(void *)&cmd_addbonding_child_port,
 		NULL
 	}
 };
 
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE CHILD *** */
+struct cmd_remove_bonding_child_result {
 	cmdline_fixed_string_t remove;
 	cmdline_fixed_string_t bonding;
-	cmdline_fixed_string_t slave;
-	portid_t slave_id;
+	cmdline_fixed_string_t child;
+	portid_t child_id;
 	portid_t port_id;
 };
 
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_child_parsed(void *parsed_result,
 	__rte_unused struct cmdline *cl, __rte_unused void *data)
 {
-	struct cmd_remove_bonding_slave_result *res = parsed_result;
-	portid_t master_port_id = res->port_id;
-	portid_t slave_port_id = res->slave_id;
+	struct cmd_remove_bonding_child_result *res = parsed_result;
+	portid_t parent_port_id = res->port_id;
+	portid_t child_port_id = res->child_id;
 
-	/* remove the slave from a bonded device. */
-	if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+	/* remove the child from a bonded device. */
+	if (rte_eth_bond_child_remove(parent_port_id, child_port_id) != 0) {
 		fprintf(stderr,
-			"\t Failed to remove slave %d from master port = %d.\n",
-			slave_port_id, master_port_id);
+			"\t Failed to remove child %d from parent port = %d.\n",
+			child_port_id, parent_port_id);
 		return;
 	}
 	init_port_config();
-	clear_port_slave_flag(slave_port_id);
+	clear_port_child_flag(child_port_id);
 }
 
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_child_remove =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_child_result,
 		remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_child_bonding =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_child_result,
 		bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
-	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
-		slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
-	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_child_child =
+	TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_child_result,
+		child, "child");
+static cmdline_parse_token_num_t cmd_removebonding_child_childid =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_child_result,
+		child_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_child_port =
+	TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_child_result,
 		port_id, RTE_UINT16);
 
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
-	.f = cmd_remove_bonding_slave_parsed,
-	.help_str = "remove bonding slave <slave_id> <port_id>: "
-		"Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_child = {
+	.f = cmd_remove_bonding_child_parsed,
+	.help_str = "remove bonding child <child_id> <port_id>: "
+		"Remove a child device from a bonded device",
 	.data = NULL,
 	.tokens = {
-		(void *)&cmd_removebonding_slave_remove,
-		(void *)&cmd_removebonding_slave_bonding,
-		(void *)&cmd_removebonding_slave_slave,
-		(void *)&cmd_removebonding_slave_slaveid,
-		(void *)&cmd_removebonding_slave_port,
+		(void *)&cmd_removebonding_child_remove,
+		(void *)&cmd_removebonding_child_bonding,
+		(void *)&cmd_removebonding_child_child,
+		(void *)&cmd_removebonding_child_childid,
+		(void *)&cmd_removebonding_child_port,
 		NULL
 	}
 };
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
 	},
 	{
 		&cmd_set_bonding_primary,
-		"set bonding primary (slave_id) (port_id)\n"
-		"	Set the primary slave for a bonded device.\n",
+		"set bonding primary (child_id) (port_id)\n"
+		"	Set the primary child for a bonded device.\n",
 	},
 	{
-		&cmd_add_bonding_slave,
-		"add bonding slave (slave_id) (port_id)\n"
-		"	Add a slave device to a bonded device.\n",
+		&cmd_add_bonding_child,
+		"add bonding child (child_id) (port_id)\n"
+		"	Add a child device to a bonded device.\n",
 	},
 	{
-		&cmd_remove_bonding_slave,
-		"remove bonding slave (slave_id) (port_id)\n"
-		"	Remove a slave device from a bonded device.\n",
+		&cmd_remove_bonding_child,
+		"remove bonding child (child_id) (port_id)\n"
+		"	Remove a child device from a bonded device.\n",
 	},
 	{
 		&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1d2..0e5ae90c2bbf 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
 #include "rte_eth_bond_8023ad.h"
 
 #define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS  100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS        3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS        1
+/** Maximum number of packets from one child queued in RX ring. */
+#define BOND_MODE_8023AX_CHILD_RX_PKTS        3
+/** Maximum number of LACP packets from one child queued in TX ring. */
+#define BOND_MODE_8023AX_CHILD_TX_PKTS        1
 /**
  * Timeouts definitions (5.4.4 in 802.1AX documentation).
  */
@@ -113,7 +113,7 @@ struct port {
 	enum rte_bond_8023ad_selection selected;
 
 	/** Indicates if either allmulti or promisc has been enforced on the
-	 * slave so that we can receive lacp packets
+	 * child so that we can receive lacp packets
 	 */
 #define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
 #define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
 	uint8_t external_sm;
 	struct rte_ether_addr mac_addr;
 
-	struct rte_eth_link slave_link;
-	/***< slave link properties */
+	struct rte_eth_link child_link;
+	/**< child link properties */
 
 	/**
 	 * Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
 /**
  * @internal
  *
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active children on bonded interface.
  *
  * @param dev Bonded interface
  * @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
 /**
  * @internal
  *
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and children.
  *
  * @param dev Bonded interface
  * @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
  *
  * Passes given slow packet to state machines management logic.
  * @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param child_id Child port id.
  * @param slot_pkt Slow packet.
  */
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				 uint16_t slave_id, struct rte_mbuf *pkt);
+				 uint16_t child_id, struct rte_mbuf *pkt);
 
 /**
  * @internal
  *
- * Appends given slave used slave
+ * Appends given child device
  *
  * @param dev       Bonded interface.
- * @param port_id   Slave port ID to be added
+ * @param port_id   Child port ID to be added
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_child(struct rte_eth_dev *dev, uint16_t port_id);
 
 /**
  * @internal
  *
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given child from 802.1AX mode.
  *
  * @param dev       Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param child_num Position of child in active_children array
  *
  * @return
  *  0 on success, negative value otherwise.
  */
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_child(struct rte_eth_dev *dev, uint16_t child_pos);
 
 /**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its children.
  * @param bond_dev Bonded device
  */
 void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port);
+		uint16_t child_port);
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t child_port);
 
 int
 bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4b3..d6cbf4293a45 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,13 +18,13 @@
 #include "eth_bond_8023ad_private.h"
 #include "rte_eth_bond_alb.h"
 
-#define PMD_BOND_SLAVE_PORT_KVARG			("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG		("primary")
-#define PMD_BOND_MODE_KVARG					("mode")
-#define PMD_BOND_AGG_MODE_KVARG				("agg_mode")
-#define PMD_BOND_XMIT_POLICY_KVARG			("xmit_policy")
-#define PMD_BOND_SOCKET_ID_KVARG			("socket_id")
-#define PMD_BOND_MAC_ADDR_KVARG				("mac")
+#define PMD_BOND_CHILD_PORT_KVARG		("child")
+#define PMD_BOND_PRIMARY_CHILD_KVARG		("primary")
+#define PMD_BOND_MODE_KVARG			("mode")
+#define PMD_BOND_AGG_MODE_KVARG			("agg_mode")
+#define PMD_BOND_XMIT_POLICY_KVARG		("xmit_policy")
+#define PMD_BOND_SOCKET_ID_KVARG		("socket_id")
+#define PMD_BOND_MAC_ADDR_KVARG			("mac")
 #define PMD_BOND_LSC_POLL_PERIOD_KVARG		("lsc_poll_period_ms")
 #define PMD_BOND_LINK_UP_PROP_DELAY_KVARG	("up_delay")
 #define PMD_BOND_LINK_DOWN_PROP_DELAY_KVARG	("down_delay")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
 /** Port Queue Mapping Structure */
 struct bond_rx_queue {
 	uint16_t queue_id;
-	/**< Next active_slave to poll */
-	uint16_t active_slave;
+	/**< Next active_child to poll */
+	uint16_t active_child;
 	/**< Queue Id */
 	struct bond_dev_private *dev_private;
 	/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
 	/**< Copy of TX configuration structure for queue */
 };
 
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
-	uint16_t slaves[RTE_MAX_ETHPORTS];	/**< Slave port id array */
-	uint16_t slave_count;				/**< Number of slaves */
+/** Bonded child devices structure */
+struct bond_ethdev_child_ports {
+	uint16_t children[RTE_MAX_ETHPORTS];	/**< Child port id array */
+	uint16_t child_count;			/**< Number of children */
 };
 
-struct bond_slave_details {
+struct bond_child_details {
 	uint16_t port_id;
 
 	uint8_t link_status_poll_enabled;
 	uint8_t link_status_wait_to_complete;
 	uint8_t last_link_status;
-	/**< Port Id of slave eth_dev */
+	/**< Port Id of child eth_dev */
 	struct rte_ether_addr persisted_mac_addr;
 
 	uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
 
 struct rte_flow {
 	TAILQ_ENTRY(rte_flow) next;
-	/* Slaves flows */
+	/* Children flows */
 	struct rte_flow *flows[RTE_MAX_ETHPORTS];
 	/* Flow description for synchronization */
 	struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
 };
 
 typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t child_count, uint16_t *children);
 
 /** Link Bonding PMD device private configuration Structure */
 struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
 	rte_spinlock_t lock;
 	rte_spinlock_t lsc_lock;
 
-	uint16_t primary_port;			/**< Primary Slave Port */
-	uint16_t current_primary_port;		/**< Primary Slave Port */
+	uint16_t primary_port;			/**< Primary Child Port */
+	uint16_t current_primary_port;		/**< Primary Child Port */
 	uint16_t user_defined_primary_port;
 	/**< Flag for whether primary port is user defined or not */
 
@@ -137,16 +137,16 @@ struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/
 
-	uint16_t active_slave_count;		/**< Number of active slaves */
-	uint16_t active_slaves[RTE_MAX_ETHPORTS];    /**< Active slave list */
+	uint16_t active_child_count;		/**< Number of active children */
+	uint16_t active_children[RTE_MAX_ETHPORTS];    /**< Active child list */
 
-	uint16_t slave_count;			/**< Number of bonded slaves */
-	struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
-	/**< Array of bonded slaves details */
+	uint16_t child_count;			/**< Number of bonded children */
+	struct bond_child_details children[RTE_MAX_ETHPORTS];
+	/**< Array of bonded children details */
 
 	struct mode8023ad_private mode4;
-	uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
-	/**< TLB active slaves send order */
+	uint16_t tlb_children_order[RTE_MAX_ETHPORTS];
+	/**< TLB active children send order */
 	struct mode_alb_private mode6;
 
 	uint64_t rx_offload_capa;       /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
 
 	struct rte_kvargs *kvlist;
-	uint8_t slave_update_idx;
+	uint8_t child_update_idx;
 
 	bool kvargs_processing_is_done;
 
@@ -191,19 +191,19 @@ struct bond_dev_private {
 extern const struct eth_dev_ops default_dev_ops;
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_parent_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
 int
 check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
 
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/* Search given child array to find position of given id.
+ * Return child pos or children_count if not found. */
 static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_child_by_id(uint16_t *children, uint16_t children_count, uint16_t child_id) {
 
 	uint16_t pos;
-	for (pos = 0; pos < slaves_count; pos++) {
-		if (slave_id == slaves[pos])
+	for (pos = 0; pos < children_count; pos++) {
+		if (child_id == children[pos])
 			break;
 	}
 
@@ -217,13 +217,13 @@ int
 valid_bonded_port_id(uint16_t port_id);
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_child_port_id(struct bond_dev_private *internals, uint16_t port_id);
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_child(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_child(struct rte_eth_dev *eth_dev, uint16_t port_id);
 
 int
 mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +234,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
 		struct rte_ether_addr *dst_mac_addr);
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_children_update(struct rte_eth_dev *bonded_eth_dev);
 
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+child_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t child_port_id);
 
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id);
+child_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t child_port_id);
 
 int
 bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+child_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *child_eth_dev);
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev);
+child_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *child_eth_dev);
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+child_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *child_eth_dev);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev);
+child_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *child_eth_dev);
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t child_count, uint16_t *children);
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t child_count, uint16_t *children);
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves);
+		uint16_t child_count, uint16_t *children);
 
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id);
+		uint16_t child_port_id);
 
 int
 bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		void *param, void *ret_param);
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_child_port_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_child_mode_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_child_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args);
 
 int
@@ -301,7 +301,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_child_port_id_kvarg(const char *key,
 		const char *value, void *extra_args);
 
 int
@@ -323,7 +323,7 @@ void
 bond_tlb_enable(struct bond_dev_private *internals);
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_child(struct bond_dev_private *internals);
 
 int
 bond_ethdev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5fe4..a74eab35dd08 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
  *
  * RTE Link Bonding Ethernet Device
  * Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * NICs into a single logical interface. The bonded device processes
  * these interfaces based on the mode of operation specified and supported.
  * This implementation supports 4 modes of operation round robin, active backup
  * balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,24 @@ extern "C" {
 #define BONDING_MODE_ROUND_ROBIN		(0)
 /**< Round Robin (Mode 0).
  * In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active devices of the bonded device in a round robin fashion. */
 #define BONDING_MODE_ACTIVE_BACKUP		(1)
 /**< Active Backup (Mode 1).
  * In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
+ * device until such point as the primary device is no longer available and then
+ * transmitted packets will be sent on the next available devices. The primary
+ * device can be defined by the user but defaults to the first active device
  * available if not specified. */
 #define BONDING_MODE_BALANCE			(2)
 /**< Balance (Mode 2).
  * In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * devices using one of three available transmit policies - l2, l2+3 or l3+4.
  * See BALANCE_XMIT_POLICY macros definitions for further details on transmit
  * policies. */
 #define BONDING_MODE_BROADCAST			(3)
 /**< Broadcast (Mode 3).
  * In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active devices of the bonded device. */
 #define BONDING_MODE_8023AD				(4)
 /**< 802.3AD (Mode 4).
  *
@@ -62,22 +62,22 @@ extern "C" {
  * be handled with the expected latency and this may cause the link status to be
  * incorrectly marked as down or failure to correctly negotiate with peers.
  * - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
+ * to rx_burst should be at least 2 times the device count.
  *
  */
 #define BONDING_MODE_TLB	(5)
 /**< Adaptive TLB (Mode 5)
  * This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
+ * changes the transmitting device, according to the computed load. Statistics
  * are collected in 100ms intervals and scheduled every 10ms */
 #define BONDING_MODE_ALB	(6)
 /**< Adaptive Load Balancing (Mode 6)
  * This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
  * bonding driver intercepts ARP replies send by local system and overwrites its
  * source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different device interfaces. When local system sends ARP request, it saves IP
  * information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of the device MACs is assigned and an ARP reply is sent to that peer.
  */
 
 /* Balance Mode Transmit Policies */
@@ -113,28 +113,42 @@ int
 rte_eth_bond_free(const char *name);
 
 /**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a child to the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param child_port_id		Port ID of child device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_child_add(uint16_t bonded_port_id, uint16_t child_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t child_port_id)
+{
+	return rte_eth_bond_child_add(bonded_port_id, child_port_id);
+}
 
 /**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a child device from the bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param child_port_id		Port ID of child device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_child_remove(uint16_t bonded_port_id, uint16_t child_port_id);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t child_port_id)
+{
+	return rte_eth_bond_child_remove(bonded_port_id, child_port_id);
+}
 
 /**
  * Set link bonding mode of bonded device
@@ -160,65 +174,73 @@ int
 rte_eth_bond_mode_get(uint16_t bonded_port_id);
 
 /**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set child rte_eth_dev as primary of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param slave_port_id		Port ID of slave device.
+ * @param child_port_id		Port ID of child device.
  *
  * @return
  *	0 on success, negative value otherwise
  */
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t child_port_id);
 
 /**
- * Get primary slave of bonded device
+ * Get primary child of bonded device
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
  * @return
- *	Port Id of primary slave on success, -1 on failure
+ *	Port Id of primary child on success, -1 on failure
  */
 int
 rte_eth_bond_primary_get(uint16_t bonded_port_id);
 
 /**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with list of the children of the bonded device
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param children			Array to be populated with the current children
+ * @param len				Length of children array
  *
  * @return
- *	Number of slaves associated with bonded device on success,
+ *	Number of children associated with bonded device on success,
  *	negative value otherwise
  */
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
-			uint16_t len);
+rte_eth_bond_children_get(uint16_t bonded_port_id, uint16_t children[],
+			  uint16_t len);
+
+__rte_deprecated
+static inline int
+rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t children[],
+			uint16_t len)
+{
+	return rte_eth_bond_children_get(bonded_port_id, children, len);
+}
 
 /**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with list of the active children port id's of the bonded
  * device.
  *
  * @param bonded_port_id	Port ID of bonded eth_dev to interrogate
- * @param slaves			Array to be populated with the current active slaves
- * @param len				Length of slaves array
+ * @param children			Array to be populated with the current active children
+ * @param len				Length of children array
  *
  * @return
- *	Number of active slaves associated with bonded device on success,
+ *	Number of active children associated with bonded device on success,
  *	negative value otherwise
  */
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_children_get(uint16_t bonded_port_id, uint16_t children[],
 				uint16_t len);
 
 /**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its children.
  *
  * @param bonded_port_id	Port ID of bonded device.
- * @param mac_addr			MAC Address to use on bonded device overriding
- *							slaves MAC addresses
+ * @param mac_addr		MAC Address to use on bonded device overriding
+ *				children MAC addresses
  *
  * @return
  *	0 on success, negative value otherwise
@@ -228,8 +250,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 		struct rte_ether_addr *mac_addr);
 
 /**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary child on bonded device and its
+ * children.
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
@@ -266,7 +288,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
 
 /**
  * Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * child devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  * @param internal_ms		Monitoring interval in milliseconds
@@ -280,7 +302,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
 
 /**
  * Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of child devices
  *
  * @param bonded_port_id	Port ID of bonded device.
  *
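
A minimal sketch of how an application might move to the renamed API: the deprecated static inline wrappers above keep existing rte_eth_bond_slave_* call sites compiling (with a deprecation warning) while forwarding to the new names with identical arguments. The port ids, device name and error handling below are illustrative only, and rte_eth_bond_create() is assumed to keep its current signature.

	#include <rte_eth_bond.h>

	static int
	setup_active_backup_bond(uint16_t child0, uint16_t child1)
	{
		/* Create an active-backup bonded device on socket 0 (name is arbitrary). */
		int bond_port = rte_eth_bond_create("net_bonding0",
				BONDING_MODE_ACTIVE_BACKUP, 0);
		if (bond_port < 0)
			return bond_port;

		/* Formerly rte_eth_bond_slave_add(); the wrapper still builds but warns. */
		if (rte_eth_bond_child_add(bond_port, child0) != 0 ||
				rte_eth_bond_child_add(bond_port, child1) != 0)
			return -1;

		/* Prefer child0 as the primary device for active-backup mode. */
		return rte_eth_bond_primary_set(bond_port, child0);
	}
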
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2caf1..32ac1f47ee6e 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
 #define MODE4_DEBUG(fmt, ...)				\
 	rte_log(RTE_LOG_DEBUG, bond_logtype,		\
 		"%6u [Port %u: %s] " fmt,		\
-		bond_dbg_get_time_diff_ms(), slave_id,	\
+		bond_dbg_get_time_diff_ms(), child_id,	\
 		__func__, ##__VA_ARGS__)
 
 static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
 }
 
 static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t child_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[child_id];
 	uint8_t warnings;
 
 	do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
 
 	if (warnings & WRN_RX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+			     "Child %u: failed to enqueue LACP packet into RX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will notwork correctly",
-			     slave_id);
+			     child_id);
 	}
 
 	if (warnings & WRN_TX_QUEUE_FULL) {
 		RTE_BOND_LOG(DEBUG,
-			     "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+			     "Child %u: failed to enqueue LACP packet into TX ring.\n"
 			     "Receive and transmit functions must be invoked on bonded"
 			     "interface at least 10 times per second or LACP will not work correctly",
-			     slave_id);
+			     child_id);
 	}
 
 	if (warnings & WRN_RX_MARKER_TO_FAST)
-		RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Child %u: marker to early - ignoring.",
+			     child_id);
 
 	if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
 		RTE_BOND_LOG(INFO,
-			"Slave %u: ignoring unknown slow protocol frame type",
-			     slave_id);
+			"Child %u: ignoring unknown slow protocol frame type",
+			     child_id);
 	}
 
 	if (warnings & WRN_UNKNOWN_MARKER_TYPE)
-		RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
-			     slave_id);
+		RTE_BOND_LOG(INFO, "Child %u: ignoring unknown marker type",
+			     child_id);
 
 	if (warnings & WRN_NOT_LACP_CAPABLE)
-		MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+		MODE4_DEBUG("Port %u is not LACP capable!\n", child_id);
 }
 
 static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
  * @param port			Port on which LACPDU was received.
  */
 static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t child_id,
 		struct lacpdu *lacp)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[child_id];
 	uint64_t timeout;
 
 	if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
  * @param port			Port to handle state machine.
  */
 static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t child_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[child_id];
 	/* Calculate if either site is LACP enabled */
 	uint64_t timeout;
 	uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port			Port to handle state machine.
  */
 static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t child_id)
 {
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[child_id];
 
 	/* Save current state for later use */
 	const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing started.",
-					internals->port_id, slave_id);
+					"Bond %u: child id %u distributing started.",
+					internals->port_id, child_id);
 			}
 		} else {
 			if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
 				SM_FLAG_SET(port, NTT);
 				MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
 				RTE_BOND_LOG(INFO,
-					"Bond %u: slave id %u distributing stopped.",
-					internals->port_id, slave_id);
+					"Bond %u: child id %u distributing stopped.",
+					internals->port_id, child_id);
 			}
 		}
 	}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
  * @param port
  */
 static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t child_id)
 {
-	struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *agg, *port = &bond_mode_8023ad_ports[child_id];
 
 	struct rte_mbuf *lacp_pkt = NULL;
 	struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 
 	/* Source and destination MAC */
 	rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
-	rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+	rte_eth_macaddr_get(child_id, &hdr->eth_hdr.src_addr);
 	hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
 
 	lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
 			return;
 		}
 	} else {
-		uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+		uint16_t pkts_sent = rte_eth_tx_prepare(child_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, 1);
-		pkts_sent = rte_eth_tx_burst(slave_id,
+		pkts_sent = rte_eth_tx_burst(child_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				&lacp_pkt, pkts_sent);
 		if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
  * @param port_pos			Port to assign.
  */
 static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t child_id)
 {
 	struct port *agg, *port;
-	uint16_t slaves_count, new_agg_id, i, j = 0;
-	uint16_t *slaves;
+	uint16_t children_count, new_agg_id, i, j = 0;
+	uint16_t *children;
 	uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
 	uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
-	uint16_t default_slave = 0;
+	uint16_t default_child = 0;
 	struct rte_eth_link link_info;
 	uint16_t agg_new_idx = 0;
 	int ret;
 
-	slaves = internals->active_slaves;
-	slaves_count = internals->active_slave_count;
-	port = &bond_mode_8023ad_ports[slave_id];
+	children = internals->active_children;
+	children_count = internals->active_child_count;
+	port = &bond_mode_8023ad_ports[child_id];
 
 	/* Search for aggregator suitable for this port */
-	for (i = 0; i < slaves_count; ++i) {
-		agg = &bond_mode_8023ad_ports[slaves[i]];
+	for (i = 0; i < children_count; ++i) {
+		agg = &bond_mode_8023ad_ports[children[i]];
 		/* Skip ports that are not aggregators */
-		if (agg->aggregator_port_id != slaves[i])
+		if (agg->aggregator_port_id != children[i])
 			continue;
 
-		ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+		ret = rte_eth_link_get_nowait(children[i], &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slaves[i], rte_strerror(-ret));
+				"Child (port %u) link get failed: %s\n",
+				children[i], rte_strerror(-ret));
 			continue;
 		}
 		agg_count[i] += 1;
 		agg_bandwidth[i] += link_info.link_speed;
 
-		/* Actors system ID is not checked since all slave device have the same
+		/* Actors system ID is not checked since all child devices have the same
 		 * ID (MAC address). */
 		if ((agg->actor.key == port->actor.key &&
 			agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
 
 			if (j == 0)
-				default_slave = i;
+				default_child = i;
 			j++;
 		}
 	}
 
 	switch (internals->mode4.agg_selection) {
 	case AGG_COUNT:
-		agg_new_idx = max_index(agg_count, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_count, children_count);
+		new_agg_id = children[agg_new_idx];
 		break;
 	case AGG_BANDWIDTH:
-		agg_new_idx = max_index(agg_bandwidth, slaves_count);
-		new_agg_id = slaves[agg_new_idx];
+		agg_new_idx = max_index(agg_bandwidth, children_count);
+		new_agg_id = children[agg_new_idx];
 		break;
 	case AGG_STABLE:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_child == children_count)
+			new_agg_id = children[child_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = children[default_child];
 		break;
 	default:
-		if (default_slave == slaves_count)
-			new_agg_id = slaves[slave_id];
+		if (default_child == children_count)
+			new_agg_id = children[child_id];
 		else
-			new_agg_id = slaves[default_slave];
+			new_agg_id = children[default_child];
 		break;
 	}
 
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
 		MODE4_DEBUG("-> SELECTED: ID=%3u\n"
 			"\t%s aggregator ID=%3u\n",
 			port->aggregator_port_id,
-			port->aggregator_port_id == slave_id ?
+			port->aggregator_port_id == child_id ?
 				"aggregator not found, using default" : "aggregator found",
 			port->aggregator_port_id);
 	}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
 }
 
 static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t child_id,
 		struct rte_mbuf *lacp_pkt) {
 	struct lacpdu_header *lacp;
 	struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
 
 		partner = &lacp->lacpdu.partner;
-		port = &bond_mode_8023ad_ports[slave_id];
+		port = &bond_mode_8023ad_ports[child_id];
 		agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
 
 		if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 			/* This LACP frame is sending to the bonding port
 			 * so pass it to rx_machine.
 			 */
-			rx_machine(internals, slave_id, &lacp->lacpdu);
+			rx_machine(internals, child_id, &lacp->lacpdu);
 		} else {
 			char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
 			char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		}
 		rte_pktmbuf_free(lacp_pkt);
 	} else
-		rx_machine(internals, slave_id, NULL);
+		rx_machine(internals, child_id, NULL);
 }
 
 static void
 bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
-			uint16_t slave_id)
+			uint16_t child_id)
 {
 #define DEDICATED_QUEUE_BURST_SIZE 32
 	struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
-	uint16_t rx_count = rte_eth_rx_burst(slave_id,
+	uint16_t rx_count = rte_eth_rx_burst(child_id,
 				internals->mode4.dedicated_queues.rx_qid,
 				lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
 
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
 		uint16_t i;
 
 		for (i = 0; i < rx_count; i++)
-			bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+			bond_mode_8023ad_handle_slow_pkt(internals, child_id,
 					lacp_pkt[i]);
 	} else {
-		rx_machine_update(internals, slave_id, NULL);
+		rx_machine_update(internals, child_id, NULL);
 	}
 }
 
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	struct port *port;
 	struct rte_eth_link link_info;
-	struct rte_ether_addr slave_addr;
+	struct rte_ether_addr child_addr;
 	struct rte_mbuf *lacp_pkt = NULL;
-	uint16_t slave_id;
+	uint16_t child_id;
 	uint16_t i;
 
 
 	/* Update link status on each port */
-	for (i = 0; i < internals->active_slave_count; i++) {
+	for (i = 0; i < internals->active_child_count; i++) {
 		uint16_t key;
 		int ret;
 
-		slave_id = internals->active_slaves[i];
-		ret = rte_eth_link_get_nowait(slave_id, &link_info);
+		child_id = internals->active_children[i];
+		ret = rte_eth_link_get_nowait(child_id, &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_id, rte_strerror(-ret));
+				"Child (port %u) link get failed: %s\n",
+				child_id, rte_strerror(-ret));
 		}
 
 		if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			key = 0;
 		}
 
-		rte_eth_macaddr_get(slave_id, &slave_addr);
-		port = &bond_mode_8023ad_ports[slave_id];
+		rte_eth_macaddr_get(child_id, &child_addr);
+		port = &bond_mode_8023ad_ports[child_id];
 
 		key = rte_cpu_to_be_16(key);
 		if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			SM_FLAG_SET(port, NTT);
 		}
 
-		if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
-			rte_ether_addr_copy(&slave_addr, &port->actor.system);
-			if (port->aggregator_port_id == slave_id)
+		if (!rte_is_same_ether_addr(&port->actor.system, &child_addr)) {
+			rte_ether_addr_copy(&child_addr, &port->actor.system);
+			if (port->aggregator_port_id == child_id)
 				SM_FLAG_SET(port, NTT);
 		}
 	}
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_child_count; i++) {
+		child_id = internals->active_children[i];
+		port = &bond_mode_8023ad_ports[child_id];
 
 		if ((port->actor.key &
 				rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
 			if (retval != 0)
 				lacp_pkt = NULL;
 
-			rx_machine_update(internals, slave_id, lacp_pkt);
+			rx_machine_update(internals, child_id, lacp_pkt);
 		} else {
 			bond_mode_8023ad_dedicated_rxq_process(internals,
-					slave_id);
+					child_id);
 		}
 
-		periodic_machine(internals, slave_id);
-		mux_machine(internals, slave_id);
-		tx_machine(internals, slave_id);
-		selection_logic(internals, slave_id);
+		periodic_machine(internals, child_id);
+		mux_machine(internals, child_id);
+		tx_machine(internals, child_id);
+		selection_logic(internals, child_id);
 
 		SM_FLAG_CLR(port, BEGIN);
-		show_warnings(slave_id);
+		show_warnings(child_id);
 	}
 
 	rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
 }
 
 static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t child_id)
 {
 	int ret;
 
-	ret = rte_eth_allmulticast_enable(slave_id);
+	ret = rte_eth_allmulticast_enable(child_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable allmulti mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			child_id, rte_strerror(-ret));
 	}
-	if (rte_eth_allmulticast_get(slave_id)) {
+	if (rte_eth_allmulticast_get(child_id)) {
 		RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     child_id);
+		bond_mode_8023ad_ports[child_id].forced_rx_flags =
 				BOND_8023AD_FORCED_ALLMULTI;
 		return 0;
 	}
 
-	ret = rte_eth_promiscuous_enable(slave_id);
+	ret = rte_eth_promiscuous_enable(child_id);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"failed to enable promiscuous mode for port %u: %s",
-			slave_id, rte_strerror(-ret));
+			child_id, rte_strerror(-ret));
 	}
-	if (rte_eth_promiscuous_get(slave_id)) {
+	if (rte_eth_promiscuous_get(child_id)) {
 		RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
-			     slave_id);
-		bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+			     child_id);
+		bond_mode_8023ad_ports[child_id].forced_rx_flags =
 				BOND_8023AD_FORCED_PROMISC;
 		return 0;
 	}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
 }
 
 static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t child_id)
 {
 	int ret;
 
-	switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+	switch (bond_mode_8023ad_ports[child_id].forced_rx_flags) {
 	case BOND_8023AD_FORCED_ALLMULTI:
-		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
-		ret = rte_eth_allmulticast_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", child_id);
+		ret = rte_eth_allmulticast_disable(child_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable allmulti mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				child_id, rte_strerror(-ret));
 		break;
 
 	case BOND_8023AD_FORCED_PROMISC:
-		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
-		ret = rte_eth_promiscuous_disable(slave_id);
+		RTE_BOND_LOG(DEBUG, "unset promisc for port %u", child_id);
+		ret = rte_eth_promiscuous_disable(child_id);
 		if (ret != 0)
 			RTE_BOND_LOG(ERR,
 				"failed to disable promiscuous mode for port %u: %s",
-				slave_id, rte_strerror(-ret));
+				child_id, rte_strerror(-ret));
 		break;
 
 	default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
 }
 
 void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
-				uint16_t slave_id)
+bond_mode_8023ad_activate_child(struct rte_eth_dev *bond_dev,
+				uint16_t child_id)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[child_id];
 	struct port_params initial = {
 			.system = { { 0 } },
 			.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	struct bond_tx_queue *bd_tx_q;
 	uint16_t q_id;
 
-	/* Given slave mus not be in active list */
-	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
-	internals->active_slave_count, slave_id) == internals->active_slave_count);
+	/* Given child must not be in active list */
+	RTE_ASSERT(find_child_by_id(internals->active_children,
+	internals->active_child_count, child_id) == internals->active_child_count);
 	RTE_SET_USED(internals); /* used only for assert when enabled */
 
 	memcpy(&port->actor, &initial, sizeof(struct port_params));
 	/* Standard requires that port ID must be grater than 0.
 	 * Add 1 do get corresponding port_number */
-	port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+	port->actor.port_number = rte_cpu_to_be_16(child_id + 1);
 
 	memcpy(&port->partner, &initial, sizeof(struct port_params));
 	memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	port->sm_flags = SM_FLAGS_BEGIN;
 
 	/* use this port as aggregator */
-	port->aggregator_port_id = slave_id;
+	port->aggregator_port_id = child_id;
 
-	if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
-		RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
-			     slave_id);
+	if (bond_mode_8023ad_register_lacp_mac(child_id) < 0) {
+		RTE_BOND_LOG(WARNING, "child %u is most likely broken and won't receive LACP packets",
+			     child_id);
 	}
 
 	timer_cancel(&port->warning_timer);
@@ -1087,7 +1087,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	RTE_ASSERT(port->rx_ring == NULL);
 	RTE_ASSERT(port->tx_ring == NULL);
 
-	socket_id = rte_eth_dev_socket_id(slave_id);
+	socket_id = rte_eth_dev_socket_id(child_id);
 	if (socket_id == -1)
 		socket_id = rte_socket_id();
 
@@ -1095,14 +1095,14 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 				RTE_PKTMBUF_HEADROOM;
 
 	/* The size of the mempool should be at least:
-	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
-	total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+	 * the sum of the TX descriptors + BOND_MODE_8023AX_CHILD_TX_PKTS */
+	total_tx_desc = BOND_MODE_8023AX_CHILD_TX_PKTS;
 	for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
 		total_tx_desc += bd_tx_q->nb_tx_desc;
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "child_port%u_pool", child_id);
 	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
 		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
 			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1111,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	/* Any memory allocation failure in initialization is critical because
 	 * resources can't be free, so reinitialization is impossible. */
 	if (port->mbuf_pool == NULL) {
-		rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-			slave_id, mem_name, rte_strerror(rte_errno));
+		rte_panic("Child %u: Failed to create memory pool '%s': %s\n",
+			child_id, mem_name, rte_strerror(rte_errno));
 	}
 
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "child_%u_rx", child_id);
 	port->rx_ring = rte_ring_create(mem_name,
-			rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_CHILD_RX_PKTS), socket_id, 0);
 
 	if (port->rx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+		rte_panic("Child %u: Failed to create rx ring '%s': %s\n", child_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 
 	/* TX ring is at least one pkt longer to make room for marker packet. */
-	snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+	snprintf(mem_name, RTE_DIM(mem_name), "child_%u_tx", child_id);
 	port->tx_ring = rte_ring_create(mem_name,
-			rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+			rte_align32pow2(BOND_MODE_8023AX_CHILD_TX_PKTS + 1), socket_id, 0);
 
 	if (port->tx_ring == NULL) {
-		rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+		rte_panic("Child %u: Failed to create tx ring '%s': %s\n", child_id,
 			mem_name, rte_strerror(rte_errno));
 	}
 }
 
 int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
-		uint16_t slave_id)
+bond_mode_8023ad_deactivate_child(struct rte_eth_dev *bond_dev __rte_unused,
+		uint16_t child_id)
 {
 	void *pkt = NULL;
 	struct port *port = NULL;
 	uint8_t old_partner_state;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[child_id];
 
 	ACTOR_STATE_CLR(port, AGGREGATION);
 	port->selected = UNSELECTED;
@@ -1151,7 +1151,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
 	old_partner_state = port->partner_state;
 	record_default(port);
 
-	bond_mode_8023ad_unregister_lacp_mac(slave_id);
+	bond_mode_8023ad_unregister_lacp_mac(child_id);
 
 	/* If partner timeout state changes then disable timer */
 	if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1174,30 @@ void
 bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
 {
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
-	struct rte_ether_addr slave_addr;
-	struct port *slave, *agg_slave;
-	uint16_t slave_id, i, j;
+	struct rte_ether_addr child_addr;
+	struct port *child, *agg_child;
+	uint16_t child_id, i, j;
 
 	bond_mode_8023ad_stop(bond_dev);
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		slave = &bond_mode_8023ad_ports[slave_id];
-		rte_eth_macaddr_get(slave_id, &slave_addr);
+	for (i = 0; i < internals->active_child_count; i++) {
+		child_id = internals->active_children[i];
+		child = &bond_mode_8023ad_ports[child_id];
+		rte_eth_macaddr_get(child_id, &child_addr);
 
-		if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+		if (rte_is_same_ether_addr(&child_addr, &child->actor.system))
 			continue;
 
-		rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+		rte_ether_addr_copy(&child_addr, &child->actor.system);
 		/* Do nothing if this port is not an aggregator. In other case
 		 * Set NTT flag on every port that use this aggregator. */
-		if (slave->aggregator_port_id != slave_id)
+		if (child->aggregator_port_id != child_id)
 			continue;
 
-		for (j = 0; j < internals->active_slave_count; j++) {
-			agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
-			if (agg_slave->aggregator_port_id == slave_id)
-				SM_FLAG_SET(agg_slave, NTT);
+		for (j = 0; j < internals->active_child_count; j++) {
+			agg_child = &bond_mode_8023ad_ports[internals->active_children[j]];
+			if (agg_child->aggregator_port_id == child_id)
+				SM_FLAG_SET(agg_child, NTT);
 		}
 	}
 
@@ -1288,9 +1288,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 	uint16_t i;
 
-	for (i = 0; i < internals->active_slave_count; i++)
-		bond_mode_8023ad_activate_slave(bond_dev,
-				internals->active_slaves[i]);
+	for (i = 0; i < internals->active_child_count; i++)
+		bond_mode_8023ad_activate_child(bond_dev,
+				internals->active_children[i]);
 
 	return 0;
 }
@@ -1326,10 +1326,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
 
 void
 bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
-				  uint16_t slave_id, struct rte_mbuf *pkt)
+				  uint16_t child_id, struct rte_mbuf *pkt)
 {
 	struct mode8023ad_private *mode4 = &internals->mode4;
-	struct port *port = &bond_mode_8023ad_ports[slave_id];
+	struct port *port = &bond_mode_8023ad_ports[child_id];
 	struct marker_header *m_hdr;
 	uint64_t marker_timer, old_marker_timer;
 	int retval;
@@ -1362,7 +1362,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 		} while (unlikely(retval == 0));
 
 		m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
-		rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+		rte_eth_macaddr_get(child_id, &m_hdr->eth_hdr.src_addr);
 
 		if (internals->mode4.dedicated_queues.enabled == 0) {
 			if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1373,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 			}
 		} else {
 			/* Send packet directly to the slow queue */
-			uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+			uint16_t tx_count = rte_eth_tx_prepare(child_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, 1);
-			tx_count = rte_eth_tx_burst(slave_id,
+			tx_count = rte_eth_tx_burst(child_id,
 					internals->mode4.dedicated_queues.tx_qid,
 					&pkt, tx_count);
 			if (tx_count != 1) {
@@ -1394,7 +1394,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
 				goto free_out;
 			}
 		} else
-			rx_machine_update(internals, slave_id, pkt);
+			rx_machine_update(internals, child_id, pkt);
 	} else {
 		wrn = WRN_UNKNOWN_SLOW_TYPE;
 		goto free_out;
@@ -1477,7 +1477,7 @@ bond_8023ad_setup_validate(uint16_t port_id,
 		return -EINVAL;
 
 	if (conf != NULL) {
-		/* Basic sanity check */
+		/* Check configuration */
 		if (conf->slow_periodic_ms == 0 ||
 				conf->fast_periodic_ms >= conf->slow_periodic_ms ||
 				conf->long_timeout_ms == 0 ||
@@ -1517,8 +1517,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 
 
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_child_info(uint16_t port_id, uint16_t child_id,
+		struct rte_eth_bond_8023ad_child_info *info)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1531,12 +1531,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 	bond_dev = &rte_eth_devices[port_id];
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_child_by_id(internals->active_children,
+			internals->active_child_count, child_id) ==
+				internals->active_child_count)
 		return -EINVAL;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[child_id];
 	info->selected = port->selected;
 
 	info->actor_state = port->actor_state;
@@ -1550,7 +1550,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
 }
 
 static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t child_id)
 {
 	struct rte_eth_dev *bond_dev;
 	struct bond_dev_private *internals;
@@ -1565,9 +1565,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 		return -EINVAL;
 
 	internals = bond_dev->data->dev_private;
-	if (find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, slave_id) ==
-				internals->active_slave_count)
+	if (find_child_by_id(internals->active_children,
+			internals->active_child_count, child_id) ==
+				internals->active_child_count)
 		return -EINVAL;
 
 	mode4 = &internals->mode4;
@@ -1578,17 +1578,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
 }
 
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t child_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, child_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[child_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1599,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t child_id,
 				int enabled)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, child_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[child_id];
 
 	if (enabled)
 		ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1620,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
 }
 
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t child_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, child_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[child_id];
 	return ACTOR_STATE(port, DISTRIBUTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t child_id)
 {
 	struct port *port;
 	int err;
 
-	err = bond_8023ad_ext_validate(port_id, slave_id);
+	err = bond_8023ad_ext_validate(port_id, child_id);
 	if (err != 0)
 		return err;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[child_id];
 	return ACTOR_STATE(port, COLLECTING);
 }
 
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t child_id,
 		struct rte_mbuf *lacp_pkt)
 {
 	struct port *port;
 	int res;
 
-	res = bond_8023ad_ext_validate(port_id, slave_id);
+	res = bond_8023ad_ext_validate(port_id, child_id);
 	if (res != 0)
 		return res;
 
-	port = &bond_mode_8023ad_ports[slave_id];
+	port = &bond_mode_8023ad_ports[child_id];
 
 	if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
 		return -EINVAL;
@@ -1683,11 +1683,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 	struct mode8023ad_private *mode4 = &internals->mode4;
 	struct port *port;
 	void *pkt = NULL;
-	uint16_t i, slave_id;
+	uint16_t i, child_id;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		port = &bond_mode_8023ad_ports[slave_id];
+	for (i = 0; i < internals->active_child_count; i++) {
+		child_id = internals->active_children[i];
+		port = &bond_mode_8023ad_ports[child_id];
 
 		if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
 			struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1700,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
 			/* This is LACP frame so pass it to rx callback.
 			 * Callback is responsible for freeing mbuf.
 			 */
-			mode4->slowrx_cb(slave_id, lacp_pkt);
+			mode4->slowrx_cb(child_id, lacp_pkt);
 		}
 	}
 
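[Editor's note: for reference, a minimal sketch — not part of the patch — of an external slow-Rx callback matching the rte_eth_bond_8023ad_ext_slowrx_fn typedef renamed below. The periodic handler above hands each LACPDU to this callback, which owns the mbuf and must free it; the actual forwarding step is left as a placeholder.]

#include <rte_mbuf.h>
#include <rte_eth_bond_8023ad.h>

/* Called from the bonding PMD with one LACPDU per active child. */
static void
example_slowrx_cb(uint16_t child_id, struct rte_mbuf *lacp_pkt)
{
	(void)child_id;
	/* Hand the frame to an external 802.3ad state machine here. */

	/* The callback owns the mbuf and is responsible for freeing it. */
	rte_pktmbuf_free(lacp_pkt);
}
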
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 7ad8d6d00bd5..d66817a199fe 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
 #define MARKER_TLV_TYPE_INFO                0x01
 #define MARKER_TLV_TYPE_RESP                0x02
 
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t child_id,
 						  struct rte_mbuf *lacp_pkt);
 
 enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
 	uint16_t system_priority;
 	/**< System priority (unused in current implementation) */
 	struct rte_ether_addr system;
-	/**< System ID - Slave MAC address, same as bonding MAC address */
+	/**< System ID - Child MAC address, same as bonding MAC address */
 	uint16_t key;
 	/**< Speed information (implementation dependent) and duplex. */
 	uint16_t port_priority;
 	/**< Priority of this (unused in current implementation) */
 	uint16_t port_number;
-	/**< Port number. It corresponds to slave port id. */
+	/**< Port number. It corresponds to child port id. */
 } __rte_packed __rte_aligned(2);
 
 struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
 	enum rte_bond_8023ad_agg_selection agg_selection;
 };
 
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_child_info {
 	enum rte_bond_8023ad_selection selected;
 	uint8_t actor_state;
 	struct port_params actor;
@@ -184,104 +184,104 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
 /**
  * @internal
  *
- * Function returns current state of given slave device.
+ * Function returns current state of given child device.
  *
- * @param slave_id  Port id of valid slave.
+ * @param child_id  Port id of valid child.
  * @param conf		buffer for configuration
  * @return
  *   0 - if ok
- *   -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ *   -EINVAL if conf is NULL or child id is invalid (not a child of given
  *       bonded device or is not inactive).
  */
 int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
-		struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_child_info(uint16_t port_id, uint16_t child_id,
+		struct rte_eth_bond_8023ad_child_info *conf);
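[Editor's note: a minimal usage sketch for the renamed query, assuming bond_port is a started bonding port and child_port one of its active children; this is a fragment with error handling trimmed.]

	struct rte_eth_bond_8023ad_child_info info;

	if (rte_eth_bond_8023ad_child_info(bond_port, child_port, &info) == 0)
		printf("child %u: selected=%d actor_state=0x%02x\n",
		       child_port, info.selected, info.actor_state);
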
 
 #ifdef __cplusplus
 }
 #endif
 
 /**
- * Configure a slave port to start collecting.
+ * Configure a child port to start collecting.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param child_id	Port id of valid child.
  * @param enabled	Non-zero when collection enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if child is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t child_id,
 				int enabled);
 
 /**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from child port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param child_id	Port id of valid child.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if child is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t child_id);
 
 /**
- * Configure a slave port to start distributing.
+ * Configure a child port to start distributing.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param child_id	Port id of valid child.
  * @param enabled	Non-zero when distribution enabled.
  * @return
  *   0 - if ok
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if child is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t child_id,
 				int enabled);
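[Editor's note: a short sketch of how an external LACP implementation would typically drive these two knobs once it decides a child may carry traffic; it assumes the port was set up with an external slow-Rx callback, and return codes are ignored for brevity.]

	rte_eth_bond_8023ad_ext_collect(bond_port, child_port, 1);
	rte_eth_bond_8023ad_ext_distrib(bond_port, child_port, 1);

	/* The flags can be read back later, e.g. for state dumps. */
	int distributing = rte_eth_bond_8023ad_ext_distrib_get(bond_port, child_port);
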
 
 /**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from child port actor state.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port id of valid slave.
+ * @param child_id	Port id of valid child.
  * @return
  *   0 - if not set
  *   1 - if set
- *   -EINVAL if slave is not valid.
+ *   -EINVAL if child is not valid.
  */
 int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t child_id);
 
 /**
  * LACPDU transmit path for external 802.3ad state machine.  Caller retains
  * ownership of the packet on failure.
  *
  * @param port_id	Bonding device id
- * @param slave_id	Port ID of valid slave device.
+ * @param child_id	Port ID of valid child device.
  * @param lacp_pkt	mbuf containing LACPDU.
  *
  * @return
  *   0 on success, negative value otherwise.
  */
 int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t child_id,
 		struct rte_mbuf *lacp_pkt);
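[Editor's note: a hedged sketch of the transmit side. lacp_pool and build_lacpdu() are hypothetical placeholders for an application mempool and a helper that fills a complete lacpdu_header; the free on failure follows the ownership rule stated above.]

	struct rte_mbuf *m = rte_pktmbuf_alloc(lacp_pool);

	if (m != NULL && build_lacpdu(m) == 0 &&
	    rte_eth_bond_8023ad_ext_slowtx(bond_port, child_port, m) != 0)
		rte_pktmbuf_free(m);	/* PMD did not consume the packet */
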
 
 /**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on children
  *
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each child for
  * dedicated 802.3ad control plane traffic . A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each child to redirect all LACP slow packets to that rx queue
  * for processing in the LACP state machine, this removes the need to filter
  * these packets in the bonded devices data path. The additional tx queue is
  * used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * child hw independently of the bonded devices data path.
  *
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all children must support the programming of the flow
  * filter rule required for rx and have enough queues that one rx and tx queue
  * can be reserved for the LACP state machines control packets.
  *
@@ -296,7 +296,7 @@ int
 rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
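[Editor's note: usage sketch, assuming the usual call order where the bonding port is still stopped and its queues are configured afterwards.]

	if (rte_eth_bond_8023ad_dedicated_queues_enable(bond_port) != 0)
		rte_exit(EXIT_FAILURE,
			 "dedicated 802.3ad queues not supported by all children\n");
	/* ... then rte_eth_dev_configure(), queue setup and rte_eth_dev_start(). */
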
 
 /**
- * Disable slow queue on slaves
+ * Disable slow queue on children
  *
  * This function disables hardware slow packet filter.
  *
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a797135..0fcd1448c15b 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
 }
 
 static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_child(struct bond_dev_private *internals)
 {
 	uint16_t idx;
 
-	idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
-	internals->mode6.last_slave = idx;
-	return internals->active_slaves[idx];
+	idx = (internals->mode6.last_child + 1) % internals->active_child_count;
+	internals->mode6.last_child = idx;
+	return internals->active_children[idx];
 }
 
 int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
 	/* Fill hash table with initial values */
 	memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
 	rte_spinlock_init(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_child = ALB_NULL_INDEX;
 	internals->mode6.ntt = 0;
 
 	/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 	/*
 	 * We got reply for ARP Request send by the application. We need to
 	 * update client table when received data differ from what is stored
-	 * in ALB table and issue sending update packet to that slave.
+	 * in ALB table and issue sending update packet to that child.
 	 */
 	rte_spinlock_lock(&internals->mode6.lock);
 	if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		client_info->cli_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_sha,
 				&client_info->cli_mac);
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->child_idx = calculate_child(internals);
+		rte_eth_macaddr_get(client_info->child_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
 						&arp->arp_data.arp_tha,
 						&client_info->cli_mac);
 				}
-				rte_eth_macaddr_get(client_info->slave_idx,
+				rte_eth_macaddr_get(client_info->child_idx,
 						&client_info->app_mac);
 				rte_ether_addr_copy(&client_info->app_mac,
 						&arp->arp_data.arp_sha);
 				memcpy(client_info->vlan, eth_h + 1, offset);
 				client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 				rte_spinlock_unlock(&internals->mode6.lock);
-				return client_info->slave_idx;
+				return client_info->child_idx;
 			}
 		}
 
-		/* Assign new slave to this client and update src mac in ARP */
+		/* Assign new child to this client and update src mac in ARP */
 		client_info->in_use = 1;
 		client_info->ntt = 0;
 		client_info->app_ip = arp->arp_data.arp_sip;
 		rte_ether_addr_copy(&arp->arp_data.arp_tha,
 				&client_info->cli_mac);
 		client_info->cli_ip = arp->arp_data.arp_tip;
-		client_info->slave_idx = calculate_slave(internals);
-		rte_eth_macaddr_get(client_info->slave_idx,
+		client_info->child_idx = calculate_child(internals);
+		rte_eth_macaddr_get(client_info->child_idx,
 				&client_info->app_mac);
 		rte_ether_addr_copy(&client_info->app_mac,
 				&arp->arp_data.arp_sha);
 		memcpy(client_info->vlan, eth_h + 1, offset);
 		client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
 		rte_spinlock_unlock(&internals->mode6.lock);
-		return client_info->slave_idx;
+		return client_info->child_idx;
 	}
 
 	/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 {
 	struct rte_ether_hdr *eth_h;
 	struct rte_arp_hdr *arp_h;
-	uint16_t slave_idx;
+	uint16_t child_idx;
 
 	rte_spinlock_lock(&internals->mode6.lock);
 	eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
 	arp_h->arp_plen = sizeof(uint32_t);
 	arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
 
-	slave_idx = client_info->slave_idx;
+	child_idx = client_info->child_idx;
 	rte_spinlock_unlock(&internals->mode6.lock);
 
-	return slave_idx;
+	return child_idx;
 }
 
 void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
 
 	int i;
 
-	/* If active slave count is 0, it's pointless to refresh alb table */
-	if (internals->active_slave_count <= 0)
+	/* If active child count is 0, it's pointless to refresh alb table */
+	if (internals->active_child_count <= 0)
 		return;
 
 	rte_spinlock_lock(&internals->mode6.lock);
-	internals->mode6.last_slave = ALB_NULL_INDEX;
+	internals->mode6.last_child = ALB_NULL_INDEX;
 
 	for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
 		client_info = &internals->mode6.client_table[i];
 		if (client_info->in_use) {
-			client_info->slave_idx = calculate_slave(internals);
-			rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+			client_info->child_idx = calculate_child(internals);
+			rte_eth_macaddr_get(client_info->child_idx, &client_info->app_mac);
 			internals->mode6.ntt = 1;
 		}
 	}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc86..dae3f84c5efb 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
 	uint32_t cli_ip;
 	/**< Client IP address */
 
-	uint16_t slave_idx;
-	/**< Index of slave on which we connect with that client */
+	uint16_t child_idx;
+	/**< Index of child on which we connect with that client */
 	uint8_t in_use;
 	/**< Flag indicating if entry in client table is currently used */
 	uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
 	/**< Mempool for creating ARP update packets */
 	uint8_t ntt;
 	/**< Flag indicating if we need to send update to any client on next tx */
-	uint32_t last_slave;
-	/**< Index of last used slave in client table */
+	uint32_t last_child;
+	/**< Index of last used child in client table */
 	rte_spinlock_t lock;
 };
 
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
 		struct bond_dev_private *internals);
 
 /**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides on which child to
+ * send that packet. If the packet is an ARP Request, it is sent on the primary
+ * child. An ARP Reply is sent on the child stored in the client table for that
  * connection. On Reply function also updates data in client table.
  *
  * @param eth_h			ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of child on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
  * @param internals		Bonding data.
  *
  * @return
- * Index of slave on which packet should be sent.
+ * Index of child on which packet should be sent.
  */
 uint16_t
 bond_mode_alb_arp_upd(struct client_data *client_info,
 		struct rte_mbuf *pkt, struct bond_dev_private *internals);
 
 /**
- * Function updates slave indexes of active connections.
+ * Function updates child indexes of active connections.
  *
  * @param bond_dev		Pointer to bonded device struct.
  */
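[Editor's note: to summarise the Tx decision documented for bond_mode_alb_arp_xmit() above, a conceptual sketch — illustration only, not the driver code: ARP Replies follow the per-client child assignment, everything else uses the current primary.]

static uint16_t
alb_pick_child(const struct rte_arp_hdr *arp, const struct client_data *cli,
		uint16_t primary_child)
{
	if (arp != NULL &&
	    arp->arp_opcode == rte_cpu_to_be_16(RTE_ARP_OP_REPLY))
		return cli->child_idx;	/* child chosen round-robin at ARP time */
	return primary_child;		/* Requests and non-ARP traffic */
}
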
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index c0178369b464..231d117bc5ed 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
 }
 
 int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_parent_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 {
 	int i;
 	struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	/* Check if any of slave devices is a bonded device */
-	for (i = 0; i < internals->slave_count; i++)
-		if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+	/* Check if any of child devices is a bonded device */
+	for (i = 0; i < internals->child_count; i++)
+		if (valid_bonded_port_id(internals->children[i].port_id) == 0)
 			return 1;
 
 	return 0;
 }
 
 int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_child_port_id(struct bond_dev_private *internals, uint16_t child_port_id)
 {
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(child_port_id, -1);
 
-	/* Verify that slave_port_id refers to a non bonded port */
-	if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+	/* Verify that child_port_id refers to a non bonded port */
+	if (check_for_bonded_ethdev(&rte_eth_devices[child_port_id]) == 0 &&
 			internals->mode == BONDING_MODE_8023AD) {
-		RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
-				" mode as slave is also a bonded device, only "
+		RTE_BOND_LOG(ERR, "Cannot add child to bonded device in 802.3ad"
+				" mode as child is also a bonded device, only "
 				"physical devices can be support in this mode.");
 		return -1;
 	}
 
-	if (internals->port_id == slave_port_id) {
+	if (internals->port_id == child_port_id) {
 		RTE_BOND_LOG(ERR,
-			"Cannot add the bonded device itself as its slave.");
+			"Cannot add the bonded device itself as its child.");
 		return -1;
 	}
 
@@ -79,61 +79,61 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
 }
 
 void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_child(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_child_count;
 
 	if (internals->mode == BONDING_MODE_8023AD)
-		bond_mode_8023ad_activate_slave(eth_dev, port_id);
+		bond_mode_8023ad_activate_child(eth_dev, port_id);
 
 	if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB) {
 
-		internals->tlb_slaves_order[active_count] = port_id;
+		internals->tlb_children_order[active_count] = port_id;
 	}
 
-	RTE_ASSERT(internals->active_slave_count <
-			(RTE_DIM(internals->active_slaves) - 1));
+	RTE_ASSERT(internals->active_child_count <
+			(RTE_DIM(internals->active_children) - 1));
 
-	internals->active_slaves[internals->active_slave_count] = port_id;
-	internals->active_slave_count++;
+	internals->active_children[internals->active_child_count] = port_id;
+	internals->active_child_count++;
 
 	if (internals->mode == BONDING_MODE_TLB)
-		bond_tlb_activate_slave(internals);
+		bond_tlb_activate_child(internals);
 	if (internals->mode == BONDING_MODE_ALB)
 		bond_mode_alb_client_list_upd(eth_dev);
 }
 
 void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_child(struct rte_eth_dev *eth_dev, uint16_t port_id)
 {
-	uint16_t slave_pos;
+	uint16_t child_pos;
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
-	uint16_t active_count = internals->active_slave_count;
+	uint16_t active_count = internals->active_child_count;
 
 	if (internals->mode == BONDING_MODE_8023AD) {
 		bond_mode_8023ad_stop(eth_dev);
-		bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+		bond_mode_8023ad_deactivate_child(eth_dev, port_id);
 	} else if (internals->mode == BONDING_MODE_TLB
 			|| internals->mode == BONDING_MODE_ALB)
 		bond_tlb_disable(internals);
 
-	slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+	child_pos = find_child_by_id(internals->active_children, active_count,
 			port_id);
 
-	/* If slave was not at the end of the list
-	 * shift active slaves up active array list */
-	if (slave_pos < active_count) {
+	/* If the child was not at the end of the list,
+	 * shift the remaining active children up the active array */
+	if (child_pos < active_count) {
 		active_count--;
-		memmove(internals->active_slaves + slave_pos,
-				internals->active_slaves + slave_pos + 1,
-				(active_count - slave_pos) *
-					sizeof(internals->active_slaves[0]));
+		memmove(internals->active_children + child_pos,
+				internals->active_children + child_pos + 1,
+				(active_count - child_pos) *
+					sizeof(internals->active_children[0]));
 	}
 
-	RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
-	internals->active_slave_count = active_count;
+	RTE_ASSERT(active_count < RTE_DIM(internals->active_children));
+	internals->active_child_count = active_count;
 
 	if (eth_dev->data->dev_started) {
 		if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +192,7 @@ rte_eth_bond_free(const char *name)
 }
 
 static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+child_vlan_filter_set(uint16_t bonded_port_id, uint16_t child_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -224,7 +224,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 			if (unlikely(slab & mask)) {
 				uint16_t vlan_id = pos + i;
 
-				res = rte_eth_dev_vlan_filter(slave_port_id,
+				res = rte_eth_dev_vlan_filter(child_port_id,
 							      vlan_id, 1);
 			}
 		}
@@ -236,45 +236,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
 
 static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+child_rte_flow_prepare(uint16_t child_id, struct bond_dev_private *internals)
 {
 	struct rte_flow *flow;
 	struct rte_flow_error ferror;
-	uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+	uint16_t child_port_id = internals->children[child_id].port_id;
 
 	if (internals->flow_isolated_valid != 0) {
-		if (rte_eth_dev_stop(slave_port_id) != 0) {
+		if (rte_eth_dev_stop(child_port_id) != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_port_id);
+				     child_port_id);
 			return -1;
 		}
 
-		if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+		if (rte_flow_isolate(child_port_id, internals->flow_isolated,
 		    &ferror)) {
-			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
-				     " %d: %s", slave_id, ferror.message ?
+			RTE_BOND_LOG(ERR, "rte_flow_isolate failed for child"
+				     " %d: %s", child_id, ferror.message ?
 				     ferror.message : "(no stated reason)");
 			return -1;
 		}
 	}
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		flow->flows[slave_id] = rte_flow_create(slave_port_id,
+		flow->flows[child_id] = rte_flow_create(child_port_id,
 							flow->rule.attr,
 							flow->rule.pattern,
 							flow->rule.actions,
 							&ferror);
-		if (flow->flows[slave_id] == NULL) {
-			RTE_BOND_LOG(ERR, "Cannot create flow for slave"
-				     " %d: %s", slave_id,
+		if (flow->flows[child_id] == NULL) {
+			RTE_BOND_LOG(ERR, "Cannot create flow for child"
+				     " %d: %s", child_id,
 				     ferror.message ? ferror.message :
 				     "(no stated reason)");
-			/* Destroy successful bond flows from the slave */
+			/* Destroy successful bond flows from the child */
 			TAILQ_FOREACH(flow, &internals->flow_list, next) {
-				if (flow->flows[slave_id] != NULL) {
-					rte_flow_destroy(slave_port_id,
-							 flow->flows[slave_id],
+				if (flow->flows[child_id] != NULL) {
+					rte_flow_destroy(child_port_id,
+							 flow->flows[child_id],
 							 &ferror);
-					flow->flows[slave_id] = NULL;
+					flow->flows[child_id] = NULL;
 				}
 			}
 			return -1;
@@ -284,7 +284,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +292,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	internals->reta_size = di->reta_size;
 	internals->rss_key_len = di->hash_key_size;
 
-	/* Inherit Rx offload capabilities from the first slave device */
+	/* Inherit Rx offload capabilities from the first child device */
 	internals->rx_offload_capa = di->rx_offload_capa;
 	internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
 	internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
 
-	/* Inherit maximum Rx packet size from the first slave device */
+	/* Inherit maximum Rx packet size from the first child device */
 	internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
 
-	/* Inherit default Rx queue settings from the first slave device */
+	/* Inherit default Rx queue settings from the first child device */
 	memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * child devices. Applications may tweak this setting if need be.
 	 */
 	rxconf_i->rx_thresh.pthresh = 0;
 	rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +314,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	/* Setting this to zero should effectively enable default values */
 	rxconf_i->rx_free_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all child devices */
 	rxconf_i->rx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 					 const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
 
-	/* Inherit Tx offload capabilities from the first slave device */
+	/* Inherit Tx offload capabilities from the first child device */
 	internals->tx_offload_capa = di->tx_offload_capa;
 	internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
 
-	/* Inherit default Tx queue settings from the first slave device */
+	/* Inherit default Tx queue settings from the first child device */
 	memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
 
 	/*
 	 * Turn off descriptor prefetch and writeback by default for all
-	 * slave devices. Applications may tweak this setting if need be.
+	 * child devices. Applications may tweak this setting if need be.
 	 */
 	txconf_i->tx_thresh.pthresh = 0;
 	txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +341,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
 
 	/*
 	 * Setting these parameters to zero assumes that default
-	 * values will be configured implicitly by slave devices.
+	 * values will be configured implicitly by child devices.
 	 */
 	txconf_i->tx_free_thresh = 0;
 	txconf_i->tx_rs_thresh = 0;
 
-	/* Disable deferred start by default for all slave devices */
+	/* Disable deferred start by default for all child devices */
 	txconf_i->tx_deferred_start = 0;
 }
 
 static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +362,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 	internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
 
 	/*
-	 * If at least one slave device suggests enabling this
-	 * setting by default, enable it for all slave devices
+	 * If at least one child device suggests enabling this
+	 * setting by default, enable it for all child devices
 	 * since disabling it may not be necessarily supported.
 	 */
 	if (rxconf->rx_drop_en == 1)
 		rxconf_i->rx_drop_en = 1;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new child device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal rx_queue_offload_capa
 	 * value. Thus, the new internal value of default Rx queue offloads
 	 * has to be masked by rx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new child device.
 	 */
 	rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
 			     internals->rx_queue_offload_capa;
 
 	/*
-	 * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+	 * RETA size is GCD of all children RETA sizes, so, if all sizes will be
 	 * the power of 2, the lower one is GCD
 	 */
 	if (internals->reta_size > di->reta_size)
 		internals->reta_size = di->reta_size;
 	if (internals->rss_key_len > di->hash_key_size) {
-		RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+		RTE_BOND_LOG(WARNING, "child has different rss key size, "
 				"configuring rss may fail");
 		internals->rss_key_len = di->hash_key_size;
 	}
@@ -398,7 +398,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 }
 
 static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_child_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 					const struct rte_eth_dev_info *di)
 {
 	struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +408,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
 	internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
 
 	/*
-	 * Adding a new slave device may cause some of previously inherited
+	 * Adding a new child device may cause some of previously inherited
 	 * offloads to be withdrawn from the internal tx_queue_offload_capa
 	 * value. Thus, the new internal value of default Tx queue offloads
 	 * has to be masked by tx_queue_offload_capa to make sure that only
 	 * commonly supported offloads are preserved from both the previous
-	 * value and the value being inherited from the new slave device.
+	 * value and the value being inherited from the new child device.
 	 */
 	txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
 			     internals->tx_queue_offload_capa;
 }
 
 static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_child_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *child_desc_lim)
 {
-	memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+	memcpy(bond_desc_lim, child_desc_lim, sizeof(*bond_desc_lim));
 }
 
 static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
-		const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_child_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+		const struct rte_eth_desc_lim *child_desc_lim)
 {
 	bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
-					slave_desc_lim->nb_max);
+					child_desc_lim->nb_max);
 	bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
-					slave_desc_lim->nb_min);
+					child_desc_lim->nb_min);
 	bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
-					  slave_desc_lim->nb_align);
+					  child_desc_lim->nb_align);
 
 	if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
 	    bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +444,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
 	}
 
 	/* Treat maximum number of segments equal to 0 as unspecified */
-	if (slave_desc_lim->nb_seg_max != 0 &&
+	if (child_desc_lim->nb_seg_max != 0 &&
 	    (bond_desc_lim->nb_seg_max == 0 ||
-	     slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
-		bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
-	if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+	     child_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+		bond_desc_lim->nb_seg_max = child_desc_lim->nb_seg_max;
+	if (child_desc_lim->nb_mtu_seg_max != 0 &&
 	    (bond_desc_lim->nb_mtu_seg_max == 0 ||
-	     slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
-		bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+	     child_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+		bond_desc_lim->nb_mtu_seg_max = child_desc_lim->nb_mtu_seg_max;
 
 	return 0;
 }
 
 static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_child_add_lock_free(uint16_t bonded_port_id, uint16_t child_port_id)
 {
-	struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+	struct rte_eth_dev *bonded_eth_dev, *child_eth_dev;
 	struct bond_dev_private *internals;
 	struct rte_eth_link link_props;
 	struct rte_eth_dev_info dev_info;
@@ -468,77 +468,77 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_child_port_id(internals, child_port_id) != 0)
 		return -1;
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_SLAVE) {
-		RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+	child_eth_dev = &rte_eth_devices[child_port_id];
+	if (child_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDED_CHILD) {
+		RTE_BOND_LOG(ERR, "Child device is already a child of a bonded device");
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+	ret = rte_eth_dev_info_get(child_port_id, &dev_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port_id, strerror(-ret));
+			__func__, child_port_id, strerror(-ret));
 
 		return ret;
 	}
 	if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
-			     slave_port_id);
+		RTE_BOND_LOG(ERR, "Child (port %u) max_rx_pktlen too small",
+			     child_port_id);
 		return -1;
 	}
 
-	slave_add(internals, slave_eth_dev);
+	child_add(internals, child_eth_dev);
 
-	/* We need to store slaves reta_size to be able to synchronize RETA for all
-	 * slave devices even if its sizes are different.
+	/* We need to store children reta_size to be able to synchronize RETA for all
+	 * child devices even if its sizes are different.
 	 */
-	internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+	internals->children[internals->child_count].reta_size = dev_info.reta_size;
 
-	if (internals->slave_count < 1) {
-		/* if MAC is not user defined then use MAC of first slave add to
+	if (internals->child_count < 1) {
+		/* if MAC is not user defined then use MAC of first child add to
 		 * bonded device */
 		if (!internals->user_defined_mac) {
 			if (mac_address_set(bonded_eth_dev,
-					    slave_eth_dev->data->mac_addrs)) {
+					    child_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to set MAC address");
 				return -1;
 			}
 		}
 
-		/* Make primary slave */
-		internals->primary_port = slave_port_id;
-		internals->current_primary_port = slave_port_id;
+		/* Make primary child */
+		internals->primary_port = child_port_id;
+		internals->current_primary_port = child_port_id;
 
 		internals->speed_capa = dev_info.speed_capa;
 
-		/* Inherit queues settings from first slave */
-		internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
-		internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+		/* Inherit queues settings from first child */
+		internals->nb_rx_queues = child_eth_dev->data->nb_rx_queues;
+		internals->nb_tx_queues = child_eth_dev->data->nb_tx_queues;
 
-		eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+		eth_bond_child_inherit_dev_info_rx_first(internals, &dev_info);
+		eth_bond_child_inherit_dev_info_tx_first(internals, &dev_info);
 
-		eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+		eth_bond_child_inherit_desc_lim_first(&internals->rx_desc_lim,
 						      &dev_info.rx_desc_lim);
-		eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+		eth_bond_child_inherit_desc_lim_first(&internals->tx_desc_lim,
 						      &dev_info.tx_desc_lim);
 	} else {
 		int ret;
 
 		internals->speed_capa &= dev_info.speed_capa;
-		eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
-		eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+		eth_bond_child_inherit_dev_info_rx_next(internals, &dev_info);
+		eth_bond_child_inherit_dev_info_tx_next(internals, &dev_info);
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
+		ret = eth_bond_child_inherit_desc_lim_next(
 				&internals->rx_desc_lim, &dev_info.rx_desc_lim);
 		if (ret != 0)
 			return ret;
 
-		ret = eth_bond_slave_inherit_desc_lim_next(
+		ret = eth_bond_child_inherit_desc_lim_next(
 				&internals->tx_desc_lim, &dev_info.tx_desc_lim);
 		if (ret != 0)
 			return ret;
@@ -552,79 +552,79 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
 			internals->flow_type_rss_offloads;
 
-	if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
-			     slave_port_id);
+	if (child_rte_flow_prepare(internals->child_count, internals) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to prepare new child flows: port=%d",
+			     child_port_id);
 		return -1;
 	}
 
-	/* Add additional MAC addresses to the slave */
-	if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
-		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
-				slave_port_id);
+	/* Add additional MAC addresses to the child */
+	if (child_add_mac_addresses(bonded_eth_dev, child_port_id) != 0) {
+		RTE_BOND_LOG(ERR, "Failed to add mac address(es) to child %hu",
+				child_port_id);
 		return -1;
 	}
 
-	internals->slave_count++;
+	internals->child_count++;
 
 	if (bonded_eth_dev->data->dev_started) {
-		if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
-					slave_port_id);
+		if (child_configure(bonded_eth_dev, child_eth_dev) != 0) {
+			internals->child_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_children_configure: port=%d",
+					child_port_id);
 			return -1;
 		}
-		if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
-			internals->slave_count--;
-			RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
-					slave_port_id);
+		if (child_start(bonded_eth_dev, child_eth_dev) != 0) {
+			internals->child_count--;
+			RTE_BOND_LOG(ERR, "rte_bond_children_start: port=%d",
+					child_port_id);
 			return -1;
 		}
 	}
 
-	/* Update all slave devices MACs */
-	mac_address_slaves_update(bonded_eth_dev);
+	/* Update all child devices MACs */
+	mac_address_children_update(bonded_eth_dev);
 
 	/* Register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_register(child_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
 
-	/* If bonded device is started then we can add the slave to our active
-	 * slave array */
+	/* If bonded device is started then we can add the child to our active
+	 * child array */
 	if (bonded_eth_dev->data->dev_started) {
-		ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+		ret = rte_eth_link_get_nowait(child_port_id, &link_props);
 		if (ret < 0) {
-			rte_eth_dev_callback_unregister(slave_port_id,
+			rte_eth_dev_callback_unregister(child_port_id,
 					RTE_ETH_EVENT_INTR_LSC,
 					bond_ethdev_lsc_event_callback,
 					&bonded_eth_dev->data->port_id);
-			internals->slave_count--;
+			internals->child_count--;
 			RTE_BOND_LOG(ERR,
-				"Slave (port %u) link get failed: %s\n",
-				slave_port_id, rte_strerror(-ret));
+				"Child (port %u) link get failed: %s\n",
+				child_port_id, rte_strerror(-ret));
 			return -1;
 		}
 
 		if (link_props.link_status == RTE_ETH_LINK_UP) {
-			if (internals->active_slave_count == 0 &&
+			if (internals->active_child_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
-							slave_port_id);
+							child_port_id);
 		}
 	}
 
-	/* Add slave details to bonded device */
-	slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_SLAVE;
+	/* Add child details to bonded device */
+	child_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDED_CHILD;
 
-	slave_vlan_filter_set(bonded_port_id, slave_port_id);
+	child_vlan_filter_set(bonded_port_id, child_port_id);
 
 	return 0;
 
 }
 
 int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_child_add(uint16_t bonded_port_id, uint16_t child_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -637,12 +637,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_child_port_id(internals, child_port_id) != 0)
 		return -1;
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_child_add_lock_free(bonded_port_id, child_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -650,93 +650,93 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
 }
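[Editor's note: for completeness, a minimal application-side sketch using the renamed API. Port ids 0 and 1 are placeholders, and rte_eth_bond_create() is the existing public constructor, unchanged by this series.]

	int bond_port = rte_eth_bond_create("net_bonding0", BONDING_MODE_8023AD, 0);

	if (bond_port < 0 ||
	    rte_eth_bond_child_add(bond_port, 0) != 0 ||
	    rte_eth_bond_child_add(bond_port, 1) != 0)
		rte_exit(EXIT_FAILURE, "bonding setup failed\n");
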
 
 static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
-				   uint16_t slave_port_id)
+__eth_bond_child_remove_lock_free(uint16_t bonded_port_id,
+				   uint16_t child_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *child_eth_dev;
 	struct rte_flow_error flow_error;
 	struct rte_flow *flow;
-	int i, slave_idx;
+	int i, child_idx;
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) < 0)
+	if (valid_child_port_id(internals, child_port_id) < 0)
 		return -1;
 
-	/* first remove from active slave list */
-	slave_idx = find_slave_by_id(internals->active_slaves,
-		internals->active_slave_count, slave_port_id);
+	/* first remove from active child list */
+	child_idx = find_child_by_id(internals->active_children,
+		internals->active_child_count, child_port_id);
 
-	if (slave_idx < internals->active_slave_count)
-		deactivate_slave(bonded_eth_dev, slave_port_id);
+	if (child_idx < internals->active_child_count)
+		deactivate_child(bonded_eth_dev, child_port_id);
 
-	slave_idx = -1;
-	/* now find in slave list */
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id == slave_port_id) {
-			slave_idx = i;
+	child_idx = -1;
+	/* now find in child list */
+	for (i = 0; i < internals->child_count; i++)
+		if (internals->children[i].port_id == child_port_id) {
+			child_idx = i;
 			break;
 		}
 
-	if (slave_idx < 0) {
-		RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
-				internals->slave_count);
+	if (child_idx < 0) {
+		RTE_BOND_LOG(ERR, "Couldn't find child in port list, child count %u",
+				internals->child_count);
 		return -1;
 	}
 
 	/* Un-register link status change callback with bonded device pointer as
 	 * argument*/
-	rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+	rte_eth_dev_callback_unregister(child_port_id, RTE_ETH_EVENT_INTR_LSC,
 			bond_ethdev_lsc_event_callback,
 			&rte_eth_devices[bonded_port_id].data->port_id);
 
-	/* Restore original MAC address of slave device */
-	rte_eth_dev_default_mac_addr_set(slave_port_id,
-			&(internals->slaves[slave_idx].persisted_mac_addr));
+	/* Restore original MAC address of child device */
+	rte_eth_dev_default_mac_addr_set(child_port_id,
+			&(internals->children[child_idx].persisted_mac_addr));
 
-	/* remove additional MAC addresses from the slave */
-	slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+	/* remove additional MAC addresses from the child */
+	child_remove_mac_addresses(bonded_eth_dev, child_port_id);
 
 	/*
-	 * Remove bond device flows from slave device.
+	 * Remove bond device flows from child device.
 	 * Note: don't restore flow isolate mode.
 	 */
 	TAILQ_FOREACH(flow, &internals->flow_list, next) {
-		if (flow->flows[slave_idx] != NULL) {
-			rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+		if (flow->flows[child_idx] != NULL) {
+			rte_flow_destroy(child_port_id, flow->flows[child_idx],
 					 &flow_error);
-			flow->flows[slave_idx] = NULL;
+			flow->flows[child_idx] = NULL;
 		}
 	}
 
-	slave_eth_dev = &rte_eth_devices[slave_port_id];
-	slave_remove(internals, slave_eth_dev);
-	slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_SLAVE);
+	child_eth_dev = &rte_eth_devices[child_port_id];
+	child_remove(internals, child_eth_dev);
+	child_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDED_CHILD);
 
-	/*  first slave in the active list will be the primary by default,
+	/*  first child in the active list will be the primary by default,
 	 *  otherwise use first device in list */
-	if (internals->current_primary_port == slave_port_id) {
-		if (internals->active_slave_count > 0)
-			internals->current_primary_port = internals->active_slaves[0];
-		else if (internals->slave_count > 0)
-			internals->current_primary_port = internals->slaves[0].port_id;
+	if (internals->current_primary_port == child_port_id) {
+		if (internals->active_child_count > 0)
+			internals->current_primary_port = internals->active_children[0];
+		else if (internals->child_count > 0)
+			internals->current_primary_port = internals->children[0].port_id;
 		else
 			internals->primary_port = 0;
-		mac_address_slaves_update(bonded_eth_dev);
+		mac_address_children_update(bonded_eth_dev);
 	}
 
-	if (internals->active_slave_count < 1) {
-		/* if no slaves are any longer attached to bonded device and MAC is not
+	if (internals->active_child_count < 1) {
+		/* if no children are any longer attached to bonded device and MAC is not
 		 * user defined then clear MAC of bonded device as it will be reset
-		 * when a new slave is added */
-		if (internals->slave_count < 1 && !internals->user_defined_mac)
+		 * when a new child is added */
+		if (internals->child_count < 1 && !internals->user_defined_mac)
 			memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
 					sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
 	}
-	if (internals->slave_count == 0) {
+	if (internals->child_count == 0) {
 		internals->rx_offload_capa = 0;
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
@@ -750,7 +750,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 }
 
 int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_child_remove(uint16_t bonded_port_id, uint16_t child_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
@@ -764,7 +764,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	rte_spinlock_lock(&internals->lock);
 
-	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+	retval = __eth_bond_child_remove_lock_free(bonded_port_id, child_port_id);
 
 	rte_spinlock_unlock(&internals->lock);
 
@@ -781,7 +781,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 
-	if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+	if (check_for_parent_bonded_ethdev(bonded_eth_dev) != 0 &&
 			mode == BONDING_MODE_8023AD)
 		return -1;
 
@@ -802,7 +802,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
 }
 
 int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t child_port_id)
 {
 	struct bond_dev_private *internals;
 
@@ -811,13 +811,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (valid_slave_port_id(internals, slave_port_id) != 0)
+	if (valid_child_port_id(internals, child_port_id) != 0)
 		return -1;
 
 	internals->user_defined_primary_port = 1;
-	internals->primary_port = slave_port_id;
+	internals->primary_port = child_port_id;
 
-	bond_ethdev_primary_set(internals, slave_port_id);
+	bond_ethdev_primary_set(internals, child_port_id);
 
 	return 0;
 }
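[Editor's note: sketch continuing the example above — pin the primary child and read the selection back; the getter returns the current primary port id or -1.]

	if (rte_eth_bond_primary_set(bond_port, 1) == 0)
		printf("primary child is now port %d\n",
		       rte_eth_bond_primary_get(bond_port));
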
@@ -832,14 +832,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count < 1)
+	if (internals->child_count < 1)
 		return -1;
 
 	return internals->current_primary_port;
 }
 
 int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_children_get(uint16_t bonded_port_id, uint16_t children[],
 			uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -848,22 +848,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (children == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->slave_count > len)
+	if (internals->child_count > len)
 		return -1;
 
-	for (i = 0; i < internals->slave_count; i++)
-		slaves[i] = internals->slaves[i].port_id;
+	for (i = 0; i < internals->child_count; i++)
+		children[i] = internals->children[i].port_id;
 
-	return internals->slave_count;
+	return internals->child_count;
 }
 
 int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_children_get(uint16_t bonded_port_id, uint16_t children[],
 		uint16_t len)
 {
 	struct bond_dev_private *internals;
@@ -871,18 +871,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
 
-	if (slaves == NULL)
+	if (children == NULL)
 		return -1;
 
 	internals = rte_eth_devices[bonded_port_id].data->dev_private;
 
-	if (internals->active_slave_count > len)
+	if (internals->active_child_count > len)
 		return -1;
 
-	memcpy(slaves, internals->active_slaves,
-	internals->active_slave_count * sizeof(internals->active_slaves[0]));
+	memcpy(children, internals->active_children,
+	internals->active_child_count * sizeof(internals->active_children[0]));
 
-	return internals->active_slave_count;
+	return internals->active_child_count;
 }
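[Editor's note: sketch of enumerating children with the renamed getters; both return the count, or -1 when the supplied array is too small.]

	uint16_t children[RTE_MAX_ETHPORTS];
	int i, n = rte_eth_bond_children_get(bond_port, children, RTE_DIM(children));

	for (i = 0; i < n; i++)
		printf("child[%d] = port %u\n", i, children[i]);
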
 
 int
@@ -904,9 +904,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
 
 	internals->user_defined_mac = 1;
 
-	/* Update all slave devices MACs*/
-	if (internals->slave_count > 0)
-		return mac_address_slaves_update(bonded_eth_dev);
+	/* Update all child devices MACs*/
+	if (internals->child_count > 0)
+		return mac_address_children_update(bonded_eth_dev);
 
 	return 0;
 }
@@ -925,30 +925,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
 
 	internals->user_defined_mac = 0;
 
-	if (internals->slave_count > 0) {
-		int slave_port;
-		/* Get the primary slave location based on the primary port
-		 * number as, while slave_add(), we will keep the primary
-		 * slave based on slave_count,but not based on the primary port.
+	if (internals->child_count > 0) {
+		int child_port;
+		/* Get the primary child location based on the primary port
+		 * number as, while child_add(), we will keep the primary
+		 * child based on child_count,but not based on the primary port.
 		 */
-		for (slave_port = 0; slave_port < internals->slave_count;
-		     slave_port++) {
-			if (internals->slaves[slave_port].port_id ==
+		for (child_port = 0; child_port < internals->child_count;
+		     child_port++) {
+			if (internals->children[child_port].port_id ==
 			    internals->primary_port)
 				break;
 		}
 
 		/* Set MAC Address of Bonded Device */
 		if (mac_address_set(bonded_eth_dev,
-			&internals->slaves[slave_port].persisted_mac_addr)
+			&internals->children[child_port].persisted_mac_addr)
 				!= 0) {
 			RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
 			return -1;
 		}
-		/* Update all slave devices MAC addresses */
-		return mac_address_slaves_update(bonded_eth_dev);
+		/* Update all child devices MAC addresses */
+		return mac_address_children_update(bonded_eth_dev);
 	}
-	/* No need to update anything as no slaves present */
+	/* No need to update anything as no children present */
 	return 0;
 }
 
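
[Note: for reference, a minimal usage sketch of the renamed getters touched above
(rte_eth_bond_children_get / rte_eth_bond_active_children_get). This is not part of
the patch; the port numbering and printing are illustrative assumptions, only the
function signatures and return semantics follow the diff above.]

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* Illustrative only: print the configured and active child ports of a
     * bonded device, using the renamed getters introduced by this patch.
     * Both calls return the number of ports written, or -1 on error. */
    static void
    dump_bond_children(uint16_t bonded_port_id)
    {
            uint16_t children[RTE_MAX_ETHPORTS];
            int n, i;

            n = rte_eth_bond_children_get(bonded_port_id, children,
                                          RTE_MAX_ETHPORTS);
            if (n < 0) {
                    printf("port %u is not a bonded device\n", bonded_port_id);
                    return;
            }
            printf("bonded port %u has %d child port(s):", bonded_port_id, n);
            for (i = 0; i < n; i++)
                    printf(" %u", children[i]);
            printf("\n");

            n = rte_eth_bond_active_children_get(bonded_port_id, children,
                                                 RTE_MAX_ETHPORTS);
            if (n >= 0)
                    printf("  %d of them currently active\n", n);
    }
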
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index 6553166f5cb7..c4af24f119e7 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
 #include "eth_bond_private.h"
 
 const char *pmd_bond_init_valid_arguments[] = {
-	PMD_BOND_SLAVE_PORT_KVARG,
-	PMD_BOND_PRIMARY_SLAVE_KVARG,
+	PMD_BOND_CHILD_PORT_KVARG,
+	PMD_BOND_PRIMARY_CHILD_KVARG,
 	PMD_BOND_MODE_KVARG,
 	PMD_BOND_XMIT_POLICY_KVARG,
 	PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
 }
 
 int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_child_port_kvarg(const char *key,
 		const char *value, void *extra_args)
 {
-	struct bond_ethdev_slave_ports *slave_ports;
+	struct bond_ethdev_child_ports *child_ports;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	slave_ports = extra_args;
+	child_ports = extra_args;
 
-	if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+	if (strcmp(key, PMD_BOND_CHILD_PORT_KVARG) == 0) {
 		int port_id = parse_port_id(value);
 		if (port_id < 0) {
-			RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+			RTE_BOND_LOG(ERR, "Invalid child port value (%s) specified",
 				     value);
 			return -1;
 		} else
-			slave_ports->slaves[slave_ports->slave_count++] =
+			child_ports->children[child_ports->child_count++] =
 					port_id;
 	}
 	return 0;
 }
 
 int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_child_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
 	case BONDING_MODE_ALB:
 		return 0;
 	default:
-		RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+		RTE_BOND_LOG(ERR, "Invalid child mode value (%s) specified", value);
 		return -1;
 	}
 }
 
 int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_child_agg_mode_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
 	uint8_t *agg_mode;
@@ -221,19 +221,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
 }
 
 int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_child_port_id_kvarg(const char *key __rte_unused,
 		const char *value, void *extra_args)
 {
-	int primary_slave_port_id;
+	int primary_child_port_id;
 
 	if (value == NULL || extra_args == NULL)
 		return -1;
 
-	primary_slave_port_id = parse_port_id(value);
-	if (primary_slave_port_id < 0)
+	primary_child_port_id = parse_port_id(value);
+	if (primary_child_port_id < 0)
 		return -1;
 
-	*(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+	*(uint16_t *)extra_args = (uint16_t)primary_child_port_id;
 
 	return 0;
 }
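
[Note: the kvarg handlers renamed above are invoked through rte_kvargs_process()
during vdev probe. A rough sketch of that pattern is below; it is not part of the
patch. The key string "primary_child" and the local handler are assumptions for
illustration only -- the real key values live in eth_bond_private.h, which this
hunk does not show.]

    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_kvargs.h>

    /* Illustrative handler with the same shape as
     * bond_ethdev_parse_primary_child_port_id_kvarg() above: the value
     * string is converted to a port id and written through extra_args. */
    static int
    primary_handler(const char *key, const char *value, void *extra_args)
    {
            char *end = NULL;
            long port;

            (void)key;
            if (value == NULL || extra_args == NULL)
                    return -1;
            port = strtol(value, &end, 10);
            if (end == value || *end != '\0' || port < 0 || port > UINT16_MAX)
                    return -1;
            *(uint16_t *)extra_args = (uint16_t)port;
            return 0;
    }

    /* Example: extract "primary_child=2" from a devargs string. */
    static int
    get_primary_child(const char *args, uint16_t *port_id)
    {
            struct rte_kvargs *kvlist = rte_kvargs_parse(args, NULL);
            int ret;

            if (kvlist == NULL)
                    return -1;
            ret = rte_kvargs_process(kvlist, "primary_child",
                                     primary_handler, port_id);
            rte_kvargs_free(kvlist);
            return ret;
    }
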
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae709..b2d5b171c712 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+	for (i = 0; i < internals->child_count; i++) {
+		ret = rte_flow_validate(internals->children[i].port_id, attr,
 					patterns, actions, err);
 		if (ret) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for child %d with error %d", i, ret);
 			return ret;
 		}
 	}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				   NULL, rte_strerror(ENOMEM));
 		return NULL;
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+	for (i = 0; i < internals->child_count; i++) {
+		flow->flows[i] = rte_flow_create(internals->children[i].port_id,
 						 attr, patterns, actions, err);
 		if (unlikely(flow->flows[i] == NULL)) {
-			RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+			RTE_BOND_LOG(ERR, "Failed to create flow on child %d",
 				     i);
 			goto err;
 		}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 	TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
 	return flow;
 err:
-	/* Destroy all slaves flows. */
-	for (i = 0; i < internals->slave_count; i++) {
+	/* Destroy all children flows. */
+	for (i = 0; i < internals->child_count; i++) {
 		if (flow->flows[i] != NULL)
-			rte_flow_destroy(internals->slaves[i].port_id,
+			rte_flow_destroy(internals->children[i].port_id,
 					 flow->flows[i], err);
 	}
 	bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int i;
 	int ret = 0;
 
-	for (i = 0; i < internals->slave_count; i++) {
+	for (i = 0; i < internals->child_count; i++) {
 		int lret;
 
 		if (unlikely(flow->flows[i] == NULL))
 			continue;
-		lret = rte_flow_destroy(internals->slaves[i].port_id,
+		lret = rte_flow_destroy(internals->children[i].port_id,
 					flow->flows[i], err);
 		if (unlikely(lret != 0)) {
-			RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+			RTE_BOND_LOG(ERR, "Failed to destroy flow on child %d:"
 				     " %d", i, lret);
 			ret = lret;
 		}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	int ret = 0;
 	int lret;
 
-	/* Destroy all bond flows from its slaves instead of flushing them to
+	/* Destroy all bond flows from its children instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
 	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 			ret = lret;
 	}
 	if (unlikely(ret != 0))
-		RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+		RTE_BOND_LOG(ERR, "Failed to flush flow in all children");
 	return ret;
 }
 
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *err)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_flow_query_count slave_count;
+	struct rte_flow_query_count child_count;
 	int i;
 	int ret;
 
 	count->bytes = 0;
 	count->hits = 0;
-	rte_memcpy(&slave_count, count, sizeof(slave_count));
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_query(internals->slaves[i].port_id,
+	rte_memcpy(&child_count, count, sizeof(child_count));
+	for (i = 0; i < internals->child_count; i++) {
+		ret = rte_flow_query(internals->children[i].port_id,
 				     flow->flows[i], action,
-				     &slave_count, err);
+				     &child_count, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Failed to query flow on"
-				     " slave %d: %d", i, ret);
+				     " child %d: %d", i, ret);
 			return ret;
 		}
-		count->bytes += slave_count.bytes;
-		count->hits += slave_count.hits;
-		slave_count.bytes = 0;
-		slave_count.hits = 0;
+		count->bytes += child_count.bytes;
+		count->hits += child_count.hits;
+		child_count.bytes = 0;
+		child_count.hits = 0;
 	}
 	return 0;
 }
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
 	int i;
 	int ret;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+	for (i = 0; i < internals->child_count; i++) {
+		ret = rte_flow_isolate(internals->children[i].port_id, set, err);
 		if (unlikely(ret != 0)) {
 			RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
-				     " for slave %d with error %d", i, ret);
+				     " for child %d with error %d", i, ret);
 			internals->flow_isolated_valid = 0;
 			return ret;
 		}
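
[Note: the flow query path above simply sums the per-child COUNT results into the
caller's structure. A condensed sketch of that aggregation pattern follows; it is
not part of the patch, error handling is trimmed and the child_flows[] array is
assumed to hold the per-child rte_flow handles created by bond_flow_create().]

    #include <string.h>
    #include <rte_flow.h>

    /* Illustrative only: aggregate RTE_FLOW_ACTION_TYPE_COUNT results from
     * every child port into a single rte_flow_query_count, loosely
     * mirroring bond_flow_query_count() above. */
    static int
    sum_child_counters(const uint16_t *children, struct rte_flow **child_flows,
                       uint16_t child_count,
                       const struct rte_flow_action *action,
                       struct rte_flow_query_count *total,
                       struct rte_flow_error *err)
    {
            struct rte_flow_query_count per_child;
            uint16_t i;
            int ret;

            total->hits = 0;
            total->bytes = 0;
            for (i = 0; i < child_count; i++) {
                    memset(&per_child, 0, sizeof(per_child));
                    ret = rte_flow_query(children[i], child_flows[i], action,
                                         &per_child, err);
                    if (ret != 0)
                            return ret;     /* first failing child aborts */
                    total->hits += per_child.hits;
                    total->bytes += per_child.bytes;
            }
            return 0;
    }
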
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index f0c4f7d26b86..5c9da8d0d5f8 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,33 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct bond_dev_private *internals;
 
 	uint16_t num_rx_total = 0;
-	uint16_t slave_count;
-	uint16_t active_slave;
+	uint16_t child_count;
+	uint16_t active_child;
 	int i;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 	internals = bd_rx_q->dev_private;
-	slave_count = internals->active_slave_count;
-	active_slave = bd_rx_q->active_slave;
+	child_count = internals->active_child_count;
+	active_child = bd_rx_q->active_child;
 
-	for (i = 0; i < slave_count && nb_pkts; i++) {
-		uint16_t num_rx_slave;
+	for (i = 0; i < child_count && nb_pkts; i++) {
+		uint16_t num_rx_child;
 
 		/* Offset of pointer to *bufs increases as packets are received
-		 * from other slaves */
-		num_rx_slave =
-			rte_eth_rx_burst(internals->active_slaves[active_slave],
+		 * from other children */
+		num_rx_child =
+			rte_eth_rx_burst(internals->active_children[active_child],
 					 bd_rx_q->queue_id,
 					 bufs + num_rx_total, nb_pkts);
-		num_rx_total += num_rx_slave;
-		nb_pkts -= num_rx_slave;
-		if (++active_slave >= slave_count)
-			active_slave = 0;
+		num_rx_total += num_rx_child;
+		nb_pkts -= num_rx_child;
+		if (++active_child >= child_count)
+			active_child = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_child >= child_count)
+		bd_rx_q->active_child = 0;
 	return num_rx_total;
 }
 
@@ -158,8 +158,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
 
 int
 bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
-		uint16_t slave_port) {
-	struct rte_eth_dev_info slave_info;
+		uint16_t child_port) {
+	struct rte_eth_dev_info child_info;
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
 
@@ -177,29 +177,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
 		}
 	};
 
-	int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+	int ret = rte_flow_validate(child_port, &flow_attr_8023ad,
 			flow_item_8023ad, actions, &error);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
-				__func__, error.message, slave_port,
+		RTE_BOND_LOG(ERR, "%s: %s (child_port=%d queue_id=%d)",
+				__func__, error.message, child_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
 
-	ret = rte_eth_dev_info_get(slave_port, &slave_info);
+	ret = rte_eth_dev_info_get(child_port, &child_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
 			"%s: Error during getting device (port %u) info: %s\n",
-			__func__, slave_port, strerror(-ret));
+			__func__, child_port, strerror(-ret));
 
 		return ret;
 	}
 
-	if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
-			slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+	if (child_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+			child_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
 		RTE_BOND_LOG(ERR,
-			"%s: Slave %d capabilities doesn't allow allocating additional queues",
-			__func__, slave_port);
+			"%s: Child %d capabilities doesn't allow allocating additional queues",
+			__func__, child_port);
 		return -1;
 	}
 
@@ -214,8 +214,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 	uint16_t idx;
 	int ret;
 
-	/* Verify if all slaves in bonding supports flow director and */
-	if (internals->slave_count > 0) {
+	/* Verify if all children in bonding supports flow director and */
+	if (internals->child_count > 0) {
 		ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR,
@@ -229,9 +229,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 		internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
 		internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
+		for (idx = 0; idx < internals->child_count; idx++) {
 			if (bond_ethdev_8023ad_flow_verify(bond_dev,
-					internals->slaves[idx].port_id) != 0)
+					internals->children[idx].port_id) != 0)
 				return -1;
 		}
 	}
@@ -240,7 +240,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 }
 
 int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t child_port) {
 
 	struct rte_flow_error error;
 	struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +258,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
 		}
 	};
 
-	internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+	internals->mode4.dedicated_queues.flow[child_port] = rte_flow_create(child_port,
 			&flow_attr_8023ad, flow_item_8023ad, actions, &error);
-	if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+	if (internals->mode4.dedicated_queues.flow[child_port] == NULL) {
 		RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
-				"(slave_port=%d queue_id=%d)",
-				error.message, slave_port,
+				"(child_port=%d queue_id=%d)",
+				error.message, child_port,
 				internals->mode4.dedicated_queues.rx_qid);
 		return -1;
 	}
@@ -304,10 +304,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	const uint16_t ether_type_slow_be =
 		rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	uint16_t slave_count, idx;
+	uint16_t children[RTE_MAX_ETHPORTS];
+	uint16_t child_count, idx;
 
-	uint8_t collecting;  /* current slave collecting status */
+	uint8_t collecting;  /* current child collecting status */
 	const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
 	const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
 	uint8_t subtype;
@@ -315,24 +315,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	uint16_t j;
 	uint16_t k;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy child list to protect against child up/down changes during tx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * slave_count);
+	child_count = internals->active_child_count;
+	memcpy(children, internals->active_children,
+			sizeof(internals->active_children[0]) * child_count);
 
-	idx = bd_rx_q->active_slave;
-	if (idx >= slave_count) {
-		bd_rx_q->active_slave = 0;
+	idx = bd_rx_q->active_child;
+	if (idx >= child_count) {
+		bd_rx_q->active_child = 0;
 		idx = 0;
 	}
-	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+	for (i = 0; i < child_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+		collecting = ACTOR_STATE(&bond_mode_8023ad_ports[children[idx]],
 					 COLLECTING);
 
-		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+		/* Read packets from this child */
+		num_rx_total += rte_eth_rx_burst(children[idx], bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +348,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 
 			/* Remove packet from array if:
 			 * - it is slow packet but no dedicated rxq is present,
-			 * - slave is not in collecting state,
+			 * - child is not in collecting state,
 			 * - bonding interface is not in promiscuous mode and
 			 *   packet address isn't in mac_addrs array:
 			 *   - packet is unicast,
@@ -367,7 +367,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 				  !allmulti)))) {
 				if (hdr->ether_type == ether_type_slow_be) {
 					bond_mode_8023ad_handle_slow_pkt(
-					    internals, slaves[idx], bufs[j]);
+					    internals, children[idx], bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
 
@@ -380,12 +380,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 			} else
 				j++;
 		}
-		if (unlikely(++idx == slave_count))
+		if (unlikely(++idx == child_count))
 			idx = 0;
 	}
 
-	if (++bd_rx_q->active_slave >= slave_count)
-		bd_rx_q->active_slave = 0;
+	if (++bd_rx_q->active_child >= child_count)
+		bd_rx_q->active_child = 0;
 
 	return num_rx_total;
 }
@@ -583,59 +583,59 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
-	uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+	struct rte_mbuf *child_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+	uint16_t child_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
 
-	uint16_t num_of_slaves;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_children;
+	uint16_t children[RTE_MAX_ETHPORTS];
 
-	uint16_t num_tx_total = 0, num_tx_slave;
+	uint16_t num_tx_total = 0, num_tx_child;
 
-	static int slave_idx = 0;
-	int i, cslave_idx = 0, tx_fail_total = 0;
+	static int child_idx = 0;
+	int i, cchild_idx = 0, tx_fail_total = 0;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy child list to protect against child up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_children = internals->active_child_count;
+	memcpy(children, internals->active_children,
+			sizeof(internals->active_children[0]) * num_of_children);
 
-	if (num_of_slaves < 1)
+	if (num_of_children < 1)
 		return num_tx_total;
 
-	/* Populate slaves mbuf with which packets are to be sent on it  */
+	/* Populate children mbuf with which packets are to be sent on it  */
 	for (i = 0; i < nb_pkts; i++) {
-		cslave_idx = (slave_idx + i) % num_of_slaves;
-		slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+		cchild_idx = (child_idx + i) % num_of_children;
+		child_bufs[cchild_idx][(child_nb_pkts[cchild_idx])++] = bufs[i];
 	}
 
-	/* increment current slave index so the next call to tx burst starts on the
-	 * next slave */
-	slave_idx = ++cslave_idx;
+	/* increment current child index so the next call to tx burst starts on the
+	 * next child */
+	child_idx = ++cchild_idx;
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < num_of_slaves; i++) {
-		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_prepare(slaves[i],
-					bd_tx_q->queue_id, slave_bufs[i],
-					slave_nb_pkts[i]);
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					slave_bufs[i], num_tx_slave);
+	/* Send packet burst on each child device */
+	for (i = 0; i < num_of_children; i++) {
+		if (child_nb_pkts[i] > 0) {
+			num_tx_child = rte_eth_tx_prepare(children[i],
+					bd_tx_q->queue_id, child_bufs[i],
+					child_nb_pkts[i]);
+			num_tx_child = rte_eth_tx_burst(children[i], bd_tx_q->queue_id,
+					child_bufs[i], num_tx_child);
 
 			/* if tx burst fails move packets to end of bufs */
-			if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
-				int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+			if (unlikely(num_tx_child < child_nb_pkts[i])) {
+				int tx_fail_child = child_nb_pkts[i] - num_tx_child;
 
-				tx_fail_total += tx_fail_slave;
+				tx_fail_total += tx_fail_child;
 
 				memcpy(&bufs[nb_pkts - tx_fail_total],
-				       &slave_bufs[i][num_tx_slave],
-				       tx_fail_slave * sizeof(bufs[0]));
+				       &child_bufs[i][num_tx_child],
+				       tx_fail_child * sizeof(bufs[0]));
 			}
-			num_tx_total += num_tx_slave;
+			num_tx_total += num_tx_child;
 		}
 	}
 
@@ -653,7 +653,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	if (internals->active_slave_count < 1)
+	if (internals->active_child_count < 1)
 		return 0;
 
 	nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +699,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
 
 void
 burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t child_count, uint16_t *children)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint32_t hash;
@@ -710,13 +710,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 
 		hash = ether_hash(eth_hdr);
 
-		slaves[i] = (hash ^= hash >> 8) % slave_count;
+		children[i] = (hash ^= hash >> 8) % child_count;
 	}
 }
 
 void
 burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t child_count, uint16_t *children)
 {
 	uint16_t i;
 	struct rte_ether_hdr *eth_hdr;
@@ -748,13 +748,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		children[i] = hash % child_count;
 	}
 }
 
 void
 burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
-		uint16_t slave_count, uint16_t *slaves)
+		uint16_t child_count, uint16_t *children)
 {
 	struct rte_ether_hdr *eth_hdr;
 	uint16_t proto;
@@ -822,30 +822,30 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
 		hash ^= hash >> 16;
 		hash ^= hash >> 8;
 
-		slaves[i] = hash % slave_count;
+		children[i] = hash % child_count;
 	}
 }
 
-struct bwg_slave {
+struct bwg_child {
 	uint64_t bwg_left_int;
 	uint64_t bwg_left_remainder;
-	uint16_t slave;
+	uint16_t child;
 };
 
 void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_child(struct bond_dev_private *internals) {
 	int i;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		tlb_last_obytets[internals->active_slaves[i]] = 0;
+	for (i = 0; i < internals->active_child_count; i++) {
+		tlb_last_obytets[internals->active_children[i]] = 0;
 	}
 }
 
 static int
 bandwidth_cmp(const void *a, const void *b)
 {
-	const struct bwg_slave *bwg_a = a;
-	const struct bwg_slave *bwg_b = b;
+	const struct bwg_child *bwg_a = a;
+	const struct bwg_child *bwg_b = b;
 	int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
 	int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
 			(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +863,14 @@ bandwidth_cmp(const void *a, const void *b)
 
 static void
 bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
-		struct bwg_slave *bwg_slave)
+		struct bwg_child *bwg_child)
 {
 	struct rte_eth_link link_status;
 	int ret;
 
 	ret = rte_eth_link_get_nowait(port_id, &link_status);
 	if (ret < 0) {
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+		RTE_BOND_LOG(ERR, "Child (port %u) link get failed: %s",
 			     port_id, rte_strerror(-ret));
 		return;
 	}
@@ -878,51 +878,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
 	if (link_bwg == 0)
 		return;
 	link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
-	bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
-	bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+	bwg_child->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
+	bwg_child->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
 }
 
 static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_child_cb(void *arg)
 {
 	struct bond_dev_private *internals = arg;
-	struct rte_eth_stats slave_stats;
-	struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	struct rte_eth_stats child_stats;
+	struct bwg_child bwg_array[RTE_MAX_ETHPORTS];
+	uint16_t child_count;
 	uint64_t tx_bytes;
 
 	uint8_t update_stats = 0;
-	uint16_t slave_id;
+	uint16_t child_id;
 	uint16_t i;
 
-	internals->slave_update_idx++;
+	internals->child_update_idx++;
 
 
-	if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+	if (internals->child_update_idx >= REORDER_PERIOD_MS)
 		update_stats = 1;
 
-	for (i = 0; i < internals->active_slave_count; i++) {
-		slave_id = internals->active_slaves[i];
-		rte_eth_stats_get(slave_id, &slave_stats);
-		tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
-		bandwidth_left(slave_id, tx_bytes,
-				internals->slave_update_idx, &bwg_array[i]);
-		bwg_array[i].slave = slave_id;
+	for (i = 0; i < internals->active_child_count; i++) {
+		child_id = internals->active_children[i];
+		rte_eth_stats_get(child_id, &child_stats);
+		tx_bytes = child_stats.obytes - tlb_last_obytets[child_id];
+		bandwidth_left(child_id, tx_bytes,
+				internals->child_update_idx, &bwg_array[i]);
+		bwg_array[i].child = child_id;
 
 		if (update_stats) {
-			tlb_last_obytets[slave_id] = slave_stats.obytes;
+			tlb_last_obytets[child_id] = child_stats.obytes;
 		}
 	}
 
 	if (update_stats == 1)
-		internals->slave_update_idx = 0;
+		internals->child_update_idx = 0;
 
-	slave_count = i;
-	qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
-	for (i = 0; i < slave_count; i++)
-		internals->tlb_slaves_order[i] = bwg_array[i].slave;
+	child_count = i;
+	qsort(bwg_array, child_count, sizeof(bwg_array[0]), bandwidth_cmp);
+	for (i = 0; i < child_count; i++)
+		internals->tlb_children_order[i] = bwg_array[i].child;
 
-	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_child_cb,
 			(struct bond_dev_private *)internals);
 }
 
@@ -937,29 +937,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint16_t num_tx_total = 0, num_tx_prep;
 	uint16_t i, j;
 
-	uint16_t num_of_slaves = internals->active_slave_count;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t num_of_children = internals->active_child_count;
+	uint16_t children[RTE_MAX_ETHPORTS];
 
 	struct rte_ether_hdr *ether_hdr;
-	struct rte_ether_addr primary_slave_addr;
-	struct rte_ether_addr active_slave_addr;
+	struct rte_ether_addr primary_child_addr;
+	struct rte_ether_addr active_child_addr;
 
-	if (num_of_slaves < 1)
+	if (num_of_children < 1)
 		return num_tx_total;
 
-	memcpy(slaves, internals->tlb_slaves_order,
-				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+	memcpy(children, internals->tlb_children_order,
+				sizeof(internals->tlb_children_order[0]) * num_of_children);
 
 
-	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+	rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_child_addr);
 
 	if (nb_pkts > 3) {
 		for (i = 0; i < 3; i++)
 			rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
 	}
 
-	for (i = 0; i < num_of_slaves; i++) {
-		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+	for (i = 0; i < num_of_children; i++) {
+		rte_eth_macaddr_get(children[i], &active_child_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
 				rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +967,17 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			ether_hdr = rte_pktmbuf_mtod(bufs[j],
 						struct rte_ether_hdr *);
 			if (rte_is_same_ether_addr(&ether_hdr->src_addr,
-							&primary_slave_addr))
-				rte_ether_addr_copy(&active_slave_addr,
+							&primary_child_addr))
+				rte_ether_addr_copy(&active_child_addr,
 						&ether_hdr->src_addr);
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
-					mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+					mode6_debug("TX IPv4:", ether_hdr, children[i], &burstnumberTX);
 #endif
 		}
 
-		num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+		num_tx_prep = rte_eth_tx_prepare(children[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, nb_pkts - num_tx_total);
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_total += rte_eth_tx_burst(children[i], bd_tx_q->queue_id,
 				bufs + num_tx_total, num_tx_prep);
 
 		if (num_tx_total == nb_pkts)
@@ -990,13 +990,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 void
 bond_tlb_disable(struct bond_dev_private *internals)
 {
-	rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+	rte_eal_alarm_cancel(bond_ethdev_update_tlb_child_cb, internals);
 }
 
 void
 bond_tlb_enable(struct bond_dev_private *internals)
 {
-	bond_ethdev_update_tlb_slave_cb(internals);
+	bond_ethdev_update_tlb_child_cb(internals);
 }
 
 static uint16_t
@@ -1011,11 +1011,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct client_data *client_info;
 
 	/*
-	 * We create transmit buffers for every slave and one additional to send
+	 * We create transmit buffers for every child and one additional to send
 	 * through tlb. In worst case every packet will be send on one port.
 	 */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
-	uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+	struct rte_mbuf *child_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+	uint16_t child_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
 
 	/*
 	 * We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1029,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	uint16_t num_send, num_not_send = 0;
 	uint16_t num_tx_total = 0;
-	uint16_t slave_idx;
+	uint16_t child_idx;
 
 	int i, j;
 
@@ -1040,19 +1040,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		offset = get_vlan_offset(eth_h, &ether_type);
 
 		if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
-			slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+			child_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
 
 			/* Change src mac in eth header */
-			rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+			rte_eth_macaddr_get(child_idx, &eth_h->src_addr);
 
-			/* Add packet to slave tx buffer */
-			slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
-			slave_bufs_pkts[slave_idx]++;
+			/* Add packet to child tx buffer */
+			child_bufs[child_idx][child_bufs_pkts[child_idx]] = bufs[i];
+			child_bufs_pkts[child_idx]++;
 		} else {
 			/* If packet is not ARP, send it with TLB policy */
-			slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+			child_bufs[RTE_MAX_ETHPORTS][child_bufs_pkts[RTE_MAX_ETHPORTS]] =
 					bufs[i];
-			slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+			child_bufs_pkts[RTE_MAX_ETHPORTS]++;
 		}
 	}
 
@@ -1062,7 +1062,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			client_info = &internals->mode6.client_table[i];
 
 			if (client_info->in_use) {
-				/* Allocate new packet to send ARP update on current slave */
+				/* Allocate new packet to send ARP update on current child */
 				upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
 				if (upd_pkt == NULL) {
 					RTE_BOND_LOG(ERR,
@@ -1076,36 +1076,36 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				upd_pkt->data_len = pkt_size;
 				upd_pkt->pkt_len = pkt_size;
 
-				slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+				child_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
 						internals);
 
 				/* Add packet to update tx buffer */
-				update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
-				update_bufs_pkts[slave_idx]++;
+				update_bufs[child_idx][update_bufs_pkts[child_idx]] = upd_pkt;
+				update_bufs_pkts[child_idx]++;
 			}
 		}
 		internals->mode6.ntt = 0;
 	}
 
-	/* Send ARP packets on proper slaves */
+	/* Send ARP packets on proper children */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
-		if (slave_bufs_pkts[i] > 0) {
+		if (child_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
-					slave_bufs[i], slave_bufs_pkts[i]);
+					child_bufs[i], child_bufs_pkts[i]);
 			num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
-					slave_bufs[i], num_send);
-			for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+					child_bufs[i], num_send);
+			for (j = 0; j < child_bufs_pkts[i] - num_send; j++) {
 				bufs[nb_pkts - 1 - num_not_send - j] =
-						slave_bufs[i][nb_pkts - 1 - j];
+						child_bufs[i][nb_pkts - 1 - j];
 			}
 
 			num_tx_total += num_send;
-			num_not_send += slave_bufs_pkts[i] - num_send;
+			num_not_send += child_bufs_pkts[i] - num_send;
 
 #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
 	/* Print TX stats including update packets */
-			for (j = 0; j < slave_bufs_pkts[i]; j++) {
-				eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+			for (j = 0; j < child_bufs_pkts[i]; j++) {
+				eth_h = rte_pktmbuf_mtod(child_bufs[i][j],
 							struct rte_ether_hdr *);
 				mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
 			}
@@ -1113,7 +1113,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 	}
 
-	/* Send update packets on proper slaves */
+	/* Send update packets on proper children */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
 		if (update_bufs_pkts[i] > 0) {
 			num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1134,14 +1134,14 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 
 	/* Send non-ARP packets using tlb policy */
-	if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+	if (child_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
 		num_send = bond_ethdev_tx_burst_tlb(queue,
-				slave_bufs[RTE_MAX_ETHPORTS],
-				slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+				child_bufs[RTE_MAX_ETHPORTS],
+				child_bufs_pkts[RTE_MAX_ETHPORTS]);
 
-		for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+		for (j = 0; j < child_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
 			bufs[nb_pkts - 1 - num_not_send - j] =
-					slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+					child_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
 		}
 
 		num_tx_total += num_send;
@@ -1152,59 +1152,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 static inline uint16_t
 tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
-		 uint16_t *slave_port_ids, uint16_t slave_count)
+		 uint16_t *child_port_ids, uint16_t child_count)
 {
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	/* Array to sort mbufs for transmission on each slave into */
-	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
-	/* Number of mbufs for transmission on each slave */
-	uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
-	/* Mapping array generated by hash function to map mbufs to slaves */
-	uint16_t bufs_slave_port_idxs[nb_bufs];
+	/* Array to sort mbufs for transmission on each child into */
+	struct rte_mbuf *child_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+	/* Number of mbufs for transmission on each child */
+	uint16_t child_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+	/* Mapping array generated by hash function to map mbufs to children */
+	uint16_t bufs_child_port_idxs[nb_bufs];
 
-	uint16_t slave_tx_count;
+	uint16_t child_tx_count;
 	uint16_t total_tx_count = 0, total_tx_fail_count = 0;
 
 	uint16_t i;
 
 	/*
-	 * Populate slaves mbuf with the packets which are to be sent on it
-	 * selecting output slave using hash based on xmit policy
+	 * Populate children mbuf with the packets which are to be sent on it
+	 * selecting output child using hash based on xmit policy
 	 */
-	internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
-			bufs_slave_port_idxs);
+	internals->burst_xmit_hash(bufs, nb_bufs, child_count,
+			bufs_child_port_idxs);
 
 	for (i = 0; i < nb_bufs; i++) {
-		/* Populate slave mbuf arrays with mbufs for that slave. */
-		uint16_t slave_idx = bufs_slave_port_idxs[i];
+		/* Populate child mbuf arrays with mbufs for that child. */
+		uint16_t child_idx = bufs_child_port_idxs[i];
 
-		slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+		child_bufs[child_idx][child_nb_bufs[child_idx]++] = bufs[i];
 	}
 
-	/* Send packet burst on each slave device */
-	for (i = 0; i < slave_count; i++) {
-		if (slave_nb_bufs[i] == 0)
+	/* Send packet burst on each child device */
+	for (i = 0; i < child_count; i++) {
+		if (child_nb_bufs[i] == 0)
 			continue;
 
-		slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_nb_bufs[i]);
-		slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-				bd_tx_q->queue_id, slave_bufs[i],
-				slave_tx_count);
+		child_tx_count = rte_eth_tx_prepare(child_port_ids[i],
+				bd_tx_q->queue_id, child_bufs[i],
+				child_nb_bufs[i]);
+		child_tx_count = rte_eth_tx_burst(child_port_ids[i],
+				bd_tx_q->queue_id, child_bufs[i],
+				child_tx_count);
 
-		total_tx_count += slave_tx_count;
+		total_tx_count += child_tx_count;
 
 		/* If tx burst fails move packets to end of bufs */
-		if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
-			int slave_tx_fail_count = slave_nb_bufs[i] -
-					slave_tx_count;
-			total_tx_fail_count += slave_tx_fail_count;
+		if (unlikely(child_tx_count < child_nb_bufs[i])) {
+			int child_tx_fail_count = child_nb_bufs[i] -
+					child_tx_count;
+			total_tx_fail_count += child_tx_fail_count;
 			memcpy(&bufs[nb_bufs - total_tx_fail_count],
-			       &slave_bufs[i][slave_tx_count],
-			       slave_tx_fail_count * sizeof(bufs[0]));
+			       &child_bufs[i][child_tx_count],
+			       child_tx_fail_count * sizeof(bufs[0]));
 		}
 	}
 
@@ -1218,23 +1218,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t child_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t child_count;
 
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy child list to protect against child up/down changes during tx
 	 * bursting
 	 */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	child_count = internals->active_child_count;
+	if (unlikely(child_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
-	return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
-				slave_count);
+	memcpy(child_port_ids, internals->active_children,
+			sizeof(child_port_ids[0]) * child_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, child_port_ids,
+				child_count);
 }
 
 static inline uint16_t
@@ -1244,31 +1244,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
 	struct bond_dev_private *internals = bd_tx_q->dev_private;
 
-	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t slave_count;
+	uint16_t child_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t child_count;
 
-	uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
-	uint16_t dist_slave_count;
+	uint16_t dist_child_port_ids[RTE_MAX_ETHPORTS];
+	uint16_t dist_child_count;
 
-	uint16_t slave_tx_count;
+	uint16_t child_tx_count;
 
 	uint16_t i;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy child list to protect against child up/down changes during tx
 	 * bursting */
-	slave_count = internals->active_slave_count;
-	if (unlikely(slave_count < 1))
+	child_count = internals->active_child_count;
+	if (unlikely(child_count < 1))
 		return 0;
 
-	memcpy(slave_port_ids, internals->active_slaves,
-			sizeof(slave_port_ids[0]) * slave_count);
+	memcpy(child_port_ids, internals->active_children,
+			sizeof(child_port_ids[0]) * child_count);
 
 	if (dedicated_txq)
 		goto skip_tx_ring;
 
 	/* Check for LACP control packets and send if available */
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	for (i = 0; i < child_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[child_port_ids[i]];
 		struct rte_mbuf *ctrl_pkt = NULL;
 
 		if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1276,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 
 		if (rte_ring_dequeue(port->tx_ring,
 				     (void **)&ctrl_pkt) != -ENOENT) {
-			slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+			child_tx_count = rte_eth_tx_prepare(child_port_ids[i],
 					bd_tx_q->queue_id, &ctrl_pkt, 1);
-			slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
-					bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+			child_tx_count = rte_eth_tx_burst(child_port_ids[i],
+					bd_tx_q->queue_id, &ctrl_pkt, child_tx_count);
 			/*
 			 * re-enqueue LAG control plane packets to buffering
 			 * ring if transmission fails so the packet isn't lost.
 			 */
-			if (slave_tx_count != 1)
+			if (child_tx_count != 1)
 				rte_ring_enqueue(port->tx_ring,	ctrl_pkt);
 		}
 	}
@@ -1293,20 +1293,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
 	if (unlikely(nb_bufs == 0))
 		return 0;
 
-	dist_slave_count = 0;
-	for (i = 0; i < slave_count; i++) {
-		struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+	dist_child_count = 0;
+	for (i = 0; i < child_count; i++) {
+		struct port *port = &bond_mode_8023ad_ports[child_port_ids[i]];
 
 		if (ACTOR_STATE(port, DISTRIBUTING))
-			dist_slave_port_ids[dist_slave_count++] =
-					slave_port_ids[i];
+			dist_child_port_ids[dist_child_count++] =
+					child_port_ids[i];
 	}
 
-	if (unlikely(dist_slave_count < 1))
+	if (unlikely(dist_child_count < 1))
 		return 0;
 
-	return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
-				dist_slave_count);
+	return tx_burst_balance(queue, bufs, nb_bufs, dist_child_port_ids,
+				dist_child_count);
 }
 
 static uint16_t
@@ -1330,78 +1330,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
 
-	uint16_t slaves[RTE_MAX_ETHPORTS];
+	uint16_t children[RTE_MAX_ETHPORTS];
 	uint8_t tx_failed_flag = 0;
-	uint16_t num_of_slaves;
+	uint16_t num_of_children;
 
 	uint16_t max_nb_of_tx_pkts = 0;
 
-	int slave_tx_total[RTE_MAX_ETHPORTS];
-	int i, most_successful_tx_slave = -1;
+	int child_tx_total[RTE_MAX_ETHPORTS];
+	int i, most_successful_tx_child = -1;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
+	/* Copy child list to protect against child up/down changes during tx
 	 * bursting */
-	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
+	num_of_children = internals->active_child_count;
+	memcpy(children, internals->active_children,
+			sizeof(internals->active_children[0]) * num_of_children);
 
-	if (num_of_slaves < 1)
+	if (num_of_children < 1)
 		return 0;
 
 	/* It is rare that bond different PMDs together, so just call tx-prepare once */
-	nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+	nb_pkts = rte_eth_tx_prepare(children[0], bd_tx_q->queue_id, bufs, nb_pkts);
 
 	/* Increment reference count on mbufs */
 	for (i = 0; i < nb_pkts; i++)
-		rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+		rte_pktmbuf_refcnt_update(bufs[i], num_of_children - 1);
 
-	/* Transmit burst on each active slave */
-	for (i = 0; i < num_of_slaves; i++) {
-		slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+	/* Transmit burst on each active child */
+	for (i = 0; i < num_of_children; i++) {
+		child_tx_total[i] = rte_eth_tx_burst(children[i], bd_tx_q->queue_id,
 					bufs, nb_pkts);
 
-		if (unlikely(slave_tx_total[i] < nb_pkts))
+		if (unlikely(child_tx_total[i] < nb_pkts))
 			tx_failed_flag = 1;
 
-		/* record the value and slave index for the slave which transmits the
+		/* record the value and child index for the child which transmits the
 		 * maximum number of packets */
-		if (slave_tx_total[i] > max_nb_of_tx_pkts) {
-			max_nb_of_tx_pkts = slave_tx_total[i];
-			most_successful_tx_slave = i;
+		if (child_tx_total[i] > max_nb_of_tx_pkts) {
+			max_nb_of_tx_pkts = child_tx_total[i];
+			most_successful_tx_child = i;
 		}
 	}
 
-	/* if slaves fail to transmit packets from burst, the calling application
+	/* if children fail to transmit packets from burst, the calling application
 	 * is not expected to know about multiple references to packets so we must
-	 * handle failures of all packets except those of the most successful slave
+	 * handle failures of all packets except those of the most successful child
 	 */
 	if (unlikely(tx_failed_flag))
-		for (i = 0; i < num_of_slaves; i++)
-			if (i != most_successful_tx_slave)
-				while (slave_tx_total[i] < nb_pkts)
-					rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+		for (i = 0; i < num_of_children; i++)
+			if (i != most_successful_tx_child)
+				while (child_tx_total[i] < nb_pkts)
+					rte_pktmbuf_free(bufs[child_tx_total[i]++]);
 
 	return max_nb_of_tx_pkts;
 }
 
 static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *child_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
 		/**
 		 * If in mode 4 then save the link properties of the first
-		 * slave, all subsequent slaves must match these properties
+		 * child, all subsequent children must match these properties
 		 */
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.child_link;
 
-		bond_link->link_autoneg = slave_link->link_autoneg;
-		bond_link->link_duplex = slave_link->link_duplex;
-		bond_link->link_speed = slave_link->link_speed;
+		bond_link->link_autoneg = child_link->link_autoneg;
+		bond_link->link_duplex = child_link->link_duplex;
+		bond_link->link_speed = child_link->link_speed;
 	} else {
 		/**
 		 * In any other mode the link properties are set to default
@@ -1414,16 +1414,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 
 static int
 link_properties_valid(struct rte_eth_dev *ethdev,
-		struct rte_eth_link *slave_link)
+		struct rte_eth_link *child_link)
 {
 	struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
 
 	if (bond_ctx->mode == BONDING_MODE_8023AD) {
-		struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+		struct rte_eth_link *bond_link = &bond_ctx->mode4.child_link;
 
-		if (bond_link->link_duplex != slave_link->link_duplex ||
-			bond_link->link_autoneg != slave_link->link_autoneg ||
-			bond_link->link_speed != slave_link->link_speed)
+		if (bond_link->link_duplex != child_link->link_duplex ||
+			bond_link->link_autoneg != child_link->link_autoneg ||
+			bond_link->link_speed != child_link->link_speed)
 			return -1;
 	}
 
@@ -1480,11 +1480,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
 static const struct rte_ether_addr null_mac_addr;
 
 /*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the child
  */
 int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+child_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t child_port_id)
 {
 	int i, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1494,11 +1494,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+		ret = rte_eth_dev_mac_addr_add(child_port_id, mac_addr, 0);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i > 0; i--)
-				rte_eth_dev_mac_addr_remove(slave_port_id,
+				rte_eth_dev_mac_addr_remove(child_port_id,
 					&bonded_eth_dev->data->mac_addrs[i]);
 			return ret;
 		}
@@ -1508,11 +1508,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 /*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the child
  */
 int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
-		uint16_t slave_port_id)
+child_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+		uint16_t child_port_id)
 {
 	int i, rc, ret;
 	struct rte_ether_addr *mac_addr;
@@ -1523,7 +1523,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 		if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
 			break;
 
-		ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+		ret = rte_eth_dev_mac_addr_remove(child_port_id, mac_addr);
 		/* save only the first error */
 		if (ret < 0 && rc == 0)
 			rc = ret;
@@ -1533,26 +1533,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_children_update(struct rte_eth_dev *bonded_eth_dev)
 {
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 	bool set;
 	int i;
 
-	/* Update slave devices MAC addresses */
-	if (internals->slave_count < 1)
+	/* Update child devices MAC addresses */
+	if (internals->child_count < 1)
 		return -1;
 
 	switch (internals->mode) {
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
-		for (i = 0; i < internals->slave_count; i++) {
+		for (i = 0; i < internals->child_count; i++) {
 			if (rte_eth_dev_default_mac_addr_set(
-					internals->slaves[i].port_id,
+					internals->children[i].port_id,
 					bonded_eth_dev->data->mac_addrs)) {
 				RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-						internals->slaves[i].port_id);
+						internals->children[i].port_id);
 				return -1;
 			}
 		}
@@ -1565,8 +1565,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 	case BONDING_MODE_ALB:
 	default:
 		set = true;
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id ==
+		for (i = 0; i < internals->child_count; i++) {
+			if (internals->children[i].port_id ==
 					internals->current_primary_port) {
 				if (rte_eth_dev_default_mac_addr_set(
 						internals->current_primary_port,
@@ -1577,10 +1577,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
 				}
 			} else {
 				if (rte_eth_dev_default_mac_addr_set(
-						internals->slaves[i].port_id,
-						&internals->slaves[i].persisted_mac_addr)) {
+						internals->children[i].port_id,
+						&internals->children[i].persisted_mac_addr)) {
 					RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
-							internals->slaves[i].port_id);
+							internals->children[i].port_id);
 				}
 			}
 		}
@@ -1655,55 +1655,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
 
 
 static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+child_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *child_eth_dev)
 {
 	int errval = 0;
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
-	struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+	struct port *port = &bond_mode_8023ad_ports[child_eth_dev->data->port_id];
 
 	if (port->slow_pool == NULL) {
 		char mem_name[256];
-		int slave_id = slave_eth_dev->data->port_id;
+		int child_id = child_eth_dev->data->port_id;
 
-		snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
-				slave_id);
+		snprintf(mem_name, RTE_DIM(mem_name), "child_port%u_slow_pool",
+				child_id);
 		port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
 			250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-			slave_eth_dev->data->numa_node);
+			child_eth_dev->data->numa_node);
 
 		/* Any memory allocation failure in initialization is critical because
 		 * resources can't be free, so reinitialization is impossible. */
 		if (port->slow_pool == NULL) {
-			rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
-				slave_id, mem_name, rte_strerror(rte_errno));
+			rte_panic("Child %u: Failed to create memory pool '%s': %s\n",
+				child_id, mem_name, rte_strerror(rte_errno));
 		}
 	}
 
 	if (internals->mode4.dedicated_queues.enabled == 1) {
 		/* Configure slow Rx queue */
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_rx_queue_setup(child_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.rx_qid, 128,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(child_eth_dev->data->port_id),
 				NULL, port->slow_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id,
+					child_eth_dev->data->port_id,
 					internals->mode4.dedicated_queues.rx_qid,
 					errval);
 			return errval;
 		}
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+		errval = rte_eth_tx_queue_setup(child_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid, 512,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(child_eth_dev->data->port_id),
 				NULL);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id,
+				child_eth_dev->data->port_id,
 				internals->mode4.dedicated_queues.tx_qid,
 				errval);
 			return errval;
@@ -1713,8 +1713,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
 }
 
 int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+child_configure(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *child_eth_dev)
 {
 	uint16_t nb_rx_queues;
 	uint16_t nb_tx_queues;
@@ -1723,45 +1723,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 
-	/* Stop slave */
-	errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+	/* Stop child */
+	errval = rte_eth_dev_stop(child_eth_dev->data->port_id);
 	if (errval != 0)
 		RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
-			     slave_eth_dev->data->port_id, errval);
+			     child_eth_dev->data->port_id, errval);
 
-	/* Enable interrupts on slave device if supported */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+	/* Enable interrupts on child device if supported */
+	if (child_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+		child_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
-	/* If RSS is enabled for bonding, try to enable it for slaves  */
+	/* If RSS is enabled for bonding, try to enable it for children  */
 	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		/* rss_key won't be empty if RSS is configured in bonded dev */
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+		child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+		child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
 					internals->rss_key;
 
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+		child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
 				bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		child_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	} else {
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
-		slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+		child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+		child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+		child_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+		child_eth_dev->data->dev_conf.rxmode.mq_mode =
 				bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
 	}
 
-	slave_eth_dev->data->dev_conf.rxmode.mtu =
+	child_eth_dev->data->dev_conf.rxmode.mtu =
 			bonded_eth_dev->data->dev_conf.rxmode.mtu;
-	slave_eth_dev->data->dev_conf.link_speeds =
+	child_eth_dev->data->dev_conf.link_speeds =
 			bonded_eth_dev->data->dev_conf.link_speeds;
 
-	slave_eth_dev->data->dev_conf.txmode.offloads =
+	child_eth_dev->data->dev_conf.txmode.offloads =
 			bonded_eth_dev->data->dev_conf.txmode.offloads;
 
-	slave_eth_dev->data->dev_conf.rxmode.offloads =
+	child_eth_dev->data->dev_conf.rxmode.offloads =
 			bonded_eth_dev->data->dev_conf.rxmode.offloads;
 
 	nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1775,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* Configure device */
-	errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_configure(child_eth_dev->data->port_id,
 			nb_rx_queues, nb_tx_queues,
-			&(slave_eth_dev->data->dev_conf));
+			&(child_eth_dev->data->dev_conf));
 	if (errval != 0) {
-		RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+		RTE_BOND_LOG(ERR, "Cannot configure child device: port %u, err (%d)",
+				child_eth_dev->data->port_id, errval);
 		return errval;
 	}
 
-	errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+	errval = rte_eth_dev_set_mtu(child_eth_dev->data->port_id,
 				     bonded_eth_dev->data->mtu);
 	if (errval != 0 && errval != -ENOTSUP) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				child_eth_dev->data->port_id, errval);
 		return errval;
 	}
 	return 0;
 }
 
 int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
-		struct rte_eth_dev *slave_eth_dev)
+child_start(struct rte_eth_dev *bonded_eth_dev,
+		struct rte_eth_dev *child_eth_dev)
 {
 	int errval = 0;
 	struct bond_rx_queue *bd_rx_q;
@@ -1809,14 +1809,14 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
 		bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
 
-		errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_rx_queue_setup(child_eth_dev->data->port_id, q_id,
 				bd_rx_q->nb_rx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(child_eth_dev->data->port_id),
 				&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 					"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
-					slave_eth_dev->data->port_id, q_id, errval);
+					child_eth_dev->data->port_id, q_id, errval);
 			return errval;
 		}
 	}
@@ -1825,58 +1825,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 	for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
 		bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
 
-		errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+		errval = rte_eth_tx_queue_setup(child_eth_dev->data->port_id, q_id,
 				bd_tx_q->nb_tx_desc,
-				rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+				rte_eth_dev_socket_id(child_eth_dev->data->port_id),
 				&bd_tx_q->tx_conf);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
-				slave_eth_dev->data->port_id, q_id, errval);
+				child_eth_dev->data->port_id, q_id, errval);
 			return errval;
 		}
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
-		if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+		if (child_configure_slow_queue(bonded_eth_dev, child_eth_dev)
 				!= 0)
 			return errval;
 
 		errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				child_eth_dev->data->port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				child_eth_dev->data->port_id, errval);
 			return errval;
 		}
 
-		if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
-			errval = rte_flow_destroy(slave_eth_dev->data->port_id,
-					internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+		if (internals->mode4.dedicated_queues.flow[child_eth_dev->data->port_id] != NULL) {
+			errval = rte_flow_destroy(child_eth_dev->data->port_id,
+					internals->mode4.dedicated_queues.flow[child_eth_dev->data->port_id],
 					&flow_error);
 			RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				child_eth_dev->data->port_id, errval);
 		}
 	}
 
 	/* Start device */
-	errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+	errval = rte_eth_dev_start(child_eth_dev->data->port_id);
 	if (errval != 0) {
 		RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				child_eth_dev->data->port_id, errval);
 		return -1;
 	}
 
 	if (internals->mode == BONDING_MODE_8023AD &&
 			internals->mode4.dedicated_queues.enabled == 1) {
 		errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
-				slave_eth_dev->data->port_id);
+				child_eth_dev->data->port_id);
 		if (errval != 0) {
 			RTE_BOND_LOG(ERR,
 				"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
-				slave_eth_dev->data->port_id, errval);
+				child_eth_dev->data->port_id, errval);
 			return errval;
 		}
 	}
@@ -1888,27 +1888,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 
 		internals = bonded_eth_dev->data->dev_private;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+		for (i = 0; i < internals->child_count; i++) {
+			if (internals->children[i].port_id == child_eth_dev->data->port_id) {
 				errval = rte_eth_dev_rss_reta_update(
-						slave_eth_dev->data->port_id,
+						child_eth_dev->data->port_id,
 						&internals->reta_conf[0],
-						internals->slaves[i].reta_size);
+						internals->children[i].reta_size);
 				if (errval != 0) {
 					RTE_BOND_LOG(WARNING,
-						     "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+						     "rte_eth_dev_rss_reta_update on child port %d fails (err %d)."
 						     " RSS Configuration for bonding may be inconsistent.",
-						     slave_eth_dev->data->port_id, errval);
+						     child_eth_dev->data->port_id, errval);
 				}
 				break;
 			}
 		}
 	}
 
-	/* If lsc interrupt is set, check initial slave's link status */
-	if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
-		slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
-		bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+	/* If lsc interrupt is set, check initial child's link status */
+	if (child_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+		child_eth_dev->dev_ops->link_update(child_eth_dev, 0);
+		bond_ethdev_lsc_event_callback(child_eth_dev->data->port_id,
 			RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
 			NULL);
 	}
@@ -1917,75 +1917,75 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
 }
 
 void
-slave_remove(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+child_remove(struct bond_dev_private *internals,
+		struct rte_eth_dev *child_eth_dev)
 {
 	uint16_t i;
 
-	for (i = 0; i < internals->slave_count; i++)
-		if (internals->slaves[i].port_id ==
-				slave_eth_dev->data->port_id)
+	for (i = 0; i < internals->child_count; i++)
+		if (internals->children[i].port_id ==
+				child_eth_dev->data->port_id)
 			break;
 
-	if (i < (internals->slave_count - 1)) {
+	if (i < (internals->child_count - 1)) {
 		struct rte_flow *flow;
 
-		memmove(&internals->slaves[i], &internals->slaves[i + 1],
-				sizeof(internals->slaves[0]) *
-				(internals->slave_count - i - 1));
+		memmove(&internals->children[i], &internals->children[i + 1],
+				sizeof(internals->children[0]) *
+				(internals->child_count - i - 1));
 		TAILQ_FOREACH(flow, &internals->flow_list, next) {
 			memmove(&flow->flows[i], &flow->flows[i + 1],
 				sizeof(flow->flows[0]) *
-				(internals->slave_count - i - 1));
-			flow->flows[internals->slave_count - 1] = NULL;
+				(internals->child_count - i - 1));
+			flow->flows[internals->child_count - 1] = NULL;
 		}
 	}
 
-	internals->slave_count--;
+	internals->child_count--;
 
-	/* force reconfiguration of slave interfaces */
-	rte_eth_dev_internal_reset(slave_eth_dev);
+	/* force reconfiguration of child interfaces */
+	rte_eth_dev_internal_reset(child_eth_dev);
 }
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_child_link_status_change_monitor(void *cb_arg);
 
 void
-slave_add(struct bond_dev_private *internals,
-		struct rte_eth_dev *slave_eth_dev)
+child_add(struct bond_dev_private *internals,
+		struct rte_eth_dev *child_eth_dev)
 {
-	struct bond_slave_details *slave_details =
-			&internals->slaves[internals->slave_count];
+	struct bond_child_details *child_details =
+			&internals->children[internals->child_count];
 
-	slave_details->port_id = slave_eth_dev->data->port_id;
-	slave_details->last_link_status = 0;
+	child_details->port_id = child_eth_dev->data->port_id;
+	child_details->last_link_status = 0;
 
-	/* Mark slave devices that don't support interrupts so we can
+	/* Mark child devices that don't support interrupts so we can
 	 * compensate when we start the bond
 	 */
-	if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
-		slave_details->link_status_poll_enabled = 1;
+	if (!(child_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+		child_details->link_status_poll_enabled = 1;
 	}
 
-	slave_details->link_status_wait_to_complete = 0;
+	child_details->link_status_wait_to_complete = 0;
 	/* clean tlb_last_obytes when adding port for bonding device */
-	memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+	memcpy(&(child_details->persisted_mac_addr), child_eth_dev->data->mac_addrs,
 			sizeof(struct rte_ether_addr));
 }
 
 void
 bond_ethdev_primary_set(struct bond_dev_private *internals,
-		uint16_t slave_port_id)
+		uint16_t child_port_id)
 {
 	int i;
 
-	if (internals->active_slave_count < 1)
-		internals->current_primary_port = slave_port_id;
+	if (internals->active_child_count < 1)
+		internals->current_primary_port = child_port_id;
 	else
-		/* Search bonded device slave ports for new proposed primary port */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			if (internals->active_slaves[i] == slave_port_id)
-				internals->current_primary_port = slave_port_id;
+		/* Search bonded device child ports for new proposed primary port */
+		for (i = 0; i < internals->active_child_count; i++) {
+			if (internals->active_children[i] == child_port_id)
+				internals->current_primary_port = child_port_id;
 		}
 }
 
@@ -1998,9 +1998,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	struct bond_dev_private *internals;
 	int i;
 
-	/* slave eth dev will be started by bonded device */
+	/* child eth dev will be started by bonded device */
 	if (check_for_bonded_ethdev(eth_dev)) {
-		RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+		RTE_BOND_LOG(ERR, "User tried to explicitly start a child eth_dev (%d)",
 				eth_dev->data->port_id);
 		return -1;
 	}
@@ -2010,17 +2010,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 
 	internals = eth_dev->data->dev_private;
 
-	if (internals->slave_count == 0) {
-		RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+	if (internals->child_count == 0) {
+		RTE_BOND_LOG(ERR, "Cannot start port since there are no child devices");
 		goto out_err;
 	}
 
 	if (internals->user_defined_mac == 0) {
 		struct rte_ether_addr *new_mac_addr = NULL;
 
-		for (i = 0; i < internals->slave_count; i++)
-			if (internals->slaves[i].port_id == internals->primary_port)
-				new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+		for (i = 0; i < internals->child_count; i++)
+			if (internals->children[i].port_id == internals->primary_port)
+				new_mac_addr = &internals->children[i].persisted_mac_addr;
 
 		if (new_mac_addr == NULL)
 			goto out_err;
@@ -2042,28 +2042,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	}
 
 
-	/* Reconfigure each slave device if starting bonded device */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(eth_dev, slave_ethdev) != 0) {
+	/* Reconfigure each child device if starting bonded device */
+	for (i = 0; i < internals->child_count; i++) {
+		struct rte_eth_dev *child_ethdev =
+				&(rte_eth_devices[internals->children[i].port_id]);
+		if (child_configure(eth_dev, child_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to reconfigure slave device (%d)",
+				"bonded port (%d) failed to reconfigure child device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->children[i].port_id);
 			goto out_err;
 		}
-		if (slave_start(eth_dev, slave_ethdev) != 0) {
+		if (child_start(eth_dev, child_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to start slave device (%d)",
+				"bonded port (%d) failed to start child device (%d)",
 				eth_dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->children[i].port_id);
 			goto out_err;
 		}
-		/* We will need to poll for link status if any slave doesn't
+		/* We will need to poll for link status if any child doesn't
 		 * support interrupts
 		 */
-		if (internals->slaves[i].link_status_poll_enabled)
+		if (internals->children[i].link_status_poll_enabled)
 			internals->link_status_polling_enabled = 1;
 	}
 
@@ -2071,12 +2071,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 	if (internals->link_status_polling_enabled) {
 		rte_eal_alarm_set(
 			internals->link_status_polling_interval_ms * 1000,
-			bond_ethdev_slave_link_status_change_monitor,
+			bond_ethdev_child_link_status_change_monitor,
 			(void *)&rte_eth_devices[internals->port_id]);
 	}
 
-	/* Update all slave devices MACs*/
-	if (mac_address_slaves_update(eth_dev) != 0)
+	/* Update all child devices MACs*/
+	if (mac_address_children_update(eth_dev) != 0)
 		goto out_err;
 
 	if (internals->user_defined_primary_port)
@@ -2132,8 +2132,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 		bond_mode_8023ad_stop(eth_dev);
 
 		/* Discard all messages to/from mode 4 state machines */
-		for (i = 0; i < internals->active_slave_count; i++) {
-			port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+		for (i = 0; i < internals->active_child_count; i++) {
+			port = &bond_mode_8023ad_ports[internals->active_children[i]];
 
 			RTE_ASSERT(port->rx_ring != NULL);
 			while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2148,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 	if (internals->mode == BONDING_MODE_TLB ||
 			internals->mode == BONDING_MODE_ALB) {
 		bond_tlb_disable(internals);
-		for (i = 0; i < internals->active_slave_count; i++)
-			tlb_last_obytets[internals->active_slaves[i]] = 0;
+		for (i = 0; i < internals->active_child_count; i++)
+			tlb_last_obytets[internals->active_children[i]] = 0;
 	}
 
 	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t slave_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->child_count; i++) {
+		uint16_t child_id = internals->children[i].port_id;
 
-		internals->slaves[i].last_link_status = 0;
-		ret = rte_eth_dev_stop(slave_id);
+		internals->children[i].last_link_status = 0;
+		ret = rte_eth_dev_stop(child_id);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
-				     slave_id);
+				     child_id);
 			return ret;
 		}
 
-		/* active slaves need to be deactivated. */
-		if (find_slave_by_id(internals->active_slaves,
-				internals->active_slave_count, slave_id) !=
-					internals->active_slave_count)
-			deactivate_slave(eth_dev, slave_id);
+		/* active children need to be deactivated. */
+		if (find_child_by_id(internals->active_children,
+				internals->active_child_count, child_id) !=
+					internals->active_child_count)
+			deactivate_child(eth_dev, child_id);
 	}
 
 	return 0;
@@ -2188,8 +2188,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 	/* Flush flows in all back-end devices before removing them */
 	bond_flow_ops.flush(dev, &ferror);
 
-	while (internals->slave_count != skipped) {
-		uint16_t port_id = internals->slaves[skipped].port_id;
+	while (internals->child_count != skipped) {
+		uint16_t port_id = internals->children[skipped].port_id;
 		int ret;
 
 		ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2203,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
 			continue;
 		}
 
-		if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+		if (rte_eth_bond_child_remove(bond_port_id, port_id) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to remove port %d from bonded device %s",
 				     port_id, dev->device->name);
@@ -2246,7 +2246,7 @@ static int
 bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct bond_slave_details slave;
+	struct bond_child_details child;
 	int ret;
 
 	uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2259,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			RTE_ETHER_MAX_JUMBO_FRAME_LEN;
 
 	/* Max number of tx/rx queues that the bonded device can support is the
-	 * minimum values of the bonded slaves, as all slaves must be capable
+	 * minimum values of the bonded children, as all children must be capable
 	 * of supporting the same number of tx/rx queues.
 	 */
-	if (internals->slave_count > 0) {
-		struct rte_eth_dev_info slave_info;
+	if (internals->child_count > 0) {
+		struct rte_eth_dev_info child_info;
 		uint16_t idx;
 
-		for (idx = 0; idx < internals->slave_count; idx++) {
-			slave = internals->slaves[idx];
-			ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+		for (idx = 0; idx < internals->child_count; idx++) {
+			child = internals->children[idx];
+			ret = rte_eth_dev_info_get(child.port_id, &child_info);
 			if (ret != 0) {
 				RTE_BOND_LOG(ERR,
 					"%s: Error during getting device (port %u) info: %s\n",
 					__func__,
-					slave.port_id,
+					child.port_id,
 					strerror(-ret));
 
 				return ret;
 			}
 
-			if (slave_info.max_rx_queues < max_nb_rx_queues)
-				max_nb_rx_queues = slave_info.max_rx_queues;
+			if (child_info.max_rx_queues < max_nb_rx_queues)
+				max_nb_rx_queues = child_info.max_rx_queues;
 
-			if (slave_info.max_tx_queues < max_nb_tx_queues)
-				max_nb_tx_queues = slave_info.max_tx_queues;
+			if (child_info.max_tx_queues < max_nb_tx_queues)
+				max_nb_tx_queues = child_info.max_tx_queues;
 		}
 	}
 
@@ -2332,7 +2332,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	uint16_t i;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
-	/* don't do this while a slave is being added */
+	/* don't do this while a child is being added */
 	rte_spinlock_lock(&internals->lock);
 
 	if (on)
@@ -2340,13 +2340,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	else
 		rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		uint16_t port_id = internals->slaves[i].port_id;
+	for (i = 0; i < internals->child_count; i++) {
+		uint16_t port_id = internals->children[i].port_id;
 
 		res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
 		if (res == ENOTSUP)
 			RTE_BOND_LOG(WARNING,
-				     "Setting VLAN filter on slave port %u not supported.",
+				     "Setting VLAN filter on child port %u not supported.",
 				     port_id);
 	}
 
@@ -2424,14 +2424,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
 }
 
 static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_child_link_status_change_monitor(void *cb_arg)
 {
-	struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+	struct rte_eth_dev *bonded_ethdev, *child_ethdev;
 	struct bond_dev_private *internals;
 
-	/* Default value for polling slave found is true as we don't want to
+	/* Default value for polling child found is true as we don't want to
 	 * disable the polling thread if we cannot get the lock */
-	int i, polling_slave_found = 1;
+	int i, polling_child_found = 1;
 
 	if (cb_arg == NULL)
 		return;
@@ -2443,28 +2443,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		!internals->link_status_polling_enabled)
 		return;
 
-	/* If device is currently being configured then don't check slaves link
+	/* If device is currently being configured then don't check children link
 	 * status, wait until next period */
 	if (rte_spinlock_trylock(&internals->lock)) {
-		if (internals->slave_count > 0)
-			polling_slave_found = 0;
+		if (internals->child_count > 0)
+			polling_child_found = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (!internals->slaves[i].link_status_poll_enabled)
+		for (i = 0; i < internals->child_count; i++) {
+			if (!internals->children[i].link_status_poll_enabled)
 				continue;
 
-			slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
-			polling_slave_found = 1;
+			child_ethdev = &rte_eth_devices[internals->children[i].port_id];
+			polling_child_found = 1;
 
-			/* Update slave link status */
-			(*slave_ethdev->dev_ops->link_update)(slave_ethdev,
-					internals->slaves[i].link_status_wait_to_complete);
+			/* Update child link status */
+			(*child_ethdev->dev_ops->link_update)(child_ethdev,
+					internals->children[i].link_status_wait_to_complete);
 
 			/* if link status has changed since last checked then call lsc
 			 * event callback */
-			if (slave_ethdev->data->dev_link.link_status !=
-					internals->slaves[i].last_link_status) {
-				bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+			if (child_ethdev->data->dev_link.link_status !=
+					internals->children[i].last_link_status) {
+				bond_ethdev_lsc_event_callback(internals->children[i].port_id,
 						RTE_ETH_EVENT_INTR_LSC,
 						&bonded_ethdev->data->port_id,
 						NULL);
@@ -2473,10 +2473,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		rte_spinlock_unlock(&internals->lock);
 	}
 
-	if (polling_slave_found)
-		/* Set alarm to continue monitoring link status of slave ethdev's */
+	if (polling_child_found)
+		/* Set alarm to continue monitoring link status of child ethdevs */
 		rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
-				bond_ethdev_slave_link_status_change_monitor, cb_arg);
+				bond_ethdev_child_link_status_change_monitor, cb_arg);
 }
 
 static int
@@ -2485,7 +2485,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
 
 	struct bond_dev_private *bond_ctx;
-	struct rte_eth_link slave_link;
+	struct rte_eth_link child_link;
 
 	bool one_link_update_succeeded;
 	uint32_t idx;
@@ -2496,7 +2496,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
-			bond_ctx->active_slave_count == 0) {
+			bond_ctx->active_child_count == 0) {
 		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
@@ -2512,51 +2512,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	case BONDING_MODE_BROADCAST:
 		/**
 		 * Setting link speed to UINT32_MAX to ensure we pick up the
-		 * value of the first active slave
+		 * value of the first active child
 		 */
 		ethdev->data->dev_link.link_speed = UINT32_MAX;
 
 		/**
-		 * link speed is minimum value of all the slaves link speed as
-		 * packet loss will occur on this slave if transmission at rates
+		 * link speed is the minimum of all the children's link speeds, as
+		 * packet loss will occur on a child if transmission at rates
 		 * greater than this are attempted
 		 */
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					  &slave_link);
+		for (idx = 0; idx < bond_ctx->active_child_count; idx++) {
+			ret = link_update(bond_ctx->active_children[idx],
+					  &child_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
 					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Child (port %u) link get failed: %s",
+					bond_ctx->active_children[idx],
 					rte_strerror(-ret));
 				return 0;
 			}
 
-			if (slave_link.link_speed <
+			if (child_link.link_speed <
 					ethdev->data->dev_link.link_speed)
 				ethdev->data->dev_link.link_speed =
-						slave_link.link_speed;
+						child_link.link_speed;
 		}
 		break;
 	case BONDING_MODE_ACTIVE_BACKUP:
-		/* Current primary slave */
-		ret = link_update(bond_ctx->current_primary_port, &slave_link);
+		/* Current primary child */
+		ret = link_update(bond_ctx->current_primary_port, &child_link);
 		if (ret < 0) {
-			RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+			RTE_BOND_LOG(ERR, "Child (port %u) link get failed: %s",
 				bond_ctx->current_primary_port,
 				rte_strerror(-ret));
 			return 0;
 		}
 
-		ethdev->data->dev_link.link_speed = slave_link.link_speed;
+		ethdev->data->dev_link.link_speed = child_link.link_speed;
 		break;
 	case BONDING_MODE_8023AD:
 		ethdev->data->dev_link.link_autoneg =
-				bond_ctx->mode4.slave_link.link_autoneg;
+				bond_ctx->mode4.child_link.link_autoneg;
 		ethdev->data->dev_link.link_duplex =
-				bond_ctx->mode4.slave_link.link_duplex;
+				bond_ctx->mode4.child_link.link_duplex;
 		/* fall through */
 		/* to update link speed */
 	case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2566,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 	default:
 		/**
 		 * In theses mode the maximum theoretical link speed is the sum
-		 * of all the slaves
+		 * of all the children
 		 */
 		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
-		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
-			ret = link_update(bond_ctx->active_slaves[idx],
-					&slave_link);
+		for (idx = 0; idx < bond_ctx->active_child_count; idx++) {
+			ret = link_update(bond_ctx->active_children[idx],
+					&child_link);
 			if (ret < 0) {
 				RTE_BOND_LOG(ERR,
-					"Slave (port %u) link get failed: %s",
-					bond_ctx->active_slaves[idx],
+					"Child (port %u) link get failed: %s",
+					bond_ctx->active_children[idx],
 					rte_strerror(-ret));
 				continue;
 			}
 
 			one_link_update_succeeded = true;
 			ethdev->data->dev_link.link_speed +=
-					slave_link.link_speed;
+					child_link.link_speed;
 		}
 
 		if (!one_link_update_succeeded) {
-			RTE_BOND_LOG(ERR, "All slaves link get failed");
+			RTE_BOND_LOG(ERR, "All children link get failed");
 			return 0;
 		}
 	}
@@ -2602,27 +2602,27 @@ static int
 bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	struct bond_dev_private *internals = dev->data->dev_private;
-	struct rte_eth_stats slave_stats;
+	struct rte_eth_stats child_stats;
 	int i, j;
 
-	for (i = 0; i < internals->slave_count; i++) {
-		rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+	for (i = 0; i < internals->child_count; i++) {
+		rte_eth_stats_get(internals->children[i].port_id, &child_stats);
 
-		stats->ipackets += slave_stats.ipackets;
-		stats->opackets += slave_stats.opackets;
-		stats->ibytes += slave_stats.ibytes;
-		stats->obytes += slave_stats.obytes;
-		stats->imissed += slave_stats.imissed;
-		stats->ierrors += slave_stats.ierrors;
-		stats->oerrors += slave_stats.oerrors;
-		stats->rx_nombuf += slave_stats.rx_nombuf;
+		stats->ipackets += child_stats.ipackets;
+		stats->opackets += child_stats.opackets;
+		stats->ibytes += child_stats.ibytes;
+		stats->obytes += child_stats.obytes;
+		stats->imissed += child_stats.imissed;
+		stats->ierrors += child_stats.ierrors;
+		stats->oerrors += child_stats.oerrors;
+		stats->rx_nombuf += child_stats.rx_nombuf;
 
 		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-			stats->q_ipackets[j] += slave_stats.q_ipackets[j];
-			stats->q_opackets[j] += slave_stats.q_opackets[j];
-			stats->q_ibytes[j] += slave_stats.q_ibytes[j];
-			stats->q_obytes[j] += slave_stats.q_obytes[j];
-			stats->q_errors[j] += slave_stats.q_errors[j];
+			stats->q_ipackets[j] += child_stats.q_ipackets[j];
+			stats->q_opackets[j] += child_stats.q_opackets[j];
+			stats->q_ibytes[j] += child_stats.q_ibytes[j];
+			stats->q_obytes[j] += child_stats.q_obytes[j];
+			stats->q_errors[j] += child_stats.q_errors[j];
 		}
 
 	}
@@ -2638,8 +2638,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
 	int err;
 	int ret;
 
-	for (i = 0, err = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+	for (i = 0, err = 0; i < internals->child_count; i++) {
+		ret = rte_eth_stats_reset(internals->children[i].port_id);
 		if (ret != 0)
 			err = ret;
 	}
@@ -2656,15 +2656,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all children */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int child_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->child_count; i++) {
+			port_id = internals->children[i].port_id;
 
 			ret = rte_eth_promiscuous_enable(port_id);
 			if (ret != 0)
@@ -2672,23 +2672,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				child_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * on one child. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (child_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary child */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->child_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2710,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* Promiscuous mode is propagated to all slaves */
+	/* Promiscuous mode is propagated to all children */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int child_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->child_count; i++) {
+			port_id = internals->children[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
 					BOND_8023AD_FORCED_PROMISC) {
-				slave_ok++;
+				child_ok++;
 				continue;
 			}
 			ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2732,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 					"Failed to disable promiscuous mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				child_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * on one child. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (child_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* Promiscuous mode is propagated only to primary slave */
+	/* Promiscuous mode is propagated only to primary child */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch promisc when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->child_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2772,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As promiscuous mode is propagated to all slaves for these
+		/* As promiscuous mode is propagated to all children for these
 		 * mode, no need to update for bonding device.
 		 */
 		break;
@@ -2780,9 +2780,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As promiscuous mode is propagated only to primary slave
+		/* As promiscuous mode is propagated only to primary child
 		 * for these mode. When active/standby switchover, promiscuous
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary child according to bonding
 		 * device.
 		 */
 		if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2803,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all children */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int child_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->child_count; i++) {
+			port_id = internals->children[i].port_id;
 
 			ret = rte_eth_allmulticast_enable(port_id);
 			if (ret != 0)
@@ -2819,23 +2819,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
 					"Failed to enable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				child_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * on one child. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (child_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary child */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->child_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2857,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 	uint16_t port_id;
 
 	switch (internals->mode) {
-	/* allmulti mode is propagated to all slaves */
+	/* allmulti mode is propagated to all children */
 	case BONDING_MODE_ROUND_ROBIN:
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD: {
-		unsigned int slave_ok = 0;
+		unsigned int child_ok = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			uint16_t port_id = internals->slaves[i].port_id;
+		for (i = 0; i < internals->child_count; i++) {
+			uint16_t port_id = internals->children[i].port_id;
 
 			if (internals->mode == BONDING_MODE_8023AD &&
 			    bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2878,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
 					"Failed to disable allmulti mode for port %u: %s",
 					port_id, rte_strerror(-ret));
 			else
-				slave_ok++;
+				child_ok++;
 		}
 		/*
 		 * Report success if operation is successful on at least
-		 * on one slave. Otherwise return last error code.
+		 * on one child. Otherwise return last error code.
 		 */
-		if (slave_ok > 0)
+		if (child_ok > 0)
 			ret = 0;
 		break;
 	}
-	/* allmulti mode is propagated only to primary slave */
+	/* allmulti mode is propagated only to primary child */
 	case BONDING_MODE_ACTIVE_BACKUP:
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
 		/* Do not touch allmulti when there cannot be primary ports */
-		if (internals->slave_count == 0)
+		if (internals->child_count == 0)
 			break;
 		port_id = internals->current_primary_port;
 		ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2918,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_BALANCE:
 	case BONDING_MODE_BROADCAST:
 	case BONDING_MODE_8023AD:
-		/* As allmulticast mode is propagated to all slaves for these
+		/* As allmulticast mode is propagated to all children for these
 		 * mode, no need to update for bonding device.
 		 */
 		break;
@@ -2926,9 +2926,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
-		/* As allmulticast mode is propagated only to primary slave
+		/* As allmulticast mode is propagated only to primary child
 		 * for these mode. When active/standby switchover, allmulticast
-		 * mode should be set to new primary slave according to bonding
+		 * mode should be set to new primary child according to bonding
 		 * device.
 		 */
 		if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2961,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	int ret;
 
 	uint8_t lsc_flag = 0;
-	int valid_slave = 0;
-	uint16_t active_pos, slave_idx;
+	int valid_child = 0;
+	uint16_t active_pos, child_idx;
 	uint16_t i;
 
 	if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2979,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 	if (!bonded_eth_dev->data->dev_started)
 		return rc;
 
-	/* verify that port_id is a valid slave of bonded port */
-	for (i = 0; i < internals->slave_count; i++) {
-		if (internals->slaves[i].port_id == port_id) {
-			valid_slave = 1;
-			slave_idx = i;
+	/* verify that port_id is a valid child of bonded port */
+	for (i = 0; i < internals->child_count; i++) {
+		if (internals->children[i].port_id == port_id) {
+			valid_child = 1;
+			child_idx = i;
 			break;
 		}
 	}
 
-	if (!valid_slave)
+	if (!valid_child)
 		return rc;
 
 	/* Synchronize lsc callback parallel calls either by real link event
-	 * from the slaves PMDs or by the bonding PMD itself.
+	 * from the children PMDs or by the bonding PMD itself.
 	 */
 	rte_spinlock_lock(&internals->lsc_lock);
 
 	/* Search for port in active port list */
-	active_pos = find_slave_by_id(internals->active_slaves,
-			internals->active_slave_count, port_id);
+	active_pos = find_child_by_id(internals->active_children,
+			internals->active_child_count, port_id);
 
 	ret = rte_eth_link_get_nowait(port_id, &link);
 	if (ret < 0)
-		RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+		RTE_BOND_LOG(ERR, "Child (port %u) link get failed", port_id);
 
 	if (ret == 0 && link.link_status) {
-		if (active_pos < internals->active_slave_count)
+		if (active_pos < internals->active_child_count)
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
 		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
-					     "for slave %d in bonding mode %d",
+					     "for child %d in bonding mode %d",
 					     port_id, internals->mode);
 		} else {
-			/* inherit slave link properties */
+			/* inherit child link properties */
 			link_properties_set(bonded_eth_dev, &link);
 		}
 
-		/* If no active slave ports then set this port to be
+		/* If no active child ports then set this port to be
 		 * the primary port.
 		 */
-		if (internals->active_slave_count < 1) {
-			/* If first active slave, then change link status */
+		if (internals->active_child_count < 1) {
+			/* If first active child, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
 								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_children_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
 
-		activate_slave(bonded_eth_dev, port_id);
+		activate_child(bonded_eth_dev, port_id);
 
 		/* If the user has defined the primary port then default to
 		 * using it.
@@ -3043,24 +3043,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 				internals->primary_port == port_id)
 			bond_ethdev_primary_set(internals, port_id);
 	} else {
-		if (active_pos == internals->active_slave_count)
+		if (active_pos == internals->active_child_count)
 			goto link_update;
 
-		/* Remove from active slave list */
-		deactivate_slave(bonded_eth_dev, port_id);
+		/* Remove from active child list */
+		deactivate_child(bonded_eth_dev, port_id);
 
-		if (internals->active_slave_count < 1)
+		if (internals->active_child_count < 1)
 			lsc_flag = 1;
 
-		/* Update primary id, take first active slave from list or if none
+		/* Update primary id, take first active child from list or if none
 		 * available set to -1 */
 		if (port_id == internals->current_primary_port) {
-			if (internals->active_slave_count > 0)
+			if (internals->active_child_count > 0)
 				bond_ethdev_primary_set(internals,
-						internals->active_slaves[0]);
+						internals->active_children[0]);
 			else
 				internals->current_primary_port = internals->primary_port;
-			mac_address_slaves_update(bonded_eth_dev);
+			mac_address_children_update(bonded_eth_dev);
 			bond_ethdev_promiscuous_update(bonded_eth_dev);
 			bond_ethdev_allmulticast_update(bonded_eth_dev);
 		}
@@ -3069,10 +3069,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 link_update:
 	/**
 	 * Update bonded device link properties after any change to active
-	 * slaves
+	 * children
 	 */
 	bond_ethdev_link_update(bonded_eth_dev, 0);
-	internals->slaves[slave_idx].last_link_status = link.link_status;
+	internals->children[child_idx].last_link_status = link.link_status;
 
 	if (lsc_flag) {
 		/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3114,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 {
 	unsigned i, j;
 	int result = 0;
-	int slave_reta_size;
+	int child_reta_size;
 	unsigned reta_count;
 	struct bond_dev_private *internals = dev->data->dev_private;
 
@@ -3137,11 +3137,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
 				sizeof(internals->reta_conf[0]) * reta_count);
 
-	/* Propagate RETA over slaves */
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_reta_size = internals->slaves[i].reta_size;
-		result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
-				&internals->reta_conf[0], slave_reta_size);
+	/* Propagate RETA over children */
+	for (i = 0; i < internals->child_count; i++) {
+		child_reta_size = internals->children[i].reta_size;
+		result = rte_eth_dev_rss_reta_update(internals->children[i].port_id,
+				&internals->reta_conf[0], child_reta_size);
 		if (result < 0)
 			return result;
 	}
@@ -3194,8 +3194,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
 		bond_rss_conf.rss_key_len = internals->rss_key_len;
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+	for (i = 0; i < internals->child_count; i++) {
+		result = rte_eth_dev_rss_hash_update(internals->children[i].port_id,
 				&bond_rss_conf);
 		if (result < 0)
 			return result;
@@ -3221,21 +3221,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
 static int
 bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *child_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+	for (i = 0; i < internals->child_count; i++) {
+		child_eth_dev = &rte_eth_devices[internals->children[i].port_id];
+		if (*child_eth_dev->dev_ops->mtu_set == NULL) {
 			rte_spinlock_unlock(&internals->lock);
 			return -ENOTSUP;
 		}
 	}
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+	for (i = 0; i < internals->child_count; i++) {
+		ret = rte_eth_dev_set_mtu(internals->children[i].port_id, mtu);
 		if (ret < 0) {
 			rte_spinlock_unlock(&internals->lock);
 			return ret;
@@ -3271,29 +3271,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 			struct rte_ether_addr *mac_addr,
 			__rte_unused uint32_t index, uint32_t vmdq)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *child_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int ret, i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
-			 *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+	for (i = 0; i < internals->child_count; i++) {
+		child_eth_dev = &rte_eth_devices[internals->children[i].port_id];
+		if (*child_eth_dev->dev_ops->mac_addr_add == NULL ||
+			 *child_eth_dev->dev_ops->mac_addr_remove == NULL) {
 			ret = -ENOTSUP;
 			goto end;
 		}
 	}
 
-	for (i = 0; i < internals->slave_count; i++) {
-		ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+	for (i = 0; i < internals->child_count; i++) {
+		ret = rte_eth_dev_mac_addr_add(internals->children[i].port_id,
 				mac_addr, vmdq);
 		if (ret < 0) {
 			/* rollback */
 			for (i--; i >= 0; i--)
 				rte_eth_dev_mac_addr_remove(
-					internals->slaves[i].port_id, mac_addr);
+					internals->children[i].port_id, mac_addr);
 			goto end;
 		}
 	}
@@ -3307,22 +3307,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
 static void
 bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 {
-	struct rte_eth_dev *slave_eth_dev;
+	struct rte_eth_dev *child_eth_dev;
 	struct bond_dev_private *internals = dev->data->dev_private;
 	int i;
 
 	rte_spinlock_lock(&internals->lock);
 
-	for (i = 0; i < internals->slave_count; i++) {
-		slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
-		if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+	for (i = 0; i < internals->child_count; i++) {
+		child_eth_dev = &rte_eth_devices[internals->children[i].port_id];
+		if (*child_eth_dev->dev_ops->mac_addr_remove == NULL)
 			goto end;
 	}
 
 	struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
 
-	for (i = 0; i < internals->slave_count; i++)
-		rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+	for (i = 0; i < internals->child_count; i++)
+		rte_eth_dev_mac_addr_remove(internals->children[i].port_id,
 				mac_addr);
 
 end:
@@ -3402,30 +3402,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
 		fprintf(f, "\n");
 	}
 
-	if (internals->slave_count > 0) {
-		fprintf(f, "\tSlaves (%u): [", internals->slave_count);
-		for (i = 0; i < internals->slave_count - 1; i++)
-			fprintf(f, "%u ", internals->slaves[i].port_id);
+	if (internals->child_count > 0) {
+		fprintf(f, "\tChildren (%u): [", internals->child_count);
+		for (i = 0; i < internals->child_count - 1; i++)
+			fprintf(f, "%u ", internals->children[i].port_id);
 
-		fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+		fprintf(f, "%u]\n", internals->children[internals->child_count - 1].port_id);
 	} else {
-		fprintf(f, "\tSlaves: []\n");
+		fprintf(f, "\tChildren: []\n");
 	}
 
-	if (internals->active_slave_count > 0) {
-		fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
-		for (i = 0; i < internals->active_slave_count - 1; i++)
-			fprintf(f, "%u ", internals->active_slaves[i]);
+	if (internals->active_child_count > 0) {
+		fprintf(f, "\tActive Children (%u): [", internals->active_child_count);
+		for (i = 0; i < internals->active_child_count - 1; i++)
+			fprintf(f, "%u ", internals->active_children[i]);
 
-		fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+		fprintf(f, "%u]\n", internals->active_children[internals->active_child_count - 1]);
 
 	} else {
-		fprintf(f, "\tActive Slaves: []\n");
+		fprintf(f, "\tActive Children: []\n");
 	}
 
 	if (internals->user_defined_primary_port)
 		fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
-	if (internals->slave_count > 0)
+	if (internals->child_count > 0)
 		fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
 }
 
@@ -3471,7 +3471,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
 }
 
 static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_child(const struct rte_eth_bond_8023ad_child_info *info, FILE *f)
 {
 	char a_state[256] = { 0 };
 	char p_state[256] = { 0 };
@@ -3520,18 +3520,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
 static void
 dump_lacp(uint16_t port_id, FILE *f)
 {
-	struct rte_eth_bond_8023ad_slave_info slave_info;
+	struct rte_eth_bond_8023ad_child_info child_info;
 	struct rte_eth_bond_8023ad_conf port_conf;
-	uint16_t slaves[RTE_MAX_ETHPORTS];
-	int num_active_slaves;
+	uint16_t children[RTE_MAX_ETHPORTS];
+	int num_active_children;
 	int i, ret;
 
 	fprintf(f, "  - Lacp info:\n");
 
-	num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+	num_active_children = rte_eth_bond_active_children_get(port_id, children,
 			RTE_MAX_ETHPORTS);
-	if (num_active_slaves < 0) {
-		fprintf(f, "\tFailed to get active slave list for port %u\n",
+	if (num_active_children < 0) {
+		fprintf(f, "\tFailed to get active child list for port %u\n",
 				port_id);
 		return;
 	}
@@ -3545,16 +3545,16 @@ dump_lacp(uint16_t port_id, FILE *f)
 	}
 	dump_lacp_conf(&port_conf, f);
 
-	for (i = 0; i < num_active_slaves; i++) {
-		ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
-				&slave_info);
+	for (i = 0; i < num_active_children; i++) {
+		ret = rte_eth_bond_8023ad_child_info(port_id, children[i],
+				&child_info);
 		if (ret) {
-			fprintf(f, "\tGet slave device %u 8023ad info failed\n",
-				slaves[i]);
+			fprintf(f, "\tGet child device %u 8023ad info failed\n",
+				children[i]);
 			return;
 		}
-		fprintf(f, "\tSlave Port: %u\n", slaves[i]);
-		dump_lacp_slave(&slave_info, f);
+		fprintf(f, "\tChild Port: %u\n", children[i]);
+		dump_lacp_child(&child_info, f);
 	}
 }
 
@@ -3655,8 +3655,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->link_down_delay_ms = 0;
 	internals->link_up_delay_ms = 0;
 
-	internals->slave_count = 0;
-	internals->active_slave_count = 0;
+	internals->child_count = 0;
+	internals->active_child_count = 0;
 	internals->rx_offload_capa = 0;
 	internals->tx_offload_capa = 0;
 	internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3684,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->rx_desc_lim.nb_align = 1;
 	internals->tx_desc_lim.nb_align = 1;
 
-	memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
-	memset(internals->slaves, 0, sizeof(internals->slaves));
+	memset(internals->active_children, 0, sizeof(internals->active_children));
+	memset(internals->children, 0, sizeof(internals->children));
 
 	TAILQ_INIT(&internals->flow_list);
 	internals->flow_isolated_valid = 0;
@@ -3770,7 +3770,7 @@ bond_probe(struct rte_vdev_device *dev)
 	/* Parse link bonding mode */
 	if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
-				&bond_ethdev_parse_slave_mode_kvarg,
+				&bond_ethdev_parse_child_mode_kvarg,
 				&bonding_mode) != 0) {
 			RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
 					name);
@@ -3815,7 +3815,7 @@ bond_probe(struct rte_vdev_device *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				PMD_BOND_AGG_MODE_KVARG,
-				&bond_ethdev_parse_slave_agg_mode_kvarg,
+				&bond_ethdev_parse_child_agg_mode_kvarg,
 				&agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 					"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3865,7 @@ bond_remove(struct rte_vdev_device *dev)
 	RTE_ASSERT(eth_dev->device == &dev->device);
 
 	internals = eth_dev->data->dev_private;
-	if (internals->slave_count != 0)
+	if (internals->child_count != 0)
 		return -EBUSY;
 
 	if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3877,7 @@ bond_remove(struct rte_vdev_device *dev)
 	return ret;
 }
 
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the child portids after all the other pdev and vdev
  * have been allocated */
 static int
 bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3959,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		if ((link_speeds &
 		    (internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
-			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+			RTE_BOND_LOG(ERR, "the fixed speed is not supported by all child devices.");
 			return -EINVAL;
 		}
 		/*
@@ -4041,7 +4041,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
 		if (rte_kvargs_process(kvlist,
 				       PMD_BOND_AGG_MODE_KVARG,
-				       &bond_ethdev_parse_slave_agg_mode_kvarg,
+				       &bond_ethdev_parse_child_agg_mode_kvarg,
 				       &agg_mode) != 0) {
 			RTE_BOND_LOG(ERR,
 				     "Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4059,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	/* Parse/add slave ports to bonded device */
-	if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
-		struct bond_ethdev_slave_ports slave_ports;
+	/* Parse/add child ports to bonded device */
+	if (rte_kvargs_count(kvlist, PMD_BOND_CHILD_PORT_KVARG) > 0) {
+		struct bond_ethdev_child_ports child_ports;
 		unsigned i;
 
-		memset(&slave_ports, 0, sizeof(slave_ports));
+		memset(&child_ports, 0, sizeof(child_ports));
 
-		if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
-				       &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+		if (rte_kvargs_process(kvlist, PMD_BOND_CHILD_PORT_KVARG,
+				       &bond_ethdev_parse_child_port_kvarg, &child_ports) != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to parse slave ports for bonded device %s",
+				     "Failed to parse child ports for bonded device %s",
 				     name);
 			return -1;
 		}
 
-		for (i = 0; i < slave_ports.slave_count; i++) {
-			if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+		for (i = 0; i < child_ports.child_count; i++) {
+			if (rte_eth_bond_child_add(port_id, child_ports.children[i]) != 0) {
 				RTE_BOND_LOG(ERR,
-					     "Failed to add port %d as slave to bonded device %s",
-					     slave_ports.slaves[i], name);
+					     "Failed to add port %d as child to bonded device %s",
+					     child_ports.children[i], name);
 			}
 		}
 
 	} else {
-		RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+		RTE_BOND_LOG(INFO, "No children specified for bonded device %s", name);
 		return -1;
 	}
 
-	/* Parse/set primary slave port id*/
-	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+	/* Parse/set primary child port id*/
+	arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_CHILD_KVARG);
 	if (arg_count == 1) {
-		uint16_t primary_slave_port_id;
+		uint16_t primary_child_port_id;
 
 		if (rte_kvargs_process(kvlist,
-				       PMD_BOND_PRIMARY_SLAVE_KVARG,
-				       &bond_ethdev_parse_primary_slave_port_id_kvarg,
-				       &primary_slave_port_id) < 0) {
+				       PMD_BOND_PRIMARY_CHILD_KVARG,
+				       &bond_ethdev_parse_primary_child_port_id_kvarg,
+				       &primary_child_port_id) < 0) {
 			RTE_BOND_LOG(INFO,
-				     "Invalid primary slave port id specified for bonded device %s",
+				     "Invalid primary child port id specified for bonded device %s",
 				     name);
 			return -1;
 		}
 
 		/* Set balance mode transmit policy*/
-		if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+		if (rte_eth_bond_primary_set(port_id, primary_child_port_id)
 		    != 0) {
 			RTE_BOND_LOG(ERR,
-				     "Failed to set primary slave port %d on bonded device %s",
-				     primary_slave_port_id, name);
+				     "Failed to set primary child port %d on bonded device %s",
+				     primary_child_port_id, name);
 			return -1;
 		}
 	} else if (arg_count > 1) {
 		RTE_BOND_LOG(INFO,
-			     "Primary slave can be specified only once for bonded device %s",
+			     "Primary child can be specified only once for bonded device %s",
 			     name);
 		return -1;
 	}
@@ -4206,15 +4206,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 		return -1;
 	}
 
-	/* configure slaves so we can pass mtu setting */
-	for (i = 0; i < internals->slave_count; i++) {
-		struct rte_eth_dev *slave_ethdev =
-				&(rte_eth_devices[internals->slaves[i].port_id]);
-		if (slave_configure(dev, slave_ethdev) != 0) {
+	/* configure children so we can pass mtu setting */
+	for (i = 0; i < internals->child_count; i++) {
+		struct rte_eth_dev *child_ethdev =
+				&(rte_eth_devices[internals->children[i].port_id]);
+		if (child_configure(dev, child_ethdev) != 0) {
 			RTE_BOND_LOG(ERR,
-				"bonded port (%d) failed to configure slave device (%d)",
+				"bonded port (%d) failed to configure child device (%d)",
 				dev->data->port_id,
-				internals->slaves[i].port_id);
+				internals->children[i].port_id);
 			return -1;
 		}
 	}
@@ -4230,7 +4230,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
 RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
 
 RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
-	"slave=<ifc> "
+	"child=<ifc> "
 	"primary=<ifc> "
 	"mode=[0-6] "
 	"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e62..b31ed8d49689 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -3,6 +3,7 @@ DPDK_23 {
 
 	rte_eth_bond_8023ad_agg_selection_get;
 	rte_eth_bond_8023ad_agg_selection_set;
+	rte_eth_bond_8023ad_child_info;
 	rte_eth_bond_8023ad_conf_get;
 	rte_eth_bond_8023ad_dedicated_queues_disable;
 	rte_eth_bond_8023ad_dedicated_queues_enable;
@@ -12,8 +13,10 @@ DPDK_23 {
 	rte_eth_bond_8023ad_ext_distrib_get;
 	rte_eth_bond_8023ad_ext_slowtx;
 	rte_eth_bond_8023ad_setup;
-	rte_eth_bond_8023ad_slave_info;
-	rte_eth_bond_active_slaves_get;
+	rte_eth_bond_active_children_get;
+	rte_eth_bond_child_add;
+	rte_eth_bond_child_remove;
+	rte_eth_bond_children_get;
 	rte_eth_bond_create;
 	rte_eth_bond_free;
 	rte_eth_bond_link_monitoring_set;
@@ -23,9 +26,6 @@ DPDK_23 {
 	rte_eth_bond_mode_set;
 	rte_eth_bond_primary_get;
 	rte_eth_bond_primary_set;
-	rte_eth_bond_slave_add;
-	rte_eth_bond_slave_remove;
-	rte_eth_bond_slaves_get;
 	rte_eth_bond_xmit_policy_get;
 	rte_eth_bond_xmit_policy_set;
 
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39fa3..12de9c1f2901 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
 		":%02"PRIx8":%02"PRIx8":%02"PRIx8,	\
 		RTE_ETHER_ADDR_BYTES(&addr))
 
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t children[RTE_MAX_ETHPORTS];
+uint16_t children_count;
 
 static uint16_t BOND_PORT = 0xffff;
 
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
 };
 
 static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+child_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 {
 	int retval;
 	uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 		rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
 				"failed (res=%d)\n", BOND_PORT, retval);
 
-	for (i = 0; i < slaves_count; i++) {
-		if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
-			rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
-					slaves[i], BOND_PORT);
+	for (i = 0; i < children_count; i++) {
+		if (rte_eth_bond_child_add(BOND_PORT, children[i]) == -1)
+			rte_exit(-1, "Oooops! adding child (%u) to bond (%u) failed!\n",
+					children[i], BOND_PORT);
 
 	}
 
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 	if (retval < 0)
 		rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
 
-	printf("Waiting for slaves to become active...");
+	printf("Waiting for children to become active...");
 	while (wait_counter) {
-		uint16_t act_slaves[16] = {0};
-		if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
-				slaves_count) {
+		uint16_t act_children[16] = {0};
+		if (rte_eth_bond_active_children_get(BOND_PORT, act_children, 16) ==
+				children_count) {
 			printf("\n");
 			break;
 		}
 		sleep(1);
 		printf("...");
 		if (--wait_counter == 0)
-			rte_exit(-1, "\nFailed to activate slaves\n");
+			rte_exit(-1, "\nFailed to activate children\n");
 	}
 
 	retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 			"send IP	- sends one ARPrequest through bonding for IP.\n"
 			"start		- starts listening ARPs.\n"
 			"stop		- stops lcore_main.\n"
-			"show		- shows some bond info: ex. active slaves etc.\n"
+			"show		- shows some bond info: ex. active children etc.\n"
 			"help		- prints help.\n"
 			"quit		- terminate all threads and quit.\n"
 		       );
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 			    struct cmdline *cl,
 			    __rte_unused void *data)
 {
-	uint16_t slaves[16] = {0};
+	uint16_t children[16] = {0};
 	uint8_t len = 16;
 	struct rte_ether_addr addr;
 	uint16_t i;
 	int ret;
 
-	for (i = 0; i < slaves_count; i++) {
+	for (i = 0; i < children_count; i++) {
 		ret = rte_eth_macaddr_get(i, &addr);
 		if (ret != 0) {
 			cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
 
 	rte_spinlock_lock(&global_flag_stru_p->lock);
 	cmdline_printf(cl,
-			"Active_slaves:%d "
+			"Active_children:%d "
 			"packets received:Tot:%d Arp:%d IPv4:%d\n",
-			rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+			rte_eth_bond_active_children_get(BOND_PORT, children, len),
 			global_flag_stru_p->port_packets[0],
 			global_flag_stru_p->port_packets[1],
 			global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
 		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
 
 	/* initialize all ports */
-	slaves_count = nb_ports;
+	children_count = nb_ports;
 	RTE_ETH_FOREACH_DEV(i) {
-		slave_port_init(i, mbuf_pool);
-		slaves[i] = i;
+		child_port_init(i, mbuf_pool);
+		children[i] = i;
 	}
 
 	bond_port_init(mbuf_pool);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b20..c717a463c905 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2035,8 +2035,10 @@ struct rte_eth_dev_owner {
 #define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE  RTE_BIT32(0)
 /** Device supports link state interrupt */
 #define RTE_ETH_DEV_INTR_LSC              RTE_BIT32(1)
-/** Device is a bonded slave */
-#define RTE_ETH_DEV_BONDED_SLAVE          RTE_BIT32(2)
+/** Device is a bonded child */
+#define RTE_ETH_DEV_BONDED_CHILD          RTE_BIT32(2)
+#define RTE_ETH_DEV_BONDED_SLAVE \
+	RTE_DEPRECATED(RTE_ETH_DEV_BONDED_SLAVE) RTE_ETH_DEV_BONDED_CHILD
 /** Device supports device removal interrupt */
 #define RTE_ETH_DEV_INTR_RMV              RTE_BIT32(3)
 /** Device is port representor */
-- 
2.39.2


^ permalink raw reply	[relevance 1%]

* RE: [PATCH] eventdev: fix alignment padding
  @ 2023-05-17 13:35  3%         ` Morten Brørup
  2023-05-23 15:15  3%           ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-05-17 13:35 UTC (permalink / raw)
  To: Jerin Jacob, Mattias Rönnblom; +Cc: Sivaprasad Tummala, jerinj, dev

> From: Jerin Jacob [mailto:jerinjacobk@gmail.com]
> Sent: Wednesday, 17 May 2023 15.20
> 
> On Tue, Apr 18, 2023 at 8:46 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
> >
> > On 2023-04-18 16:07, Morten Brørup wrote:
> > >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> > >> Sent: Tuesday, 18 April 2023 14.31
> > >>
> > >> On 2023-04-18 12:45, Sivaprasad Tummala wrote:
> > >>> fixed the padding required to align to cacheline size.
> > >>>
> > >>
> > >> What's the point in having this structure cache-line aligned? False
> > >> sharing is a non-issue, since this is more or less a read only struct.
> > >>
> > >> This is not so much a comment on your patch, but the __rte_cache_aligned
> > >> attribute.
> > >
> > > When the structure is cache aligned, an individual entry in the array does
> not unnecessarily cross a cache line border. With 16 pointers and aligned, it
> uses exactly two cache lines. If unaligned, it may span three cache lines.
> > >
> > An *element* in the reserved uint64_t array won't span across two cache
> > lines, regardless if __rte_cache_aligned is specified or not. You would
> > need a packed struct for that to occur, plus the reserved array field
> > being preceded by some appropriately-sized fields.
> >
> > The only effect __rte_cache_aligned has on this particular struct is
> > that if you instantiate the struct on the stack, or as a static
> > variable, it will be cache-line aligned. That effect you can get by
> > specifying the attribute when you define the variable, and you will save
> > some space (by having smaller elements). In this case it doesn't matter
> > if the array is compact or not, since an application is likely to only
> > use one of the members in the array.
> >
> > It also doesn't matter of the struct is two or three cache lines, as
> > long as only the first two are used.
> 
> 
> Discussions stalled at this point.

Not stalled at this point. You seem to have missed my follow-up email clarifying why cache aligning is relevant:
http://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35D87897@smartserver.smartshare.dk/

But the patch still breaks the ABI, and thus should be postponed to 23.11.

> 
> Hi Shiva,
> 
> Marking this patch as rejected. If you think the other way, Please
> change patchwork status and let's discuss more here.

I am not taking any action regarding the status of this patch. I will leave that decision to Jerin and Shiva.

> 
> 
> 
> >
> > >>
> > >>> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
> > >>> Cc: mattias.ronnblom@ericsson.com
> > >>>
> > >>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > >>> ---
> > >>>    lib/eventdev/rte_eventdev_core.h | 2 +-
> > >>>    1 file changed, 1 insertion(+), 1 deletion(-)
> > >>>
> > >>> diff --git a/lib/eventdev/rte_eventdev_core.h
> > >> b/lib/eventdev/rte_eventdev_core.h
> > >>> index c328bdbc82..c27a52ccc0 100644
> > >>> --- a/lib/eventdev/rte_eventdev_core.h
> > >>> +++ b/lib/eventdev/rte_eventdev_core.h
> > >>> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
> > >>>     /**< PMD Tx adapter enqueue same destination function. */
> > >>>     event_crypto_adapter_enqueue_t ca_enqueue;
> > >>>     /**< PMD Crypto adapter enqueue function. */
> > >>> -   uintptr_t reserved[6];
> > >>> +   uintptr_t reserved[5];
> > >>>    } __rte_cache_aligned;
> > >>>
> > >>>    extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> > >
> >

^ permalink raw reply	[relevance 3%]

* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
  2023-05-17  7:16  3%             ` Mattias Rönnblom
@ 2023-05-17 12:28  0%               ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-05-17 12:28 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Mattias Rönnblom, jerinj, dev, Morten Brørup

On Wed, May 17, 2023 at 12:46 PM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2023-05-16 15:08, Jerin Jacob wrote:
> > On Tue, May 16, 2023 at 2:22 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
> >>
> >> On 2023-05-15 14:38, Jerin Jacob wrote:
> >>> On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
> >>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>
> >>>> On 2023-05-12 13:59, Jerin Jacob wrote:
> >>>>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
> >>>>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>>>
> >>>>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
> >>>>>> dequeue only when the burst size is compile-time constant (and equal
> >>>>>> to one).
> >>>>>>
> >>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>>>>
> >>>>>> ---
> >>>>>>
> >>>>>> v3: Actually include the change v2 claimed to contain.
> >>>>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
> >>>>>>        application is compiled with -pedantic. (Morten Brørup)
> >>>>>> ---
> >>>>>>     lib/eventdev/rte_eventdev.h | 4 ++--
> >>>>>>     1 file changed, 2 insertions(+), 2 deletions(-)
> >>>>>>
> >>>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> >>>>>> index a90e23ac8b..a471caeb6d 100644
> >>>>>> --- a/lib/eventdev/rte_eventdev.h
> >>>>>> +++ b/lib/eventdev/rte_eventdev.h
> >>>>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> >>>>>>             * Allow zero cost non burst mode routine invocation if application
> >>>>>>             * requests nb_events as const one
> >>>>>>             */
> >>>>>> -       if (nb_events == 1)
> >>>>>> +       if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
> >>>>>
> >>>>> "Why" part is not clear from the commit message. Is this to avoid
> >>>>> nb_events read if it is built-in const.
> >>>>
> >>>> The __builtin_constant_p() is introduced to avoid having the compiler
> >>>> generate a conditional branch and two different code paths in case
> >>>> nb_elem is a run-time variable.
> >>>>
> >>>> In particular, this matters if nb_elems is run-time variable and varies
> >>>> between 1 and some larger value.
> >>>>
> >>>> I should have mention this in the commit message.
> >>>>
> >>>> A very slight performance improvement. It also makes the code better
> >>>> match the comment, imo. Zero cost for const one enqueues, but no impact
> >>>> non-compile-time-constant-length enqueues.
> >>>>
> >>>> Feel free to ignore.
> >>>
> >>>
> >>> I did some performance comparison of the patch.
> >>> A low-end ARM machines shows 0.7%  drop with single event case. No
> >>> regression see with high-end ARM cores with single event case.
> >>>
> >>> IMO, optimizing the check for burst mode(the new patch) may not show
> >>> any real improvement as the cost is divided by number of event.
> >>> Whereas optimizing the check for single event case(The current code)
> >>> shows better performance with single event case and no regression
> >>> with burst mode as cost is divided by number of events.
> >>
> >> I ran some tests on an AMD Zen 3 with DSW.
> >> In the below tests the enqueue burst size is not compile-time constant.
> >>
> >> Enqueue burst size      Performance improvement
> >> Run-time constant 1     ~5%
> >> Run-time constant 2     ~0%
> >> Run-time variable 1-2   ~9%
> >> Run-time variable 1-16  ~0%
> >>
> >> The run-time variable enqueue sizes randomly (uniformly) distributed in
> >> the specified range.
> >>
> >> The first result may come as a surprise. The benchmark is using
> >> RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type
> >> in most apps). The single-event enqueue function only exists in a
> >> generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent).
> >> I suspect that is the reason for the performance improvement.
> >>
> >> This effect is large-enough to make it somewhat beneficial (+~1%) to use
> >> run-time variable single-event enqueue compared to keeping the burst
> >> size compile-time constant.
> >
> > # Interesting, Could you share your testeventdev command to test it.
>
> I'm using a proprietary benchmark to evaluate the effect of these
> changes. There's certainly nothing secret about that program, and also
> nothing very DSW-specific either. I hope to at some point both extend
> DPDK eventdev tests to include DSW, and also to contribute
> benchmarks/characteristics tests (perf unit tests or as a separate
> program), if there seems to be a value in this.

Yes. Please extend testeventdev for your use case so that all drivers can
run it and it can help to optimize _real world_ cases. Testeventdev already
has a plugin kind of interface, so it should be pretty easy to add new MODES.


>
> > # By having quick glance on DSW code, following change can be added(or
> >   similar cases).
> > Not sure such change in DSW driver is making a difference or nor?
> >
> >
> > diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
> > index e84b65d99f..455470997b 100644
> > --- a/drivers/event/dsw/dsw_event.c
> > +++ b/drivers/event/dsw/dsw_event.c
> > @@ -1251,7 +1251,7 @@ dsw_port_flush_out_buffers(struct dsw_evdev
> > *dsw, struct dsw_port *source_port)
> >   uint16_t
> >   dsw_event_enqueue(void *port, const struct rte_event *ev)
> >   {
> > -       return dsw_event_enqueue_burst(port, ev, unlikely(ev == NULL) ? 0 : 1);
> > +       return dsw_event_enqueue_burst(port, ev, 1);
>
> Good point.
>
> Historical note: I think that comparison is old cruft borne out of a
> misconception, that the single-event enqueue could be called directly
> from application code, combined with the fact that producer-only ports
> needed some way to "maintain" a port, prior to the introduction of
> rte_event_maintain().
>
> >   }
> >
> >   static __rte_always_inline uint16_t
> > @@ -1340,7 +1340,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port
> > *source_port,
> >          return (num_non_release + num_release);
> >   }
> >
> > -uint16_t
> > +inline uint16_t
>
>  From what it seems, this does not have the desired effect, at least not
> on GCC 11.3 (w/ the default DPDK compiler configuration).
>
> I reached this conclusion when I noticed that if I reshuffle the code so
> to force (not hint) the inlining of the burst (and generic burst)
> enqueue function into dsw_event_enqueue(), your change performs better.
>
> >   dsw_event_enqueue_burst(void *port, const struct rte_event events[],
> >                          uint16_t events_len)
> >   {
> >
> > # I am testing with command like this "/build/app/dpdk-test-eventdev
> > -l 0-23 -a 0002:0e:00.0 -- --test=perf_atq --plcores 1 --wlcores 8
> > --stlist p --nb_pkts=10000000000"
> >
>
>
> I re-ran the compile-time variable, run-time constant enqueue size of 1,
> and I got the following:
>
> Jerin's change: +4%
> Jerin's change + ensure inlining: +6%
> RFC v3: +7%
>
> (Here I use a more different setup that produces more deterministic
> results, hence the different numbers compared to the previous runs. They
> were using a pipeline spread over two chiplets, and these runs are using
> only a single chiplet.)
>
> It seems like with your suggested changes you eliminate most of the
> single-enqueue-special case performance degradation (for DSW), but not
> all of it. The remaining degradation is very small (for the above case,


On cores like AMD Zen 3 I was not expecting a 1% diff from such a check,
especially with proper branch predictors. Even pretty low-end Arm cores
showed only around a 0.7% diff, and newer Arm cores show no difference.

> larger for small by run-time variable enqueue sizes), but it's a little
> sad that a supposedly performance-enhancing special case (that drives
> complexity in the code, although not much) actually degrades performance.

OK. Let's get rid of the fp_ops->dequeue callback. The initial RFC of
eventdev had a public non-burst API, which was the reason for that callback.

>
> >>
> >> The performance gain is counted toward both enqueue and dequeue costs
> >> (+benchmark app overhead), so an under-estimation if see this as an
> >> enqueue performance improvement.
> >>
> >>> If you agree, then we can skip this patch.
> >>>
> >>
> >> I have no strong opinion if this should be included or not.
> >>
> >> It was up to me, I would drop the single-enqueue special case handling
> >> altogether in the next ABI update.
> >
> > That's a reasonable path. If we are willing to push a patch, we can
> > test it and give feedback.
> > Or in our spare time, We can do that as well.
> >
>
> Sure, I'll give it a try.
>
> The next release is an ABI-breaking one?

Yes (23.11). Please plan to send the deprecation notice before the 23.07 release.
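
Something along these lines in doc/guides/rel_notes/deprecation.rst should
be enough (the wording below is only a sketch, not agreed text):

* eventdev: The single-event fast-path shortcut in
  ``rte_event_enqueue_burst()``/``rte_event_dequeue_burst()`` and the
  corresponding single-event driver callbacks will be removed in DPDK 23.11;
  the burst callbacks will be used for all burst sizes, including 1.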

I will mark this patch as rejected in patchwork.

Thanks for your time.

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
  2023-05-16 11:36  0%         ` Eelco Chaudron
  2023-05-16 11:45  0%           ` Maxime Coquelin
@ 2023-05-17  9:18  0%           ` Eelco Chaudron
  1 sibling, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-05-17  9:18 UTC (permalink / raw)
  To: David Marchand; +Cc: maxime.coquelin, chenbo.xia, dev



On 16 May 2023, at 13:36, Eelco Chaudron wrote:

> On 16 May 2023, at 12:12, David Marchand wrote:
>
>> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>>> On 10 May 2023, at 13:44, David Marchand wrote:
>>
>> [snip]
>>
>>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>>>                 vsocket->path = NULL;
>>>>>         }
>>>>>
>>>>> +       if (vsocket && vsocket->alloc_notify_ops) {
>>>>> +#pragma GCC diagnostic push
>>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>>> +               free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>>> +#pragma GCC diagnostic pop
>>>>> +               vsocket->notify_ops = NULL;
>>>>> +       }
>>>>
>>>> Rather than select the behavior based on a boolean (and here force the
>>>> compiler to close its eyes), I would instead add a non const pointer
>>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>>
>>> Good idea, I will make the change in v3.
>>
>> Feel free to use a better name for this field :-).
>>
>>>
>>>>> +
>>>>>         if (vsocket) {
>>>>>                 free(vsocket);
>>>>>                 vsocket = NULL;
>>
>> [snip]
>>
>>>>> +       /*
>>>>> +        * Although the ops structure is a const structure, we do need to
>>>>> +        * override the guest_notify operation. This is because with the
>>>>> +        * previous APIs it was "reserved" and if any garbage value was passed,
>>>>> +        * it could crash the application.
>>>>> +        */
>>>>> +       if (ops && !ops->guest_notify) {
>>>>
>>>> Hum, as described in the comment above, I don't think we should look
>>>> at ops->guest_notify value at all.
>>>> Checking ops != NULL should be enough.
>>>
>>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>>
>>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>>
>> Hum, I don't understand my comment either o_O'.
>> Too many days off... or maybe my evil twin took over the keyboard.
>>
>>
>>>
>>>>> +               struct rte_vhost_device_ops *new_ops;
>>>>> +
>>>>> +               new_ops = malloc(sizeof(*new_ops));
>>>>
>>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>>> I am unclear of the impact though.
>>>
>>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>>
>>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>>
>> Determinining current numa is doable, via 'ops'
>> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
>> numa_realloc().
>> The problem is how to allocate on this numa with the libc allocator
>> for which I have no idea...
>> We could go with the dpdk allocator (again, like numa_realloc()).
>>
>>
>> In practice, the passed ops will be probably from a const variable in
>> the program .data section (for which I think fields are set to 0
>> unless explicitly initialised), or a memset() will be called for a
>> dynamic allocation from good citizens.
>> So we can probably live with the current proposal.
>> Plus, this is only for one release, since in 23.11 with the ABI bump,
>> we will drop this compat code.
>>
>> Maxime, Chenbo, what do you think?
>
> Wait for their response, but for now I assume we can just keep the numa unaware malloc().
>
>>
>> [snip]
>>
>>>>
>>>> But putting indentation aside, is this change equivalent?
>>>> -               if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>>> -                                       (vq->callfd >= 0)) ||
>>>> -                               unlikely(!signalled_used_valid)) {
>>>> +               if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>>> +                               unlikely(!signalled_used_valid)) &&
>>>> +                               vq->callfd >= 0) {
>>>
>>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>>
>> I think this should be a separate fix.
>
> ACK, will add a separate patch in this series to fix it.

FYI I sent out the v3 patch.

//Eelco


^ permalink raw reply	[relevance 0%]

* [PATCH v3 0/4] vhost: add device op to offload the interrupt kick
@ 2023-05-17  9:08  4% Eelco Chaudron
  0 siblings, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-05-17  9:08 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia, david.marchand; +Cc: dev

This series adds an operation callback which gets called every time the
library wants to call eventfd_write(). This eventfd_write() call could
result in a system call, which could potentially block the PMD thread.

The callback function can decide whether it's ok to handle the
eventfd_write() now or have the newly introduced function,
rte_vhost_notify_guest(), called at a later time.

This can be used by 3rd party applications, like OVS, to avoid system
calls being made from the PMD threads.
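
As a rough illustration, an application could hook this up as below. This is
a sketch only: the exact guest_notify()/rte_vhost_notify_guest() signatures
and return-value semantics are my assumptions based on the description
above, and app_queue_kick() is a hypothetical application helper.

static int
app_guest_notify(int vid, uint16_t queue_id)
{
	/* Do not risk a blocking eventfd_write() on the PMD thread;
	 * hand the kick over to a dedicated thread instead. */
	app_queue_kick(vid, queue_id);
	return 1;	/* tell vhost the application will do the kick */
}

/* The dedicated (non-PMD) thread later performs the actual kick: */
static void
app_do_kick(int vid, uint16_t queue_id)
{
	rte_vhost_notify_guest(vid, queue_id);
}

static const struct rte_vhost_device_ops app_vhost_ops = {
	/* .new_device, .destroy_device, ... as before */
	.guest_notify = app_guest_notify,
};

The ops structure is registered as usual with
rte_vhost_driver_callback_register().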

v3:
    - Changed ABI compatibility code to no longer use a boolean
      to avoid having to disable specific GCC warnings.
    - Moved the fd check fix to a separate patch (patch 3/4).
    - Fixed some coding style issues.

v2: - Used vhost_virtqueue->index to find index for operation.
    - Aligned function name to VDUSE RFC patchset.
    - Added error and offload statistics counter.
    - Mark new API as experimental.
    - Change the virtual queue spin lock to read/write spin lock.
    - Made shared counters atomic.
    - Add versioned rte_vhost_driver_callback_register() for
      ABI compliance.

Eelco Chaudron (4):
      vhost: change vhost_virtqueue access lock to a read/write one
      vhost: make the guest_notifications statistic counter atomic
      vhost: fix invalid call FD handling
      vhost: add device op to offload the interrupt kick


 lib/eal/include/generic/rte_rwlock.h | 17 +++++
 lib/vhost/meson.build                |  2 +
 lib/vhost/rte_vhost.h                | 23 ++++++-
 lib/vhost/socket.c                   | 63 +++++++++++++++++--
 lib/vhost/version.map                |  9 +++
 lib/vhost/vhost.c                    | 92 +++++++++++++++++++++-------
 lib/vhost/vhost.h                    | 69 ++++++++++++++-------
 lib/vhost/vhost_user.c               | 14 ++---
 lib/vhost/virtio_net.c               | 90 +++++++++++++--------------
 9 files changed, 278 insertions(+), 101 deletions(-)


^ permalink raw reply	[relevance 4%]

* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
  2023-05-17  7:45  0%       ` lihuisong (C)
@ 2023-05-17  8:53  0%         ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-05-17  8:53 UTC (permalink / raw)
  To: lihuisong (C)
  Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen,
	dev, techboard

On 5/17/2023 8:45 AM, lihuisong (C) wrote:
> 
> On 2023/5/16 22:13, Ferruh Yigit wrote:
>> On 5/16/2023 12:47 PM, lihuisong (C) wrote:
>>> Hi Ferruh,
>>>
>>> There is no result on techboard.
>>> How to deal with this problem next?
>> +techboard for comment.
>>
>>
>> Btw, what was your positioning to Bruce's suggestion,
>> when a MAC address is in the list, fail to set it as default and enforce
>> user do the corrective action (delete MAC explicitly etc...).
> If a MAC address is in the list, rte_eth_dev_default_mac_addr_set
> returns failure?

Yes.
In that case the API can return EEXIST or similar. The user then needs to
call 'rte_eth_dev_mac_addr_remove()' first and call
'rte_eth_dev_default_mac_addr_set()' again, if this is the intention.
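
Roughly like this on the application side (just a sketch; the -EEXIST
return value is the proposal above, not what ethdev does today):

#include <rte_ethdev.h>

static int
app_set_default_mac(uint16_t port_id, struct rte_ether_addr *mac)
{
	int ret = rte_eth_dev_default_mac_addr_set(port_id, mac);

	if (ret == -EEXIST) {
		/* Address already present as a non-default entry:
		 * remove it explicitly and retry. */
		ret = rte_eth_dev_mac_addr_remove(port_id, mac);
		if (ret == 0)
			ret = rte_eth_dev_default_mac_addr_set(port_id, mac);
	}

	return ret;
}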

>> If you are OK with it, that is good for me too, unless techboard objects
>> we can proceed with that one.
>>
>>
>>> /Huisong
>>>
>>> On 2023/2/2 20:36, Huisong Li wrote:
>>>> The dev->data->mac_addrs[0] will be changed to a new MAC address when
>>>> applications modify the default MAC address by .mac_addr_set().
>>>> However,
>>>> if the new default one has been added as a non-default MAC address by
>>>> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the
>>>> mac_addrs
>>>> list. As a result, one MAC address occupies two entries in the list.
>>>> Like:
>>>> add(MAC1)
>>>> add(MAC2)
>>>> add(MAC3)
>>>> add(MAC4)
>>>> set_default(MAC3)
>>>> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
>>>> Note: MAC3 occupies two entries.
>>>>
>>>> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove
>>>> the
>>>> old default MAC when set default MAC. If user continues to do
>>>> set_default(MAC5), and the mac_addrs list is default=MAC5,
>>>> filters=(MAC1,
>>>> MAC2, MAC3, MAC4). At this moment, user can still see MAC3 from the
>>>> list,
>>>> but packets with MAC3 aren't actually received by the PMD.
>>>>
>>>> So need to ensure that the new default address is removed from the
>>>> rest of
>>>> the list if the address was already in the list.
>>>>
>>>> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>>>> ---
>>>> v8: fix some comments.
>>>> v7: add announcement in the release notes and document this behavior.
>>>> v6: fix commit log and some code comments.
>>>> v5:
>>>>    - merge the second patch into the first patch.
>>>>    - add error log when rollback failed.
>>>> v4:
>>>>     - fix broken in the patchwork
>>>> v3:
>>>>     - first explicitly remove the non-default MAC, then set default
>>>> one.
>>>>     - document default and non-default MAC address
>>>> v2:
>>>>     - fixed commit log.
>>>> ---
>>>>    doc/guides/rel_notes/release_23_03.rst |  6 +++++
>>>>    lib/ethdev/ethdev_driver.h             |  6 ++++-
>>>>    lib/ethdev/rte_ethdev.c                | 35
>>>> ++++++++++++++++++++++++--
>>>>    lib/ethdev/rte_ethdev.h                |  3 +++
>>>>    4 files changed, 47 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>>>> b/doc/guides/rel_notes/release_23_03.rst
>>>> index 84b112a8b1..1c9b9912c2 100644
>>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>>> @@ -105,6 +105,12 @@ API Changes
>>>>       Also, make sure to start the actual text at the margin.
>>>>       =======================================================
>>>>    +* ethdev: ensured all entries in MAC address list are uniques.
>>>> +  When setting a default MAC address with the function
>>>> +  ``rte_eth_dev_default_mac_addr_set``,
>>>> +  the address is now removed from the rest of the address list
>>>> +  in order to ensure it is only at index 0 of the list.
>>>> +
>>>>      ABI Changes
>>>>    -----------
>>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>>>> index dde3ec84ef..3994c61b86 100644
>>>> --- a/lib/ethdev/ethdev_driver.h
>>>> +++ b/lib/ethdev/ethdev_driver.h
>>>> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>>>>          uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation
>>>> failures */
>>>>    -    /** Device Ethernet link address. @see
>>>> rte_eth_dev_release_port() */
>>>> +    /**
>>>> +     * Device Ethernet link addresses.
>>>> +     * All entries are unique.
>>>> +     * The first entry (index zero) is the default address.
>>>> +     */
>>>>        struct rte_ether_addr *mac_addrs;
>>>>        /** Bitmap associating MAC addresses to pools */
>>>>        uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>>> index 86ca303ab5..de25183619 100644
>>>> --- a/lib/ethdev/rte_ethdev.c
>>>> +++ b/lib/ethdev/rte_ethdev.c
>>>> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>>> struct rte_ether_addr *addr)
>>>>    int
>>>>    rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct
>>>> rte_ether_addr *addr)
>>>>    {
>>>> +    uint64_t mac_pool_sel_bk = 0;
>>>>        struct rte_eth_dev *dev;
>>>> +    uint32_t pool;
>>>> +    int index;
>>>>        int ret;
>>>>          RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>>> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t
>>>> port_id, struct rte_ether_addr *addr)
>>>>        if (*dev->dev_ops->mac_addr_set == NULL)
>>>>            return -ENOTSUP;
>>>>    +    /* Keep address unique in dev->data->mac_addrs[]. */
>>>> +    index = eth_dev_get_mac_addr_index(port_id, addr);
>>>> +    if (index > 0) {
>>>> +        /* Remove address in dev data structure */
>>>> +        mac_pool_sel_bk = dev->data->mac_pool_sel[index];
>>>> +        ret = rte_eth_dev_mac_addr_remove(port_id, addr);
>>>> +        if (ret < 0) {
>>>> +            RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address
>>>> from the rest of list.\n",
>>>> +                       port_id);
>>>> +            return ret;
>>>> +        }
>>>> +    }
>>>>        ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
>>>>        if (ret < 0)
>>>> -        return ret;
>>>> +        goto out;
>>>>          /* Update default address in NIC data structure */
>>>>        rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>>>>          return 0;
>>>> -}
>>>>    +out:
>>>> +    if (index > 0) {
>>>> +        pool = 0;
>>>> +        do {
>>>> +            if (mac_pool_sel_bk & UINT64_C(1)) {
>>>> +                if (rte_eth_dev_mac_addr_add(port_id, addr,
>>>> +                                 pool) != 0)
>>>> +                    RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool
>>>> id(%u) in port %u.\n",
>>>> +                               pool, port_id);
>>>> +            }
>>>> +            mac_pool_sel_bk >>= 1;
>>>> +            pool++;
>>>> +        } while (mac_pool_sel_bk != 0);
>>>> +    }
>>>> +
>>>> +    return ret;
>>>> +}
>>>>      /*
>>>>     * Returns index into MAC address array of addr. Use
>>>> 00:00:00:00:00:00 to find
>>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>>> index d22de196db..2456153457 100644
>>>> --- a/lib/ethdev/rte_ethdev.h
>>>> +++ b/lib/ethdev/rte_ethdev.h
>>>> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>>>      /**
>>>>     * Set the default MAC address.
>>>> + * It replaces the address at index 0 of the MAC address list.
>>>> + * If the address was already in the MAC address list,
>>>> + * it is removed from the rest of the list.
>>>>     *
>>>>     * @param port_id
>>>>     *   The port identifier of the Ethernet device.
>> .


^ permalink raw reply	[relevance 0%]

* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
  2023-05-16 14:13  0%     ` Ferruh Yigit
@ 2023-05-17  7:45  0%       ` lihuisong (C)
  2023-05-17  8:53  0%         ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: lihuisong (C) @ 2023-05-17  7:45 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen,
	dev, techboard


On 2023/5/16 22:13, Ferruh Yigit wrote:
> On 5/16/2023 12:47 PM, lihuisong (C) wrote:
>> Hi Ferruh,
>>
>> There is no result on techboard.
>> How to deal with this problem next?
> +techboard for comment.
>
>
> Btw, what was your positioning to Bruce's suggestion,
> when a MAC address is in the list, fail to set it as default and enforce
> user do the corrective action (delete MAC explicitly etc...).
If a MAC address is already in the list, should
rte_eth_dev_default_mac_addr_set return failure?
> If you are OK with it, that is good for me too, unless techboard objects
> we can proceed with that one.
>
>
>> /Huisong
>>
>> On 2023/2/2 20:36, Huisong Li wrote:
>>> The dev->data->mac_addrs[0] will be changed to a new MAC address when
>>> applications modify the default MAC address by .mac_addr_set(). However,
>>> if the new default one has been added as a non-default MAC address by
>>> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the mac_addrs
>>> list. As a result, one MAC address occupies two entries in the list.
>>> Like:
>>> add(MAC1)
>>> add(MAC2)
>>> add(MAC3)
>>> add(MAC4)
>>> set_default(MAC3)
>>> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
>>> Note: MAC3 occupies two entries.
>>>
>>> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove the
>>> old default MAC when set default MAC. If user continues to do
>>> set_default(MAC5), and the mac_addrs list is default=MAC5, filters=(MAC1,
>>> MAC2, MAC3, MAC4). At this moment, user can still see MAC3 from the list,
>>> but packets with MAC3 aren't actually received by the PMD.
>>>
>>> So need to ensure that the new default address is removed from the
>>> rest of
>>> the list if the address was already in the list.
>>>
>>> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>>> ---
>>> v8: fix some comments.
>>> v7: add announcement in the release notes and document this behavior.
>>> v6: fix commit log and some code comments.
>>> v5:
>>>    - merge the second patch into the first patch.
>>>    - add error log when rollback failed.
>>> v4:
>>>     - fix broken in the patchwork
>>> v3:
>>>     - first explicitly remove the non-default MAC, then set default one.
>>>     - document default and non-default MAC address
>>> v2:
>>>     - fixed commit log.
>>> ---
>>>    doc/guides/rel_notes/release_23_03.rst |  6 +++++
>>>    lib/ethdev/ethdev_driver.h             |  6 ++++-
>>>    lib/ethdev/rte_ethdev.c                | 35 ++++++++++++++++++++++++--
>>>    lib/ethdev/rte_ethdev.h                |  3 +++
>>>    4 files changed, 47 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>>> b/doc/guides/rel_notes/release_23_03.rst
>>> index 84b112a8b1..1c9b9912c2 100644
>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>> @@ -105,6 +105,12 @@ API Changes
>>>       Also, make sure to start the actual text at the margin.
>>>       =======================================================
>>>    +* ethdev: ensured all entries in MAC address list are uniques.
>>> +  When setting a default MAC address with the function
>>> +  ``rte_eth_dev_default_mac_addr_set``,
>>> +  the address is now removed from the rest of the address list
>>> +  in order to ensure it is only at index 0 of the list.
>>> +
>>>      ABI Changes
>>>    -----------
>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>>> index dde3ec84ef..3994c61b86 100644
>>> --- a/lib/ethdev/ethdev_driver.h
>>> +++ b/lib/ethdev/ethdev_driver.h
>>> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>>>          uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation
>>> failures */
>>>    -    /** Device Ethernet link address. @see
>>> rte_eth_dev_release_port() */
>>> +    /**
>>> +     * Device Ethernet link addresses.
>>> +     * All entries are unique.
>>> +     * The first entry (index zero) is the default address.
>>> +     */
>>>        struct rte_ether_addr *mac_addrs;
>>>        /** Bitmap associating MAC addresses to pools */
>>>        uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index 86ca303ab5..de25183619 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>> struct rte_ether_addr *addr)
>>>    int
>>>    rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct
>>> rte_ether_addr *addr)
>>>    {
>>> +    uint64_t mac_pool_sel_bk = 0;
>>>        struct rte_eth_dev *dev;
>>> +    uint32_t pool;
>>> +    int index;
>>>        int ret;
>>>          RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>>> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t
>>> port_id, struct rte_ether_addr *addr)
>>>        if (*dev->dev_ops->mac_addr_set == NULL)
>>>            return -ENOTSUP;
>>>    +    /* Keep address unique in dev->data->mac_addrs[]. */
>>> +    index = eth_dev_get_mac_addr_index(port_id, addr);
>>> +    if (index > 0) {
>>> +        /* Remove address in dev data structure */
>>> +        mac_pool_sel_bk = dev->data->mac_pool_sel[index];
>>> +        ret = rte_eth_dev_mac_addr_remove(port_id, addr);
>>> +        if (ret < 0) {
>>> +            RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address
>>> from the rest of list.\n",
>>> +                       port_id);
>>> +            return ret;
>>> +        }
>>> +    }
>>>        ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
>>>        if (ret < 0)
>>> -        return ret;
>>> +        goto out;
>>>          /* Update default address in NIC data structure */
>>>        rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>>>          return 0;
>>> -}
>>>    +out:
>>> +    if (index > 0) {
>>> +        pool = 0;
>>> +        do {
>>> +            if (mac_pool_sel_bk & UINT64_C(1)) {
>>> +                if (rte_eth_dev_mac_addr_add(port_id, addr,
>>> +                                 pool) != 0)
>>> +                    RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool
>>> id(%u) in port %u.\n",
>>> +                               pool, port_id);
>>> +            }
>>> +            mac_pool_sel_bk >>= 1;
>>> +            pool++;
>>> +        } while (mac_pool_sel_bk != 0);
>>> +    }
>>> +
>>> +    return ret;
>>> +}
>>>      /*
>>>     * Returns index into MAC address array of addr. Use
>>> 00:00:00:00:00:00 to find
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index d22de196db..2456153457 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>>      /**
>>>     * Set the default MAC address.
>>> + * It replaces the address at index 0 of the MAC address list.
>>> + * If the address was already in the MAC address list,
>>> + * it is removed from the rest of the list.
>>>     *
>>>     * @param port_id
>>>     *   The port identifier of the Ethernet device.
> .

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
  2023-05-16 13:08  0%           ` Jerin Jacob
@ 2023-05-17  7:16  3%             ` Mattias Rönnblom
  2023-05-17 12:28  0%               ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-05-17  7:16 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Mattias Rönnblom, jerinj, dev, Morten Brørup

On 2023-05-16 15:08, Jerin Jacob wrote:
> On Tue, May 16, 2023 at 2:22 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>>
>> On 2023-05-15 14:38, Jerin Jacob wrote:
>>> On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>
>>>> On 2023-05-12 13:59, Jerin Jacob wrote:
>>>>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
>>>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>>>
>>>>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
>>>>>> dequeue only when the burst size is compile-time constant (and equal
>>>>>> to one).
>>>>>>
>>>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> v3: Actually include the change v2 claimed to contain.
>>>>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
>>>>>>        application is compiled with -pedantic. (Morten Brørup)
>>>>>> ---
>>>>>>     lib/eventdev/rte_eventdev.h | 4 ++--
>>>>>>     1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>>>> index a90e23ac8b..a471caeb6d 100644
>>>>>> --- a/lib/eventdev/rte_eventdev.h
>>>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>>>>>>             * Allow zero cost non burst mode routine invocation if application
>>>>>>             * requests nb_events as const one
>>>>>>             */
>>>>>> -       if (nb_events == 1)
>>>>>> +       if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>>>
>>>>> "Why" part is not clear from the commit message. Is this to avoid
>>>>> nb_events read if it is built-in const.
>>>>
>>>> The __builtin_constant_p() is introduced to avoid having the compiler
>>>> generate a conditional branch and two different code paths in case
>>>> nb_elem is a run-time variable.
>>>>
>>>> In particular, this matters if nb_elems is run-time variable and varies
>>>> between 1 and some larger value.
>>>>
>>>> I should have mention this in the commit message.
>>>>
>>>> A very slight performance improvement. It also makes the code better
>>>> match the comment, imo. Zero cost for const one enqueues, but no impact
>>>> non-compile-time-constant-length enqueues.
>>>>
>>>> Feel free to ignore.
>>>
>>>
>>> I did some performance comparison of the patch.
>>> A low-end ARM machines shows 0.7%  drop with single event case. No
>>> regression see with high-end ARM cores with single event case.
>>>
>>> IMO, optimizing the check for burst mode(the new patch) may not show
>>> any real improvement as the cost is divided by number of event.
>>> Whereas optimizing the check for single event case(The current code)
>>> shows better performance with single event case and no regression
>>> with burst mode as cost is divided by number of events.
>>
>> I ran some tests on an AMD Zen 3 with DSW.
>> In the below tests the enqueue burst size is not compile-time constant.
>>
>> Enqueue burst size      Performance improvement
>> Run-time constant 1     ~5%
>> Run-time constant 2     ~0%
>> Run-time variable 1-2   ~9%
>> Run-time variable 1-16  ~0%
>>
>> The run-time variable enqueue sizes randomly (uniformly) distributed in
>> the specified range.
>>
>> The first result may come as a surprise. The benchmark is using
>> RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type
>> in most apps). The single-event enqueue function only exists in a
>> generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent).
>> I suspect that is the reason for the performance improvement.
>>
>> This effect is large-enough to make it somewhat beneficial (+~1%) to use
>> run-time variable single-event enqueue compared to keeping the burst
>> size compile-time constant.
> 
> # Interesting, Could you share your testeventdev command to test it.

I'm using a proprietary benchmark to evaluate the effect of these 
changes. There's certainly nothing secret about that program, and also 
nothing very DSW-specific either. I hope to at some point both extend 
DPDK eventdev tests to include DSW, and also to contribute 
benchmarks/characteristics tests (perf unit tests or as a separate 
program), if there seems to be a value in this.

> # By having quick glance on DSW code, following change can be added(or
>   similar cases).
> Not sure such change in DSW driver is making a difference or nor?
> 
> 
> diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
> index e84b65d99f..455470997b 100644
> --- a/drivers/event/dsw/dsw_event.c
> +++ b/drivers/event/dsw/dsw_event.c
> @@ -1251,7 +1251,7 @@ dsw_port_flush_out_buffers(struct dsw_evdev
> *dsw, struct dsw_port *source_port)
>   uint16_t
>   dsw_event_enqueue(void *port, const struct rte_event *ev)
>   {
> -       return dsw_event_enqueue_burst(port, ev, unlikely(ev == NULL) ? 0 : 1);
> +       return dsw_event_enqueue_burst(port, ev, 1);

Good point.

Historical note: I think that comparison is old cruft borne out of a 
misconception, that the single-event enqueue could be called directly 
from application code, combined with the fact that producer-only ports 
needed some way to "maintain" a port, prior to the introduction of 
rte_event_maintain().
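
(For reference, a producer-only port nowadays does roughly the following
instead -- sketch only, assuming the device does not advertise
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE:

	/* in the producer loop, when nothing was enqueued this iteration */
	rte_event_maintain(dev_id, port_id, 0);

	/* or, to also push out any buffered events */
	rte_event_maintain(dev_id, port_id, RTE_EVENT_DEV_MAINT_OP_FLUSH);
)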

>   }
> 
>   static __rte_always_inline uint16_t
> @@ -1340,7 +1340,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port
> *source_port,
>          return (num_non_release + num_release);
>   }
> 
> -uint16_t
> +inline uint16_t

 From what it seems, this does not have the desired effect, at least not 
on GCC 11.3 (w/ the default DPDK compiler configuration).

I reached this conclusion when I noticed that if I reshuffle the code so 
to force (not hint) the inlining of the burst (and generic burst) 
enqueue function into dsw_event_enqueue(), your change performs better.
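
(To be clear about what I mean by force versus hint -- a minimal example,
not DSW code:

	/* only a hint; GCC may still emit a call */
	static inline int add_hint(int a, int b) { return a + b; }

	/* forced, even when the inliner's cost model says no */
	static __rte_always_inline int add_force(int a, int b) { return a + b; }

In the DSW case that means moving the burst enqueue body into a
static __rte_always_inline helper in the same translation unit, so the
single-event wrapper reliably picks it up.)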

>   dsw_event_enqueue_burst(void *port, const struct rte_event events[],
>                          uint16_t events_len)
>   {
> 
> # I am testing with command like this "/build/app/dpdk-test-eventdev
> -l 0-23 -a 0002:0e:00.0 -- --test=perf_atq --plcores 1 --wlcores 8
> --stlist p --nb_pkts=10000000000"
> 


I re-ran the compile-time variable, run-time constant enqueue size of 1, 
and I got the following:

Jerin's change: +4%
Jerin's change + ensure inlining: +6%
RFC v3: +7%

(Here I use a somewhat different setup that produces more deterministic
results, hence the different numbers compared to the previous runs. The
previous runs used a pipeline spread over two chiplets; these runs use
only a single chiplet.)

It seems like with your suggested changes you eliminate most of the 
single-enqueue-special case performance degradation (for DSW), but not 
all of it. The remaining degradation is very small (for the above case;
larger for small, run-time-variable enqueue sizes), but it's a little
sad that a supposedly performance-enhancing special case (that drives 
complexity in the code, although not much) actually degrades performance.

>>
>> The performance gain is counted toward both enqueue and dequeue costs
>> (+benchmark app overhead), so an under-estimation if see this as an
>> enqueue performance improvement.
>>
>>> If you agree, then we can skip this patch.
>>>
>>
>> I have no strong opinion if this should be included or not.
>>
>> It was up to me, I would drop the single-enqueue special case handling
>> altogether in the next ABI update.
> 
> That's a reasonable path. If we are willing to push a patch, we can
> test it and give feedback.
> Or in our spare time, We can do that as well.
> 

Sure, I'll give it a try.

The next release is an ABI-breaking one?

>>
>>>
>>>>
>>>>> If so, check should be following. Right?
>>>>>
>>>>> if (__extension__((__builtin_constant_p(nb_events)) && nb_events == 1)
>>>>> || nb_events  == 1)
>>>>>
>>>>> At least, It was my original intention in the code.
>>>>>
>>>>>
>>>>>
>>>>>>                    return (fp_ops->enqueue)(port, ev);
>>>>>>            else
>>>>>>                    return fn(port, ev, nb_events);
>>>>>> @@ -2200,7 +2200,7 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>>>>>>             * Allow zero cost non burst mode routine invocation if application
>>>>>>             * requests nb_events as const one
>>>>>>             */
>>>>>> -       if (nb_events == 1)
>>>>>> +       if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>>>>                    return (fp_ops->dequeue)(port, ev, timeout_ticks);
>>>>>>            else
>>>>>>                    return (fp_ops->dequeue_burst)(port, ev, nb_events,
>>>>>> --
>>>>>> 2.34.1
>>>>>>
>>>>

^ permalink raw reply	[relevance 3%]

* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
  2023-05-16 11:47  0%   ` lihuisong (C)
@ 2023-05-16 14:13  0%     ` Ferruh Yigit
  2023-05-17  7:45  0%       ` lihuisong (C)
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-05-16 14:13 UTC (permalink / raw)
  To: lihuisong (C)
  Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen,
	dev, techboard

On 5/16/2023 12:47 PM, lihuisong (C) wrote:
> Hi Ferruh,
> 
> There is no result on techboard.
> How to deal with this problem next?

+techboard for comment.


Btw, what was your position on Bruce's suggestion: when a MAC address is
already in the list, fail to set it as default and force the user to do the
corrective action (delete the MAC explicitly, etc.)?
If you are OK with it, that is good for me too; unless the techboard
objects, we can proceed with that one.


> 
> /Huisong
> 
> On 2023/2/2 20:36, Huisong Li wrote:
>> The dev->data->mac_addrs[0] will be changed to a new MAC address when
>> applications modify the default MAC address by .mac_addr_set(). However,
>> if the new default one has been added as a non-default MAC address by
>> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the mac_addrs
>> list. As a result, one MAC address occupies two entries in the list.
>> Like:
>> add(MAC1)
>> add(MAC2)
>> add(MAC3)
>> add(MAC4)
>> set_default(MAC3)
>> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
>> Note: MAC3 occupies two entries.
>>
>> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove the
>> old default MAC when set default MAC. If user continues to do
>> set_default(MAC5), and the mac_addrs list is default=MAC5, filters=(MAC1,
>> MAC2, MAC3, MAC4). At this moment, user can still see MAC3 from the list,
>> but packets with MAC3 aren't actually received by the PMD.
>>
>> So need to ensure that the new default address is removed from the
>> rest of
>> the list if the address was already in the list.
>>
>> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>> ---
>> v8: fix some comments.
>> v7: add announcement in the release notes and document this behavior.
>> v6: fix commit log and some code comments.
>> v5:
>>   - merge the second patch into the first patch.
>>   - add error log when rollback failed.
>> v4:
>>    - fix broken in the patchwork
>> v3:
>>    - first explicitly remove the non-default MAC, then set default one.
>>    - document default and non-default MAC address
>> v2:
>>    - fixed commit log.
>> ---
>>   doc/guides/rel_notes/release_23_03.rst |  6 +++++
>>   lib/ethdev/ethdev_driver.h             |  6 ++++-
>>   lib/ethdev/rte_ethdev.c                | 35 ++++++++++++++++++++++++--
>>   lib/ethdev/rte_ethdev.h                |  3 +++
>>   4 files changed, 47 insertions(+), 3 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>> b/doc/guides/rel_notes/release_23_03.rst
>> index 84b112a8b1..1c9b9912c2 100644
>> --- a/doc/guides/rel_notes/release_23_03.rst
>> +++ b/doc/guides/rel_notes/release_23_03.rst
>> @@ -105,6 +105,12 @@ API Changes
>>      Also, make sure to start the actual text at the margin.
>>      =======================================================
>>   +* ethdev: ensured all entries in MAC address list are uniques.
>> +  When setting a default MAC address with the function
>> +  ``rte_eth_dev_default_mac_addr_set``,
>> +  the address is now removed from the rest of the address list
>> +  in order to ensure it is only at index 0 of the list.
>> +
>>     ABI Changes
>>   -----------
>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>> index dde3ec84ef..3994c61b86 100644
>> --- a/lib/ethdev/ethdev_driver.h
>> +++ b/lib/ethdev/ethdev_driver.h
>> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>>         uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation
>> failures */
>>   -    /** Device Ethernet link address. @see
>> rte_eth_dev_release_port() */
>> +    /**
>> +     * Device Ethernet link addresses.
>> +     * All entries are unique.
>> +     * The first entry (index zero) is the default address.
>> +     */
>>       struct rte_ether_addr *mac_addrs;
>>       /** Bitmap associating MAC addresses to pools */
>>       uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index 86ca303ab5..de25183619 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id,
>> struct rte_ether_addr *addr)
>>   int
>>   rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct
>> rte_ether_addr *addr)
>>   {
>> +    uint64_t mac_pool_sel_bk = 0;
>>       struct rte_eth_dev *dev;
>> +    uint32_t pool;
>> +    int index;
>>       int ret;
>>         RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t
>> port_id, struct rte_ether_addr *addr)
>>       if (*dev->dev_ops->mac_addr_set == NULL)
>>           return -ENOTSUP;
>>   +    /* Keep address unique in dev->data->mac_addrs[]. */
>> +    index = eth_dev_get_mac_addr_index(port_id, addr);
>> +    if (index > 0) {
>> +        /* Remove address in dev data structure */
>> +        mac_pool_sel_bk = dev->data->mac_pool_sel[index];
>> +        ret = rte_eth_dev_mac_addr_remove(port_id, addr);
>> +        if (ret < 0) {
>> +            RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address
>> from the rest of list.\n",
>> +                       port_id);
>> +            return ret;
>> +        }
>> +    }
>>       ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
>>       if (ret < 0)
>> -        return ret;
>> +        goto out;
>>         /* Update default address in NIC data structure */
>>       rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>>         return 0;
>> -}
>>   +out:
>> +    if (index > 0) {
>> +        pool = 0;
>> +        do {
>> +            if (mac_pool_sel_bk & UINT64_C(1)) {
>> +                if (rte_eth_dev_mac_addr_add(port_id, addr,
>> +                                 pool) != 0)
>> +                    RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool
>> id(%u) in port %u.\n",
>> +                               pool, port_id);
>> +            }
>> +            mac_pool_sel_bk >>= 1;
>> +            pool++;
>> +        } while (mac_pool_sel_bk != 0);
>> +    }
>> +
>> +    return ret;
>> +}
>>     /*
>>    * Returns index into MAC address array of addr. Use
>> 00:00:00:00:00:00 to find
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index d22de196db..2456153457 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>>     /**
>>    * Set the default MAC address.
>> + * It replaces the address at index 0 of the MAC address list.
>> + * If the address was already in the MAC address list,
>> + * it is removed from the rest of the list.
>>    *
>>    * @param port_id
>>    *   The port identifier of the Ethernet device.


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
  2023-05-15 20:52  3%         ` Mattias Rönnblom
@ 2023-05-16 13:08  0%           ` Jerin Jacob
  2023-05-17  7:16  3%             ` Mattias Rönnblom
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-05-16 13:08 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Mattias Rönnblom, jerinj, dev, Morten Brørup

On Tue, May 16, 2023 at 2:22 AM Mattias Rönnblom <hofors@lysator.liu.se> wrote:
>
> On 2023-05-15 14:38, Jerin Jacob wrote:
> > On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> >>
> >> On 2023-05-12 13:59, Jerin Jacob wrote:
> >>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
> >>> <mattias.ronnblom@ericsson.com> wrote:
> >>>>
> >>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
> >>>> dequeue only when the burst size is compile-time constant (and equal
> >>>> to one).
> >>>>
> >>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>>>
> >>>> ---
> >>>>
> >>>> v3: Actually include the change v2 claimed to contain.
> >>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
> >>>>       application is compiled with -pedantic. (Morten Brørup)
> >>>> ---
> >>>>    lib/eventdev/rte_eventdev.h | 4 ++--
> >>>>    1 file changed, 2 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> >>>> index a90e23ac8b..a471caeb6d 100644
> >>>> --- a/lib/eventdev/rte_eventdev.h
> >>>> +++ b/lib/eventdev/rte_eventdev.h
> >>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> >>>>            * Allow zero cost non burst mode routine invocation if application
> >>>>            * requests nb_events as const one
> >>>>            */
> >>>> -       if (nb_events == 1)
> >>>> +       if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
> >>>
> >>> "Why" part is not clear from the commit message. Is this to avoid
> >>> nb_events read if it is built-in const.
> >>
> >> The __builtin_constant_p() is introduced to avoid having the compiler
> >> generate a conditional branch and two different code paths in case
> >> nb_elem is a run-time variable.
> >>
> >> In particular, this matters if nb_elems is run-time variable and varies
> >> between 1 and some larger value.
> >>
> >> I should have mention this in the commit message.
> >>
> >> A very slight performance improvement. It also makes the code better
> >> match the comment, imo. Zero cost for const one enqueues, but no impact
> >> non-compile-time-constant-length enqueues.
> >>
> >> Feel free to ignore.
> >
> >
> > I did some performance comparison of the patch.
> > A low-end ARM machine shows a 0.7% drop with the single event case. No
> > regression seen with high-end ARM cores with the single event case.
> >
> > IMO, optimizing the check for burst mode(the new patch) may not show
> > any real improvement as the cost is divided by number of event.
> > Whereas optimizing the check for single event case(The current code)
> > shows better performance with single event case and no regression
> > with burst mode as cost is divided by number of events.
>
> I ran some tests on an AMD Zen 3 with DSW.
> In the below tests the enqueue burst size is not compile-time constant.
>
> Enqueue burst size      Performance improvement
> Run-time constant 1     ~5%
> Run-time constant 2     ~0%
> Run-time variable 1-2   ~9%
> Run-time variable 1-16  ~0%
>
> The run-time variable enqueue sizes are randomly (uniformly) distributed in
> the specified range.
>
> The first result may come as a surprise. The benchmark is using
> RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type
> in most apps). The single-event enqueue function only exists in a
> generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent).
> I suspect that is the reason for the performance improvement.
>
> This effect is large-enough to make it somewhat beneficial (+~1%) to use
> run-time variable single-event enqueue compared to keeping the burst
> size compile-time constant.

# Interesting. Could you share the testeventdev command you used to test it?
# From a quick glance at the DSW code, the following change (or something
similar) could be added.
Not sure whether such a change in the DSW driver makes a difference or not?


diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index e84b65d99f..455470997b 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -1251,7 +1251,7 @@ dsw_port_flush_out_buffers(struct dsw_evdev
*dsw, struct dsw_port *source_port)
 uint16_t
 dsw_event_enqueue(void *port, const struct rte_event *ev)
 {
-       return dsw_event_enqueue_burst(port, ev, unlikely(ev == NULL) ? 0 : 1);
+       return dsw_event_enqueue_burst(port, ev, 1);
 }

 static __rte_always_inline uint16_t
@@ -1340,7 +1340,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port
*source_port,
        return (num_non_release + num_release);
 }

-uint16_t
+inline uint16_t
 dsw_event_enqueue_burst(void *port, const struct rte_event events[],
                        uint16_t events_len)
 {

# I am testing with a command like this: "/build/app/dpdk-test-eventdev
-l 0-23 -a 0002:0e:00.0 -- --test=perf_atq --plcores 1 --wlcores 8
--stlist p --nb_pkts=10000000000"

>
> The performance gain is counted toward both enqueue and dequeue costs
> (+benchmark app overhead), so it is an under-estimate if you see this as an
> enqueue performance improvement.
>
> > If you agree, then we can skip this patch.
> >
>
> I have no strong opinion if this should be included or not.
>
> If it were up to me, I would drop the single-enqueue special case handling
> altogether in the next ABI update.

That's a reasonable path. If we are willing to push a patch, we can
test it and give feedback.
Or, in our spare time, we can do that as well.

>
> >
> >>
> >>> If so, check should be following. Right?
> >>>
> >>> if (__extension__((__builtin_constant_p(nb_events)) && nb_events == 1)
> >>> || nb_events  == 1)
> >>>
> >>> At least, It was my original intention in the code.
> >>>
> >>>
> >>>
> >>>>                   return (fp_ops->enqueue)(port, ev);
> >>>>           else
> >>>>                   return fn(port, ev, nb_events);
> >>>> @@ -2200,7 +2200,7 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> >>>>            * Allow zero cost non burst mode routine invocation if application
> >>>>            * requests nb_events as const one
> >>>>            */
> >>>> -       if (nb_events == 1)
> >>>> +       if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
> >>>>                   return (fp_ops->dequeue)(port, ev, timeout_ticks);
> >>>>           else
> >>>>                   return (fp_ops->dequeue_burst)(port, ev, nb_events,
> >>>> --
> >>>> 2.34.1
> >>>>
> >>

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
  2023-05-16 11:45  0%           ` Maxime Coquelin
@ 2023-05-16 12:07  0%             ` Eelco Chaudron
  0 siblings, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-05-16 12:07 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: David Marchand, chenbo.xia, dev



On 16 May 2023, at 13:45, Maxime Coquelin wrote:

> On 5/16/23 13:36, Eelco Chaudron wrote:
>>
>>
>> On 16 May 2023, at 12:12, David Marchand wrote:
>>
>>> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>>>> On 10 May 2023, at 13:44, David Marchand wrote:
>>>
>>> [snip]
>>>
>>>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>>>>                  vsocket->path = NULL;
>>>>>>          }
>>>>>>
>>>>>> +       if (vsocket && vsocket->alloc_notify_ops) {
>>>>>> +#pragma GCC diagnostic push
>>>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>>>> +               free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>>>> +#pragma GCC diagnostic pop
>>>>>> +               vsocket->notify_ops = NULL;
>>>>>> +       }
>>>>>
>>>>> Rather than select the behavior based on a boolean (and here force the
>>>>> compiler to close its eyes), I would instead add a non const pointer
>>>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>>>
>>>> Good idea, I will make the change in v3.
>>>
>>> Feel free to use a better name for this field :-).
>>>
>>>>
>>>>>> +
>>>>>>          if (vsocket) {
>>>>>>                  free(vsocket);
>>>>>>                  vsocket = NULL;
>>>
>>> [snip]
>>>
>>>>>> +       /*
>>>>>> +        * Although the ops structure is a const structure, we do need to
>>>>>> +        * override the guest_notify operation. This is because with the
>>>>>> +        * previous APIs it was "reserved" and if any garbage value was passed,
>>>>>> +        * it could crash the application.
>>>>>> +        */
>>>>>> +       if (ops && !ops->guest_notify) {
>>>>>
>>>>> Hum, as described in the comment above, I don't think we should look
>>>>> at ops->guest_notify value at all.
>>>>> Checking ops != NULL should be enough.
>>>>
>>>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>>>
>>>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>>>
>>> Hum, I don't understand my comment either o_O'.
>>> Too many days off... or maybe my evil twin took over the keyboard.
>>>
>>>
>>>>
>>>>>> +               struct rte_vhost_device_ops *new_ops;
>>>>>> +
>>>>>> +               new_ops = malloc(sizeof(*new_ops));
>>>>>
>>>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>>>> I am unclear of the impact though.
>>>>
>>>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>>>
>>>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>>>
>>> Determining current numa is doable, via 'ops'
>>> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
>>> numa_realloc().
>>> The problem is how to allocate on this numa with the libc allocator
>>> for which I have no idea...
>>> We could go with the dpdk allocator (again, like numa_realloc()).
>>>
>>>
>>> In practice, the passed ops will probably be from a const variable in
>>> the program .data section (for which I think fields are set to 0
>>> unless explicitly initialised), or a memset() will be called for a
>>> dynamic allocation from good citizens.
>>> So we can probably live with the current proposal.
>>> Plus, this is only for one release, since in 23.11 with the ABI bump,
>>> we will drop this compat code.
>>>
>>> Maxime, Chenbo, what do you think?
>>
>> Wait for their response, but for now I assume we can just keep the numa unaware malloc().
>
> Let's keep it as is as we'll get rid of it in 23.11.

Thanks for confirming.

>>>
>>> [snip]
>>>
>>>>>
>>>>> But putting indentation aside, is this change equivalent?
>>>>> -               if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>>>> -                                       (vq->callfd >= 0)) ||
>>>>> -                               unlikely(!signalled_used_valid)) {
>>>>> +               if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>>>> +                               unlikely(!signalled_used_valid)) &&
>>>>> +                               vq->callfd >= 0) {
>>>>
>>>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>>>
>>> I think this should be a separate fix.
>>
>> ACK, will add a separate patch in this series to fix it.
>
> I also caught & fixed it while implementing my VDUSE series [0].
> You can pick it in your series, and I will rebase my series on top of
> it.

Thanks for the details. I'll include your patch in my series.

I will send out a new revision soon (after testing the changes with OVS).

Thanks,

Eelco

> Thanks,
> Maxime
>
> [0]: https://gitlab.com/mcoquelin/dpdk-next-virtio/-/commit/b976e1f226db5c09834148847d994045eb89be93
>
>
>>
>>>
>>>>
>>>>>> +                       vhost_vring_inject_irq(dev, vq);
>>>
>>>
>>> -- 
>>> David Marchand
>>


^ permalink raw reply	[relevance 0%]

* Re: [PATCH V8] ethdev: fix one address occupies two entries in MAC addrs
  @ 2023-05-16 11:47  0%   ` lihuisong (C)
  2023-05-16 14:13  0%     ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: lihuisong (C) @ 2023-05-16 11:47 UTC (permalink / raw)
  To: ferruh.yigit
  Cc: thomas, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen, dev

Hi Ferruh,

There is no result from the techboard yet.
How should we deal with this problem next?

/Huisong

On 2023/2/2 20:36, Huisong Li wrote:
> The dev->data->mac_addrs[0] will be changed to a new MAC address when
> applications modify the default MAC address by .mac_addr_set(). However,
> if the new default one has been added as a non-default MAC address by
> .mac_addr_add(), the .mac_addr_set() doesn't remove it from the mac_addrs
> list. As a result, one MAC address occupies two entries in the list. Like:
> add(MAC1)
> add(MAC2)
> add(MAC3)
> add(MAC4)
> set_default(MAC3)
> default=MAC3, the rest of the list=MAC1, MAC2, MAC3, MAC4
> Note: MAC3 occupies two entries.
>
> In addition, some PMDs, such as i40e, ice, hns3 and so on, do remove the
> old default MAC when setting the default MAC. If the user then does
> set_default(MAC5), the mac_addrs list becomes default=MAC5, filters=(MAC1,
> MAC2, MAC3, MAC4). At this moment, the user can still see MAC3 in the list,
> but packets with MAC3 aren't actually received by the PMD.
>
> So we need to ensure that the new default address is removed from the rest of
> the list if the address was already in the list.
>
> Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
> Cc: stable@dpdk.org
>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
> v8: fix some comments.
> v7: add announcement in the release notes and document this behavior.
> v6: fix commit log and some code comments.
> v5:
>   - merge the second patch into the first patch.
>   - add error log when rollback failed.
> v4:
>    - fix broken in the patchwork
> v3:
>    - first explicitly remove the non-default MAC, then set default one.
>    - document default and non-default MAC address
> v2:
>    - fixed commit log.
> ---
>   doc/guides/rel_notes/release_23_03.rst |  6 +++++
>   lib/ethdev/ethdev_driver.h             |  6 ++++-
>   lib/ethdev/rte_ethdev.c                | 35 ++++++++++++++++++++++++--
>   lib/ethdev/rte_ethdev.h                |  3 +++
>   4 files changed, 47 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
> index 84b112a8b1..1c9b9912c2 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -105,6 +105,12 @@ API Changes
>      Also, make sure to start the actual text at the margin.
>      =======================================================
>   
> +* ethdev: ensured all entries in MAC address list are uniques.
> +  When setting a default MAC address with the function
> +  ``rte_eth_dev_default_mac_addr_set``,
> +  the address is now removed from the rest of the address list
> +  in order to ensure it is only at index 0 of the list.
> +
>   
>   ABI Changes
>   -----------
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index dde3ec84ef..3994c61b86 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -117,7 +117,11 @@ struct rte_eth_dev_data {
>   
>   	uint64_t rx_mbuf_alloc_failed; /**< Rx ring mbuf allocation failures */
>   
> -	/** Device Ethernet link address. @see rte_eth_dev_release_port() */
> +	/**
> +	 * Device Ethernet link addresses.
> +	 * All entries are unique.
> +	 * The first entry (index zero) is the default address.
> +	 */
>   	struct rte_ether_addr *mac_addrs;
>   	/** Bitmap associating MAC addresses to pools */
>   	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 86ca303ab5..de25183619 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -4498,7 +4498,10 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr)
>   int
>   rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
>   {
> +	uint64_t mac_pool_sel_bk = 0;
>   	struct rte_eth_dev *dev;
> +	uint32_t pool;
> +	int index;
>   	int ret;
>   
>   	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> @@ -4517,16 +4520,44 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr)
>   	if (*dev->dev_ops->mac_addr_set == NULL)
>   		return -ENOTSUP;
>   
> +	/* Keep address unique in dev->data->mac_addrs[]. */
> +	index = eth_dev_get_mac_addr_index(port_id, addr);
> +	if (index > 0) {
> +		/* Remove address in dev data structure */
> +		mac_pool_sel_bk = dev->data->mac_pool_sel[index];
> +		ret = rte_eth_dev_mac_addr_remove(port_id, addr);
> +		if (ret < 0) {
> +			RTE_ETHDEV_LOG(ERR, "Cannot remove the port %u address from the rest of list.\n",
> +				       port_id);
> +			return ret;
> +		}
> +	}
>   	ret = (*dev->dev_ops->mac_addr_set)(dev, addr);
>   	if (ret < 0)
> -		return ret;
> +		goto out;
>   
>   	/* Update default address in NIC data structure */
>   	rte_ether_addr_copy(addr, &dev->data->mac_addrs[0]);
>   
>   	return 0;
> -}
>   
> +out:
> +	if (index > 0) {
> +		pool = 0;
> +		do {
> +			if (mac_pool_sel_bk & UINT64_C(1)) {
> +				if (rte_eth_dev_mac_addr_add(port_id, addr,
> +							     pool) != 0)
> +					RTE_ETHDEV_LOG(ERR, "failed to restore MAC pool id(%u) in port %u.\n",
> +						       pool, port_id);
> +			}
> +			mac_pool_sel_bk >>= 1;
> +			pool++;
> +		} while (mac_pool_sel_bk != 0);
> +	}
> +
> +	return ret;
> +}
>   
>   /*
>    * Returns index into MAC address array of addr. Use 00:00:00:00:00:00 to find
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d22de196db..2456153457 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -4356,6 +4356,9 @@ int rte_eth_dev_mac_addr_remove(uint16_t port_id,
>   
>   /**
>    * Set the default MAC address.
> + * It replaces the address at index 0 of the MAC address list.
> + * If the address was already in the MAC address list,
> + * it is removed from the rest of the list.
>    *
>    * @param port_id
>    *   The port identifier of the Ethernet device.

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
  2023-05-16 11:36  0%         ` Eelco Chaudron
@ 2023-05-16 11:45  0%           ` Maxime Coquelin
  2023-05-16 12:07  0%             ` Eelco Chaudron
  2023-05-17  9:18  0%           ` Eelco Chaudron
  1 sibling, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-05-16 11:45 UTC (permalink / raw)
  To: Eelco Chaudron, David Marchand; +Cc: chenbo.xia, dev



On 5/16/23 13:36, Eelco Chaudron wrote:
> 
> 
> On 16 May 2023, at 12:12, David Marchand wrote:
> 
>> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>>> On 10 May 2023, at 13:44, David Marchand wrote:
>>
>> [snip]
>>
>>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>>>                  vsocket->path = NULL;
>>>>>          }
>>>>>
>>>>> +       if (vsocket && vsocket->alloc_notify_ops) {
>>>>> +#pragma GCC diagnostic push
>>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>>> +               free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>>> +#pragma GCC diagnostic pop
>>>>> +               vsocket->notify_ops = NULL;
>>>>> +       }
>>>>
>>>> Rather than select the behavior based on a boolean (and here force the
>>>> compiler to close its eyes), I would instead add a non const pointer
>>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>>
>>> Good idea, I will make the change in v3.
>>
>> Feel free to use a better name for this field :-).
>>
>>>
>>>>> +
>>>>>          if (vsocket) {
>>>>>                  free(vsocket);
>>>>>                  vsocket = NULL;
>>
>> [snip]
>>
>>>>> +       /*
>>>>> +        * Although the ops structure is a const structure, we do need to
>>>>> +        * override the guest_notify operation. This is because with the
>>>>> +        * previous APIs it was "reserved" and if any garbage value was passed,
>>>>> +        * it could crash the application.
>>>>> +        */
>>>>> +       if (ops && !ops->guest_notify) {
>>>>
>>>> Hum, as described in the comment above, I don't think we should look
>>>> at ops->guest_notify value at all.
>>>> Checking ops != NULL should be enough.
>>>
>>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>>
>>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>>
>> Hum, I don't understand my comment either o_O'.
>> Too many days off... or maybe my evil twin took over the keyboard.
>>
>>
>>>
>>>>> +               struct rte_vhost_device_ops *new_ops;
>>>>> +
>>>>> +               new_ops = malloc(sizeof(*new_ops));
>>>>
>>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>>> I am unclear of the impact though.
>>>
>>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>>
>>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>>
>> Determining current numa is doable, via 'ops'
>> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
>> numa_realloc().
>> The problem is how to allocate on this numa with the libc allocator
>> for which I have no idea...
>> We could go with the dpdk allocator (again, like numa_realloc()).
>>
>>
>> In practice, the passed ops will probably be from a const variable in
>> the program .data section (for which I think fields are set to 0
>> unless explicitly initialised), or a memset() will be called for a
>> dynamic allocation from good citizens.
>> So we can probably live with the current proposal.
>> Plus, this is only for one release, since in 23.11 with the ABI bump,
>> we will drop this compat code.
>>
>> Maxime, Chenbo, what do you think?
> 
> Wait for their response, but for now I assume we can just keep the numa unaware malloc().

Let's keep it as is as we'll get rid of it in 23.11.

>>
>> [snip]
>>
>>>>
>>>> But putting indentation aside, is this change equivalent?
>>>> -               if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>>> -                                       (vq->callfd >= 0)) ||
>>>> -                               unlikely(!signalled_used_valid)) {
>>>> +               if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>>> +                               unlikely(!signalled_used_valid)) &&
>>>> +                               vq->callfd >= 0) {
>>>
>>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>>
>> I think this should be a separate fix.
> 
> ACK, will add a separate patch in this series to fix it.

I also caught & fixed it while implementing my VDUSE series [0].
You can pick it in your series, and I will rebase my series on top of
it.

Thanks,
Maxime

[0]: https://gitlab.com/mcoquelin/dpdk-next-virtio/-/commit/b976e1f226db5c09834148847d994045eb89be93


> 
>>
>>>
>>>>> +                       vhost_vring_inject_irq(dev, vq);
>>
>>
>> -- 
>> David Marchand
> 


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
  2023-05-16 10:12  3%       ` David Marchand
@ 2023-05-16 11:36  0%         ` Eelco Chaudron
  2023-05-16 11:45  0%           ` Maxime Coquelin
  2023-05-17  9:18  0%           ` Eelco Chaudron
  0 siblings, 2 replies; 200+ results
From: Eelco Chaudron @ 2023-05-16 11:36 UTC (permalink / raw)
  To: David Marchand; +Cc: maxime.coquelin, chenbo.xia, dev



On 16 May 2023, at 12:12, David Marchand wrote:

> On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
>> On 10 May 2023, at 13:44, David Marchand wrote:
>
> [snip]
>
>>>> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
>>>>                 vsocket->path = NULL;
>>>>         }
>>>>
>>>> +       if (vsocket && vsocket->alloc_notify_ops) {
>>>> +#pragma GCC diagnostic push
>>>> +#pragma GCC diagnostic ignored "-Wcast-qual"
>>>> +               free((struct rte_vhost_device_ops *)vsocket->notify_ops);
>>>> +#pragma GCC diagnostic pop
>>>> +               vsocket->notify_ops = NULL;
>>>> +       }
>>>
>>> Rather than select the behavior based on a boolean (and here force the
>>> compiler to close its eyes), I would instead add a non const pointer
>>> to ops (let's say alloc_notify_ops) in vhost_user_socket.
>>> The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>>
>> Good idea, I will make the change in v3.
>
> Feel free to use a better name for this field :-).
>
>>
>>>> +
>>>>         if (vsocket) {
>>>>                 free(vsocket);
>>>>                 vsocket = NULL;
>
> [snip]
>
>>>> +       /*
>>>> +        * Although the ops structure is a const structure, we do need to
>>>> +        * override the guest_notify operation. This is because with the
>>>> +        * previous APIs it was "reserved" and if any garbage value was passed,
>>>> +        * it could crash the application.
>>>> +        */
>>>> +       if (ops && !ops->guest_notify) {
>>>
>>> Hum, as described in the comment above, I don't think we should look
>>> at ops->guest_notify value at all.
>>> Checking ops != NULL should be enough.
>>
>> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>>
>> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.
>
> Hum, I don't understand my comment either o_O'.
> Too many days off... or maybe my evil twin took over the keyboard.
>
>
>>
>>>> +               struct rte_vhost_device_ops *new_ops;
>>>> +
>>>> +               new_ops = malloc(sizeof(*new_ops));
>>>
>>> Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
>>> I am unclear of the impact though.
>>
>> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>>
>> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.
>
> Determining current numa is doable, via 'ops'
> get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
> numa_realloc().
> The problem is how to allocate on this numa with the libc allocator
> for which I have no idea...
> We could go with the dpdk allocator (again, like numa_realloc()).
>
>
> In practice, the passed ops will probably be from a const variable in
> the program .data section (for which I think fields are set to 0
> unless explicitly initialised), or a memset() will be called for a
> dynamic allocation from good citizens.
> So we can probably live with the current proposal.
> Plus, this is only for one release, since in 23.11 with the ABI bump,
> we will drop this compat code.
>
> Maxime, Chenbo, what do you think?

Wait for their response, but for now I assume we can just keep the numa unaware malloc().

>
> [snip]
>
>>>
>>> But putting indentation aside, is this change equivalent?
>>> -               if ((vhost_need_event(vhost_used_event(vq), new, old) &&
>>> -                                       (vq->callfd >= 0)) ||
>>> -                               unlikely(!signalled_used_valid)) {
>>> +               if ((vhost_need_event(vhost_used_event(vq), new, old) ||
>>> +                               unlikely(!signalled_used_valid)) &&
>>> +                               vq->callfd >= 0) {
>>
>> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)
>
> I think this should be a separate fix.

ACK, will add a separate patch in this series to fix it.

>
>>
>>>> +                       vhost_vring_inject_irq(dev, vq);
>
>
> -- 
> David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [PATCH V5 0/5] app/testpmd: support multiple process attach and detach port
  @ 2023-05-16 11:27  0%   ` lihuisong (C)
  2023-05-23  0:46  0%   ` fengchengwen
  1 sibling, 0 replies; 200+ results
From: lihuisong (C) @ 2023-05-16 11:27 UTC (permalink / raw)
  To: ferruh.yigit, thomas
  Cc: dev, andrew.rybchenko, liudongdong3, huangdaode, fengchengwen

Hi Ferruh and Thomas,

Could you please take another look at this series?
This work has been in progress since August last year.

/Huisong


On 2023/1/31 11:33, Huisong Li wrote:
> This patchset fix some bugs and support attaching and detaching port
> in primary and secondary.
>
> ---
>   -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break.
>   -v4: fix a misspelling.
>   -v3:
>     #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
>        for other bus type.
>     #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
>        the probelm in patch 2/5.
>   -v2: resend due to CI unexplained failure.
>
> Huisong Li (5):
>    drivers/bus: restore driver assignment at front of probing
>    ethdev: fix skip valid port in probing callback
>    app/testpmd: check the validity of the port
>    app/testpmd: add attach and detach port for multiple process
>    app/testpmd: stop forwarding in new or destroy event
>
>   app/test-pmd/testpmd.c                   | 47 +++++++++++++++---------
>   app/test-pmd/testpmd.h                   |  1 -
>   drivers/bus/auxiliary/auxiliary_common.c |  9 ++++-
>   drivers/bus/dpaa/dpaa_bus.c              |  9 ++++-
>   drivers/bus/fslmc/fslmc_bus.c            |  8 +++-
>   drivers/bus/ifpga/ifpga_bus.c            | 12 ++++--
>   drivers/bus/pci/pci_common.c             |  9 ++++-
>   drivers/bus/vdev/vdev.c                  | 10 ++++-
>   drivers/bus/vmbus/vmbus_common.c         |  9 ++++-
>   drivers/net/bnxt/bnxt_ethdev.c           |  3 +-
>   drivers/net/bonding/bonding_testpmd.c    |  1 -
>   drivers/net/mlx5/mlx5.c                  |  2 +-
>   lib/ethdev/ethdev_driver.c               | 13 +++++--
>   lib/ethdev/ethdev_driver.h               | 12 ++++++
>   lib/ethdev/ethdev_pci.h                  |  2 +-
>   lib/ethdev/rte_class_eth.c               |  2 +-
>   lib/ethdev/rte_ethdev.c                  |  4 +-
>   lib/ethdev/rte_ethdev.h                  |  4 +-
>   lib/ethdev/version.map                   |  1 +
>   19 files changed, 114 insertions(+), 44 deletions(-)
>

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 3/3] vhost: add device op to offload the interrupt kick
  @ 2023-05-16 10:12  3%       ` David Marchand
  2023-05-16 11:36  0%         ` Eelco Chaudron
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-05-16 10:12 UTC (permalink / raw)
  To: Eelco Chaudron, maxime.coquelin, chenbo.xia; +Cc: dev

On Tue, May 16, 2023 at 10:53 AM Eelco Chaudron <echaudro@redhat.com> wrote:
> On 10 May 2023, at 13:44, David Marchand wrote:

[snip]

> >> @@ -846,6 +848,14 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
> >>                 vsocket->path = NULL;
> >>         }
> >>
> >> +       if (vsocket && vsocket->alloc_notify_ops) {
> >> +#pragma GCC diagnostic push
> >> +#pragma GCC diagnostic ignored "-Wcast-qual"
> >> +               free((struct rte_vhost_device_ops *)vsocket->notify_ops);
> >> +#pragma GCC diagnostic pop
> >> +               vsocket->notify_ops = NULL;
> >> +       }
> >
> > Rather than select the behavior based on a boolean (and here force the
> > compiler to close its eyes), I would instead add a non const pointer
> > to ops (let's say alloc_notify_ops) in vhost_user_socket.
> > The code can then unconditionnally call free(vsocket->alloc_notify_ops);
>
> Good idea, I will make the change in v3.

Feel free to use a better name for this field :-).
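
A rough sketch of the shape being suggested, with illustrative stand-in
names rather than the actual vhost structures:

#include <stdlib.h>
#include <string.h>

struct dev_ops { void (*guest_notify)(void *dev); };

struct socket_ctx {
	const struct dev_ops *notify_ops;  /* what the hot path reads */
	struct dev_ops *malloc_notify_ops; /* non-NULL only if we allocated it */
};

static int ctx_set_ops(struct socket_ctx *ctx, const struct dev_ops *ops)
{
	struct dev_ops *copy = malloc(sizeof(*copy));

	if (copy == NULL)
		return -1;
	memcpy(copy, ops, sizeof(*copy));
	ctx->malloc_notify_ops = copy; /* writable, owned pointer */
	ctx->notify_ops = copy;        /* exposed as const to readers */
	return 0;
}

static void ctx_free_ops(struct socket_ctx *ctx)
{
	free(ctx->malloc_notify_ops);  /* no const cast, no pragma needed */
	ctx->malloc_notify_ops = NULL;
	ctx->notify_ops = NULL;
}

Since free(NULL) is a no-op, the teardown path also needs no extra boolean.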

>
> >> +
> >>         if (vsocket) {
> >>                 free(vsocket);
> >>                 vsocket = NULL;

[snip]

> >> +       /*
> >> +        * Although the ops structure is a const structure, we do need to
> >> +        * override the guest_notify operation. This is because with the
> >> +        * previous APIs it was "reserved" and if any garbage value was passed,
> >> +        * it could crash the application.
> >> +        */
> >> +       if (ops && !ops->guest_notify) {
> >
> > Hum, as described in the comment above, I don't think we should look
> > at ops->guest_notify value at all.
> > Checking ops != NULL should be enough.
>
> Not sure I get you here. If the guest_notify passed by the user is NULL, it means the previously ‘reserved[1]’ field is NULL, so we do not need to use a new structure.
>
> I guess your comment would be true if we would introduce a new field in the data structure, not replacing a reserved one.

Hum, I don't understand my comment either o_O'.
Too many days off... or maybe my evil twin took over the keyboard.


>
> >> +               struct rte_vhost_device_ops *new_ops;
> >> +
> >> +               new_ops = malloc(sizeof(*new_ops));
> >
> > Strictly speaking, we lose the numa affinity of "ops" by calling malloc.
> > I am unclear of the impact though.
>
> Don’t think there is a portable API that we can use to determine the NUMA for the ops memory and then allocate this on the same numa?
>
> Any thoughts or ideas on how to solve this? I hope most people will memset() the ops structure and the reserved[1] part is zero, but it might be a problem in the future if more extensions get added.

Determining current numa is doable, via 'ops'
get_mempolicy(MPOL_F_NODE | MPOL_F_ADDR), like what is done for vq in
numa_realloc().
The problem is how to allocate on this numa with the libc allocator
for which I have no idea...
We could go with the dpdk allocator (again, like numa_realloc()).
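
A rough, untested sketch of that direction; because it uses the DPDK
allocator, the matching release call would be rte_free() rather than free():

#include <string.h>
#include <stdint.h>
#include <numaif.h>      /* get_mempolicy(), link with -lnuma */
#include <rte_malloc.h>
#include <rte_memory.h>
#include <rte_vhost.h>

static struct rte_vhost_device_ops *
copy_ops_same_node(const struct rte_vhost_device_ops *ops)
{
	struct rte_vhost_device_ops *copy;
	int node;

	/* Ask the kernel which NUMA node backs the memory "ops" points to. */
	if (get_mempolicy(&node, NULL, 0, (void *)(uintptr_t)ops,
			  MPOL_F_NODE | MPOL_F_ADDR) < 0)
		node = SOCKET_ID_ANY;

	copy = rte_malloc_socket(NULL, sizeof(*copy), 0, node);
	if (copy != NULL)
		memcpy(copy, ops, sizeof(*copy));

	return copy; /* to be released with rte_free() later */
}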


In practice, the passed ops will probably be from a const variable in
the program .data section (for which I think fields are set to 0
unless explicitly initialised), or a memset() will be called for a
dynamic allocation from good citizens.
So we can probably live with the current proposal.
Plus, this is only for one release, since in 23.11 with the ABI bump,
we will drop this compat code.

Maxime, Chenbo, what do you think?


[snip]

> >
> > But putting indentation aside, is this change equivalent?
> > -               if ((vhost_need_event(vhost_used_event(vq), new, old) &&
> > -                                       (vq->callfd >= 0)) ||
> > -                               unlikely(!signalled_used_valid)) {
> > +               if ((vhost_need_event(vhost_used_event(vq), new, old) ||
> > +                               unlikely(!signalled_used_valid)) &&
> > +                               vq->callfd >= 0) {
>
> They are not equal, but in the past eventfd_write() should also not have been called with callfd < 0, guess this was an existing bug ;)

I think this should be a separate fix.

>
> >> +                       vhost_vring_inject_irq(dev, vq);


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* [PATCH v1 5/7] ethdev: add GENEVE TLV option modification support
  @ 2023-05-16  6:37  3% ` Michael Baum
    1 sibling, 0 replies; 200+ results
From: Michael Baum @ 2023-05-16  6:37 UTC (permalink / raw)
  To: dev; +Cc: Ori Kam, Aman Singh, Yuying Zhang, Ferruh Yigit, Thomas Monjalon

Add modify field support for GENEVE option fields:
 - "RTE_FLOW_FIELD_GENEVE_OPT_TYPE"
 - "RTE_FLOW_FIELD_GENEVE_OPT_CLASS"
 - "RTE_FLOW_FIELD_GENEVE_OPT_DATA"

Each GENEVE TLV option is identified by both its "class" and "type", so
2 new fields were added to the "rte_flow_action_modify_data" structure to
help specify which option to modify.

To make room for those 2 new fields, the "level" field is reduced to
"uint8_t", which is more than enough for the encapsulation level.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
 app/test-pmd/cmdline_flow.c            | 48 +++++++++++++++++++++++-
 doc/guides/prog_guide/rte_flow.rst     | 12 ++++++
 doc/guides/rel_notes/release_23_07.rst |  3 ++
 lib/ethdev/rte_flow.h                  | 51 +++++++++++++++++++++++++-
 4 files changed, 112 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 58939ec321..8c1dea53c0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -636,11 +636,15 @@ enum index {
 	ACTION_MODIFY_FIELD_DST_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_DST_LEVEL,
 	ACTION_MODIFY_FIELD_DST_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ACTION_MODIFY_FIELD_SRC_TYPE_VALUE,
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
 	ACTION_MODIFY_FIELD_SRC_LEVEL_VALUE,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -854,7 +858,9 @@ static const char *const modify_field_ids[] = {
 	"ipv4_ecn", "ipv6_ecn", "gtp_psc_qfi", "meter_color",
 	"ipv6_proto",
 	"flex_item",
-	"hash_result", NULL
+	"hash_result",
+	"geneve_opt_type", "geneve_opt_class", "geneve_opt_data",
+	NULL
 };
 
 static const char *const meter_colors[] = {
@@ -2295,6 +2301,8 @@ static const enum index next_action_sample[] = {
 
 static const enum index action_modify_field_dst[] = {
 	ACTION_MODIFY_FIELD_DST_LEVEL,
+	ACTION_MODIFY_FIELD_DST_TYPE_ID,
+	ACTION_MODIFY_FIELD_DST_CLASS_ID,
 	ACTION_MODIFY_FIELD_DST_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_TYPE,
 	ZERO,
@@ -2302,6 +2310,8 @@ static const enum index action_modify_field_dst[] = {
 
 static const enum index action_modify_field_src[] = {
 	ACTION_MODIFY_FIELD_SRC_LEVEL,
+	ACTION_MODIFY_FIELD_SRC_TYPE_ID,
+	ACTION_MODIFY_FIELD_SRC_CLASS_ID,
 	ACTION_MODIFY_FIELD_SRC_OFFSET,
 	ACTION_MODIFY_FIELD_SRC_VALUE,
 	ACTION_MODIFY_FIELD_SRC_POINTER,
@@ -6388,6 +6398,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_DST_TYPE_ID] = {
+		.name = "dst_type_id",
+		.help = "destination field type ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					dst.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_DST_CLASS_ID] = {
+		.name = "dst_class",
+		.help = "destination field class ID",
+		.next = NEXT(action_modify_field_dst,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     dst.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_DST_OFFSET] = {
 		.name = "dst_offset",
 		.help = "destination field bit offset",
@@ -6423,6 +6451,24 @@ static const struct token token_list[] = {
 		.call = parse_vc_modify_field_level,
 		.comp = comp_none,
 	},
+	[ACTION_MODIFY_FIELD_SRC_TYPE_ID] = {
+		.name = "src_type_id",
+		.help = "source field type ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct rte_flow_action_modify_field,
+					src.type)),
+		.call = parse_vc_conf,
+	},
+	[ACTION_MODIFY_FIELD_SRC_CLASS_ID] = {
+		.name = "src_class",
+		.help = "source field class ID",
+		.next = NEXT(action_modify_field_src,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_action_modify_field,
+					     src.class_id)),
+		.call = parse_vc_conf,
+	},
 	[ACTION_MODIFY_FIELD_SRC_OFFSET] = {
 		.name = "src_offset",
 		.help = "source field bit offset",
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 25b57bf86d..cd38f0de46 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2937,6 +2937,14 @@ as well as any tag element in the tag array:
 For the tag array (in case of multiple tags are supported and present)
 ``level`` translates directly into the array index.
 
+``type`` is used to specify (along with ``class_id``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
+``class_id`` is used to specify (along with ``type``) the Geneve option which
+is being modified.
+This field is relevant only for ``RTE_FLOW_FIELD_GENEVE_OPT_XXXX`` type.
+
 ``flex_handle`` is used to specify the flex item pointer which is being
 modified. ``flex_handle`` and ``level`` are mutually exclusive.
 
@@ -2994,6 +3002,10 @@ value as sequence of bytes {xxx, xxx, 0x85, xxx, xxx, xxx}.
    +-----------------+----------------------------------------------------------+
    | ``level``       | encapsulation level of a packet field or tag array index |
    +-----------------+----------------------------------------------------------+
+   | ``type``        | geneve option type                                       |
+   +-----------------+----------------------------------------------------------+
+   | ``class_id``    | geneve option class ID                                   |
+   +-----------------+----------------------------------------------------------+
    | ``flex_handle`` | flex item handle of a packet field                       |
    +-----------------+----------------------------------------------------------+
    | ``offset``      | number of bits to skip at the beginning                  |
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..ce1755096f 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -84,6 +84,9 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* The ``level`` field in experimental structure
+  ``struct rte_flow_action_modify_data`` was reduced to 8 bits.
+
 
 ABI Changes
 -----------
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 713ba8b65c..b82eb0c0a8 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3773,6 +3773,9 @@ enum rte_flow_field_id {
 	RTE_FLOW_FIELD_IPV6_PROTO,	/**< IPv6 next header. */
 	RTE_FLOW_FIELD_FLEX_ITEM,	/**< Flex item. */
 	RTE_FLOW_FIELD_HASH_RESULT,	/**< Hash result. */
+	RTE_FLOW_FIELD_GENEVE_OPT_TYPE,	/**< GENEVE option type */
+	RTE_FLOW_FIELD_GENEVE_OPT_CLASS,/**< GENEVE option class */
+	RTE_FLOW_FIELD_GENEVE_OPT_DATA	/**< GENEVE option data */
 };
 
 /**
@@ -3788,7 +3791,53 @@ struct rte_flow_action_modify_data {
 		struct {
 			/** Encapsulation level or tag index or flex item handle. */
 			union {
-				uint32_t level;
+				struct {
+					/**
+					 * Packet encapsulation level containing
+					 * the field modify to.
+					 *
+					 * - @p 0 requests the default behavior.
+					 *   Depending on the packet type, it
+					 *   can mean outermost, innermost or
+					 *   anything in between.
+					 *
+					 *   It basically stands for the
+					 *   innermost encapsulation level
+					 *   modification can be performed on
+					 *   according to PMD and device
+					 *   capabilities.
+					 *
+					 * - @p 1 requests modification to be
+					 *   performed on the outermost packet
+					 *   encapsulation level.
+					 *
+					 * - @p 2 and subsequent values request
+					 *   modification to be performed on
+					 *   the specified inner packet
+					 *   encapsulation level, from
+					 *   outermost to innermost (lower to
+					 *   higher values).
+					 *
+					 * Values other than @p 0 are not
+					 * necessarily supported.
+					 *
+					 * For RTE_FLOW_FIELD_TAG it represents
+					 * the tag element in the tag array.
+					 */
+					uint8_t level;
+					/**
+					 * Geneve option type. relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					uint8_t type;
+					/**
+					 * Geneve option class. relevant only
+					 * for RTE_FLOW_FIELD_GENEVE_OPT_XXXX
+					 * modification type.
+					 */
+					rte_be16_t class_id;
+				};
 				struct rte_flow_item_flex_handle *flex_handle;
 			};
 			/** Number of bits to skip from a field. */
-- 
2.25.1
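
For reference, a rough, untested sketch of how an application might fill the
new fields to set a GENEVE option's type field; the class/type/value numbers
below are placeholders, not taken from this patch:

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Set the "type" field of the GENEVE TLV option identified by
 * class 0x0103 / type 0x80 to the immediate value 0x7f. */
static const struct rte_flow_action_modify_field geneve_opt_set_type = {
	.operation = RTE_FLOW_MODIFY_SET,
	.dst = {
		.field = RTE_FLOW_FIELD_GENEVE_OPT_TYPE,
		.level = 1,                   /* outermost encapsulation */
		.type = 0x80,                 /* which option: its type ... */
		.class_id = RTE_BE16(0x0103), /* ... and its class */
	},
	.src = {
		.field = RTE_FLOW_FIELD_VALUE,
		.value = { 0x7f },
	},
	.width = 8,                           /* option type is 8 bits wide */
};

This would then be referenced from an RTE_FLOW_ACTION_TYPE_MODIFY_FIELD
action in the usual way.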


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v3] eventdev: avoid non-burst shortcut for variable-size bursts
  @ 2023-05-15 20:52  3%         ` Mattias Rönnblom
  2023-05-16 13:08  0%           ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-05-15 20:52 UTC (permalink / raw)
  To: Jerin Jacob, Mattias Rönnblom; +Cc: jerinj, dev, Morten Brørup

On 2023-05-15 14:38, Jerin Jacob wrote:
> On Fri, May 12, 2023 at 6:45 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>>
>> On 2023-05-12 13:59, Jerin Jacob wrote:
>>> On Thu, May 11, 2023 at 2:00 PM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>>
>>>> Use non-burst event enqueue and dequeue calls from burst enqueue and
>>>> dequeue only when the burst size is compile-time constant (and equal
>>>> to one).
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>
>>>> ---
>>>>
>>>> v3: Actually include the change v2 claimed to contain.
>>>> v2: Wrap builtin call in __extension__, to avoid compiler warnings if
>>>>       application is compiled with -pedantic. (Morten Brørup)
>>>> ---
>>>>    lib/eventdev/rte_eventdev.h | 4 ++--
>>>>    1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>>>> index a90e23ac8b..a471caeb6d 100644
>>>> --- a/lib/eventdev/rte_eventdev.h
>>>> +++ b/lib/eventdev/rte_eventdev.h
>>>> @@ -1944,7 +1944,7 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>>>>            * Allow zero cost non burst mode routine invocation if application
>>>>            * requests nb_events as const one
>>>>            */
>>>> -       if (nb_events == 1)
>>>> +       if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>
>>> "Why" part is not clear from the commit message. Is this to avoid
>>> nb_events read if it is built-in const.
>>
>> The __builtin_constant_p() is introduced to avoid having the compiler
>> generate a conditional branch and two different code paths in case
>> nb_elem is a run-time variable.
>>
>> In particular, this matters if nb_elems is run-time variable and varies
>> between 1 and some larger value.
>>
>> I should have mention this in the commit message.
>>
>> A very slight performance improvement. It also makes the code better
>> match the comment, imo. Zero cost for const one enqueues, but no impact
>> non-compile-time-constant-length enqueues.
>>
>> Feel free to ignore.
> 
> 
> I did some performance comparison of the patch.
> A low-end ARM machine shows a 0.7% drop with the single event case. No
> regression seen with high-end ARM cores with the single event case.
> 
> IMO, optimizing the check for burst mode(the new patch) may not show
> any real improvement as the cost is divided by number of event.
> Whereas optimizing the check for single event case(The current code)
> shows better performance with single event case and no regression
> with burst mode as cost is divided by number of events.

I ran some tests on an AMD Zen 3 with DSW.

In the below tests the enqueue burst size is not compile-time constant.

Enqueue burst size      Performance improvement
Run-time constant 1     ~5%
Run-time constant 2     ~0%
Run-time variable 1-2   ~9%
Run-time variable 1-16  ~0%

The run-time variable enqueue sizes are randomly (uniformly) distributed in
the specified range.

The first result may come as a surprise. The benchmark is using 
RTE_EVENT_OP_FORWARD type events (which likely is the dominating op type 
in most apps). The single-event enqueue function only exists in a 
generic variant (i.e., no rte_event_enqueue_forward_burst() equivalent). 
I suspect that is the reason for the performance improvement.

This effect is large-enough to make it somewhat beneficial (+~1%) to use 
run-time variable single-event enqueue compared to keeping the burst 
size compile-time constant.

The performance gain is counted toward both enqueue and dequeue costs
(+benchmark app overhead), so it is an under-estimate if you see this as an
enqueue performance improvement.

> If you agree, then we can skip this patch.
>

I have no strong opinion if this should be included or not.

If it were up to me, I would drop the single-enqueue special case handling
altogether in the next ABI update.
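
For reference, a minimal standalone sketch of the folding effect discussed
above (assuming GCC/clang semantics for __builtin_constant_p; the helper
names are made up and are not the eventdev API):

#include <stdio.h>
#include <stdlib.h>

static int do_single(void)          { return 1; }
static int do_burst(unsigned int n) { return (int)n; }

static inline int enqueue(unsigned int nb_events)
{
	/* Compile-time constant 1: the condition folds to "true" and only
	 * do_single() survives inlining. Run-time variable:
	 * __builtin_constant_p() is 0 at compile time, the condition folds
	 * to "false", and only do_burst() is emitted - no branch at all. */
	if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
		return do_single();
	else
		return do_burst(nb_events);
}

int main(void)
{
	unsigned int n = (unsigned int)(rand() % 4) + 1; /* run-time value */

	printf("%d\n", enqueue(1)); /* single-event path */
	printf("%d\n", enqueue(n)); /* burst path, branch-free */
	return 0;
}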

> 
>>
>>> If so, check should be following. Right?
>>>
>>> if (__extension__((__builtin_constant_p(nb_events)) && nb_events == 1)
>>> || nb_events  == 1)
>>>
>>> At least, It was my original intention in the code.
>>>
>>>
>>>
>>>>                   return (fp_ops->enqueue)(port, ev);
>>>>           else
>>>>                   return fn(port, ev, nb_events);
>>>> @@ -2200,7 +2200,7 @@ rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>>>>            * Allow zero cost non burst mode routine invocation if application
>>>>            * requests nb_events as const one
>>>>            */
>>>> -       if (nb_events == 1)
>>>> +       if (__extension__(__builtin_constant_p(nb_events)) && nb_events == 1)
>>>>                   return (fp_ops->dequeue)(port, ev, timeout_ticks);
>>>>           else
>>>>                   return (fp_ops->dequeue_burst)(port, ev, nb_events,
>>>> --
>>>> 2.34.1
>>>>
>>

^ permalink raw reply	[relevance 3%]

* [PATCH v6 1/3] ring: fix unmatched type definition and usage
  2023-05-09  9:24  3%   ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
@ 2023-05-09  9:24  3%     ` Jie Hai
  0 siblings, 0 replies; 200+ results
From: Jie Hai @ 2023-05-09  9:24 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Konstantin Ananyev; +Cc: dev, liudongdong3

Field 'flags' of struct rte_ring is defined as int type. However,
it is used as unsigned int. To ensure consistency, change the
type of flags to unsigned int. Since these two types have the
same byte size, this change is not an ABI change.

Fixes: af75078fece3 ("first public release")

Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/ring/rte_ring_core.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 82b237091b71..1c809abeb531 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -120,7 +120,7 @@ struct rte_ring_hts_headtail {
 struct rte_ring {
 	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
 	/**< Name of the ring. */
-	int flags;               /**< Flags supplied at creation. */
+	uint32_t flags;               /**< Flags supplied at creation. */
 	const struct rte_memzone *memzone;
 			/**< Memzone, if any, containing the rte_ring */
 	uint32_t size;           /**< Size of ring. */
-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* [PATCH v6 0/3] add telemetry cmds for ring
  2023-05-09  1:29  3% ` [PATCH v5 " Jie Hai
  2023-05-09  1:29  3%   ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
@ 2023-05-09  9:24  3%   ` Jie Hai
  2023-05-09  9:24  3%     ` [PATCH v6 1/3] ring: fix unmatched type definition and usage Jie Hai
  1 sibling, 1 reply; 200+ results
From: Jie Hai @ 2023-05-09  9:24 UTC (permalink / raw)
  Cc: dev, liudongdong3

This patch set adds telemetry commands to list rings and to dump the
information of a ring by its name.

v1->v2:
1. Add space after "switch".
2. Fix wrong strlen parameter.

v2->v3:
1. Remove prefix "rte_" for static function.
2. Add Acked-by Konstantin Ananyev for PATCH 1.
3. Introduce functions to return strings instead copy strings.
4. Check pointer to memzone of ring.
5. Remove redundant variable.
6. Hold lock when access ring data.

v3->v4:
1. Update changelog according to reviews of Honnappa Nagarahalli.
2. Add Reviewed-by Honnappa Nagarahalli.
3. Correct grammar in help information.
4. Correct spell warning on "te" reported by checkpatch.pl.
5. Use ring_walk() to query ring info instead of rte_ring_lookup().
6. Fix the type definition of the flags field of rte_ring so that it matches the usage.
7. Use rte_tel_data_add_dict_uint_hex instead of rte_tel_data_add_dict_u64
   for mask and flags.

v4->v5:
1. Add Acked-by Konstantin Ananyev and Chengwen Feng.
2. Add ABI change explanation for commit message of patch 1/3.

v5->v6:
1. Add Acked-by Morten Brørup.
2. Fix incorrect reference of commit.

Jie Hai (3):
  ring: fix unmatched type definition and usage
  ring: add telemetry cmd to list rings
  ring: add telemetry cmd for ring info

 lib/ring/meson.build     |   1 +
 lib/ring/rte_ring.c      | 139 +++++++++++++++++++++++++++++++++++++++
 lib/ring/rte_ring_core.h |   2 +-
 3 files changed, 141 insertions(+), 1 deletion(-)

-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v5 1/3] ring: fix unmatched type definition and usage
  2023-05-09  6:23  0%     ` Ruifeng Wang
@ 2023-05-09  8:15  0%       ` Jie Hai
  0 siblings, 0 replies; 200+ results
From: Jie Hai @ 2023-05-09  8:15 UTC (permalink / raw)
  To: Ruifeng Wang, Honnappa Nagarahalli, Konstantin Ananyev,
	Olivier Matz, Dharmik Jayesh Thakkar
  Cc: dev, liudongdong3, nd

On 2023/5/9 14:23, Ruifeng Wang wrote:
>> -----Original Message-----
>> From: Jie Hai <haijie1@huawei.com>
>> Sent: Tuesday, May 9, 2023 9:29 AM
>> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Konstantin Ananyev
>> <konstantin.v.ananyev@yandex.ru>; Ruifeng Wang <Ruifeng.Wang@arm.com>; Gavin Hu
>> <Gavin.Hu@arm.com>; Olivier Matz <olivier.matz@6wind.com>; Dharmik Jayesh Thakkar
>> <DharmikJayesh.Thakkar@arm.com>
>> Cc: dev@dpdk.org; liudongdong3@huawei.com
>> Subject: [PATCH v5 1/3] ring: fix unmatched type definition and usage
>>
>> Field 'flags' of struct rte_ring is defined as int type. However, it is used as unsigned
>> int. To ensure consistency, change the type of flags to unsigned int. Since these two
>> types has the same byte size, this change is not an ABI change.
>>
>> Fixes: cc4b218790f6 ("ring: support configurable element size")
> 
> The change looks good.
> However, I think the fix line is not accurate.
> I suppose it fixes af75078fece3 ("first public release").
> 
Thanks for your review. Sorry for quoting the wrong commit.
This issue was indeed introduced by commit af75078fece3 ("first public 
release").
I will fix this in the next version.
>>
>> Signed-off-by: Jie Hai <haijie1@huawei.com>
>> Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
>> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>> ---
>>   lib/ring/rte_ring_core.h | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index
>> 82b237091b71..1c809abeb531 100644
>> --- a/lib/ring/rte_ring_core.h
>> +++ b/lib/ring/rte_ring_core.h
>> @@ -120,7 +120,7 @@ struct rte_ring_hts_headtail {  struct rte_ring {
>>   	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
>>   	/**< Name of the ring. */
>> -	int flags;               /**< Flags supplied at creation. */
>> +	uint32_t flags;               /**< Flags supplied at creation. */
>>   	const struct rte_memzone *memzone;
>>   			/**< Memzone, if any, containing the rte_ring */
>>   	uint32_t size;           /**< Size of ring. */
>> --
>> 2.33.0
> 
> .

^ permalink raw reply	[relevance 0%]

* RE: [PATCH v5 1/3] ring: fix unmatched type definition and usage
  2023-05-09  1:29  3%   ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
@ 2023-05-09  6:23  0%     ` Ruifeng Wang
  2023-05-09  8:15  0%       ` Jie Hai
  0 siblings, 1 reply; 200+ results
From: Ruifeng Wang @ 2023-05-09  6:23 UTC (permalink / raw)
  To: Jie Hai, Honnappa Nagarahalli, Konstantin Ananyev, Olivier Matz,
	Dharmik Jayesh Thakkar
  Cc: dev, liudongdong3, nd

> -----Original Message-----
> From: Jie Hai <haijie1@huawei.com>
> Sent: Tuesday, May 9, 2023 9:29 AM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Konstantin Ananyev
> <konstantin.v.ananyev@yandex.ru>; Ruifeng Wang <Ruifeng.Wang@arm.com>; Gavin Hu
> <Gavin.Hu@arm.com>; Olivier Matz <olivier.matz@6wind.com>; Dharmik Jayesh Thakkar
> <DharmikJayesh.Thakkar@arm.com>
> Cc: dev@dpdk.org; liudongdong3@huawei.com
> Subject: [PATCH v5 1/3] ring: fix unmatched type definition and usage
> 
> Field 'flags' of struct rte_ring is defined as int type. However, it is used as unsigned
> int. To ensure consistency, change the type of flags to unsigned int. Since these two
> types has the same byte size, this change is not an ABI change.
> 
> Fixes: cc4b218790f6 ("ring: support configurable element size")

The change looks good.
However, I think the fix line is not accurate. 
I suppose it fixes af75078fece3 ("first public release").

> 
> Signed-off-by: Jie Hai <haijie1@huawei.com>
> Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
>  lib/ring/rte_ring_core.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index
> 82b237091b71..1c809abeb531 100644
> --- a/lib/ring/rte_ring_core.h
> +++ b/lib/ring/rte_ring_core.h
> @@ -120,7 +120,7 @@ struct rte_ring_hts_headtail {  struct rte_ring {
>  	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
>  	/**< Name of the ring. */
> -	int flags;               /**< Flags supplied at creation. */
> +	uint32_t flags;               /**< Flags supplied at creation. */
>  	const struct rte_memzone *memzone;
>  			/**< Memzone, if any, containing the rte_ring */
>  	uint32_t size;           /**< Size of ring. */
> --
> 2.33.0


^ permalink raw reply	[relevance 0%]

* [PATCH v5 1/3] ring: fix unmatched type definition and usage
  2023-05-09  1:29  3% ` [PATCH v5 " Jie Hai
@ 2023-05-09  1:29  3%   ` Jie Hai
  2023-05-09  6:23  0%     ` Ruifeng Wang
  2023-05-09  9:24  3%   ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
  1 sibling, 1 reply; 200+ results
From: Jie Hai @ 2023-05-09  1:29 UTC (permalink / raw)
  To: Honnappa Nagarahalli, Konstantin Ananyev, Ruifeng Wang, Gavin Hu,
	Olivier Matz, Dharmik Thakkar
  Cc: dev, liudongdong3

Field 'flags' of struct rte_ring is defined as int type. However,
it is used as unsigned int. To ensure consistency, change the
type of flags to unsigned int. Since these two types have the
same byte size, this change is not an ABI change.

Fixes: cc4b218790f6 ("ring: support configurable element size")

Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
 lib/ring/rte_ring_core.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 82b237091b71..1c809abeb531 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -120,7 +120,7 @@ struct rte_ring_hts_headtail {
 struct rte_ring {
 	char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
 	/**< Name of the ring. */
-	int flags;               /**< Flags supplied at creation. */
+	uint32_t flags;               /**< Flags supplied at creation. */
 	const struct rte_memzone *memzone;
 			/**< Memzone, if any, containing the rte_ring */
 	uint32_t size;           /**< Size of ring. */
-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* [PATCH v5 0/3] add telemetry cmds for ring
  @ 2023-05-09  1:29  3% ` Jie Hai
  2023-05-09  1:29  3%   ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
  2023-05-09  9:24  3%   ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
  0 siblings, 2 replies; 200+ results
From: Jie Hai @ 2023-05-09  1:29 UTC (permalink / raw)
  Cc: dev, liudongdong3

This patch set supports telemetry cmd to list rings and dump information
of a ring by its name.

v1->v2:
1. Add space after "switch".
2. Fix wrong strlen parameter.

v2->v3:
1. Remove prefix "rte_" for static function.
2. Add Acked-by Konstantin Ananyev for PATCH 1.
3. Introduce functions to return strings instead of copying strings.
4. Check pointer to memzone of ring.
5. Remove redundant variable.
6. Hold the lock when accessing ring data.

v3->v4:
1. Update changelog according to reviews of Honnappa Nagarahalli.
2. Add Reviewed-by Honnappa Nagarahalli.
3. Correct grammar in help information.
4. Correct spell warning on "te" reported by checkpatch.pl.
5. Use ring_walk() to query ring info instead of rte_ring_lookup().
6. Fix the type definition of the flags field of rte_ring so that it matches the usage.
7. Use rte_tel_data_add_dict_uint_hex instead of rte_tel_data_add_dict_u64
   for mask and flags.

v4->v5:
1. Add Acked-by Konstantin Ananyev and Chengwen Feng.
2. Add ABI change explanation for commit message of patch 1/3.

Jie Hai (3):
  ring: fix unmatched type definition and usage
  ring: add telemetry cmd to list rings
  ring: add telemetry cmd for ring info

 lib/ring/meson.build     |   1 +
 lib/ring/rte_ring.c      | 139 +++++++++++++++++++++++++++++++++++++++
 lib/ring/rte_ring_core.h |   2 +-
 3 files changed, 141 insertions(+), 1 deletion(-)

-- 
2.33.0


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v2 0/3] vhost: add device op to offload the interrupt kick
  2023-04-05 12:40  3% [PATCH v2 0/3] vhost: add device op to offload the interrupt kick Eelco Chaudron
  @ 2023-05-08 13:58  0% ` Eelco Chaudron
  1 sibling, 0 replies; 200+ results
From: Eelco Chaudron @ 2023-05-08 13:58 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia; +Cc: dev



On 5 Apr 2023, at 14:40, Eelco Chaudron wrote:

> This series adds an operation callback which gets called every time the
> library wants to call eventfd_write(). This eventfd_write() call could
> result in a system call, which could potentially block the PMD thread.
>
> The callback function can decide whether it's ok to handle the
> eventfd_write() now or have the newly introduced function,
> rte_vhost_notify_guest(), called at a later time.
>
> This can be used by 3rd party applications, like OVS, to avoid system
> calls being called as part of the PMD threads.


Wondering if anyone had a chance to look at this patchset.

Cheers,

Eelco

> v2: - Used vhost_virtqueue->index to find index for operation.
>     - Aligned function name to VDUSE RFC patchset.
>     - Added error and offload statistics counter.
>     - Mark new API as experimental.
>     - Change the virtual queue spin lock to read/write spin lock.
>     - Made shared counters atomic.
>     - Add versioned rte_vhost_driver_callback_register() for
>       ABI compliance.
>
> Eelco Chaudron (3):
>       vhost: Change vhost_virtqueue access lock to a read/write one.
>       vhost: make the guest_notifications statistic counter atomic.
>       vhost: add device op to offload the interrupt kick
>
>
>  lib/eal/include/generic/rte_rwlock.h | 17 +++++
>  lib/vhost/meson.build                |  2 +
>  lib/vhost/rte_vhost.h                | 23 ++++++-
>  lib/vhost/socket.c                   | 72 ++++++++++++++++++++--
>  lib/vhost/version.map                |  9 +++
>  lib/vhost/vhost.c                    | 92 +++++++++++++++++++++-------
>  lib/vhost/vhost.h                    | 70 ++++++++++++++-------
>  lib/vhost/vhost_user.c               | 14 ++---
>  lib/vhost/virtio_net.c               | 90 +++++++++++++--------------
>  9 files changed, 288 insertions(+), 101 deletions(-)

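For readers skimming the thread, a hedged application-side sketch of the flow
described in the cover letter above. The op name guest_notify, its return-value
semantics and the rte_vhost_notify_guest() prototype are assumptions drawn from
the description, not taken verbatim from the patches; a real integration would
also need proper synchronization around the deferred-kick state.

#include <stdbool.h>
#include <stdint.h>
#include <rte_vhost.h>

struct deferred_kick {
	int vid;
	uint16_t queue_id;
	volatile bool pending;
};

static struct deferred_kick g_kick;

/* Invoked by the vhost library instead of calling eventfd_write() directly.
 * Returning non-zero is assumed to mean "the application will deliver the
 * kick itself later, outside the PMD thread".
 */
static int
app_guest_notify(int vid, uint16_t queue_id)
{
	g_kick.vid = vid;
	g_kick.queue_id = queue_id;
	g_kick.pending = true;
	return 1;
}

static const struct rte_vhost_device_ops app_vhost_ops = {
	.guest_notify = app_guest_notify,
	/* .new_device, .destroy_device, ... as usual */
};

/* Polled from a dedicated (non-PMD) thread so a blocking eventfd_write()
 * system call never stalls the datapath.
 */
static void
deferred_kick_poll(void)
{
	if (g_kick.pending) {
		g_kick.pending = false;
		rte_vhost_notify_guest(g_kick.vid, g_kick.queue_id);
	}
}

/* Registration (per vhost-user socket path), e.g.:
 *	rte_vhost_driver_callback_register("/tmp/vhost.sock", &app_vhost_ops);
 */
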

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2] net/liquidio: remove LiquidIO ethdev driver
    2023-05-02 14:18  5% ` Ferruh Yigit
@ 2023-05-08 13:44  1% ` jerinj
  2023-05-17 15:47  0%   ` Jerin Jacob
  1 sibling, 1 reply; 200+ results
From: jerinj @ 2023-05-08 13:44 UTC (permalink / raw)
  To: dev, Thomas Monjalon, Anatoly Burakov
  Cc: david.marchand, ferruh.yigit, Jerin Jacob

From: Jerin Jacob <jerinj@marvell.com>

The LiquidIO product line has been substituted with CN9K/CN10K
OCTEON product line smart NICs located at drivers/net/octeon_ep/.

DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
because of the absence of updates in the driver.

Due to the above reasons, the driver is removed in DPDK 23.07.

Also removed the deprecation notice entry for this removal from
doc/guides/rel_notes/deprecation.rst and skipped the removed
driver library in the ABI check via devtools/libabigail.abignore.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
v2:
- Skip driver ABI check (Ferruh)
- Addressed the review comments in
  http://patches.dpdk.org/project/dpdk/patch/20230428103127.1059989-1-jerinj@marvell.com/ (Ferruh)

 MAINTAINERS                              |    8 -
 devtools/libabigail.abignore             |    1 +
 doc/guides/nics/features/liquidio.ini    |   29 -
 doc/guides/nics/index.rst                |    1 -
 doc/guides/nics/liquidio.rst             |  169 --
 doc/guides/rel_notes/deprecation.rst     |    7 -
 doc/guides/rel_notes/release_23_07.rst   |    2 +
 drivers/net/liquidio/base/lio_23xx_reg.h |  165 --
 drivers/net/liquidio/base/lio_23xx_vf.c  |  513 ------
 drivers/net/liquidio/base/lio_23xx_vf.h  |   63 -
 drivers/net/liquidio/base/lio_hw_defs.h  |  239 ---
 drivers/net/liquidio/base/lio_mbox.c     |  246 ---
 drivers/net/liquidio/base/lio_mbox.h     |  102 -
 drivers/net/liquidio/lio_ethdev.c        | 2147 ----------------------
 drivers/net/liquidio/lio_ethdev.h        |  179 --
 drivers/net/liquidio/lio_logs.h          |   58 -
 drivers/net/liquidio/lio_rxtx.c          | 1804 ------------------
 drivers/net/liquidio/lio_rxtx.h          |  740 --------
 drivers/net/liquidio/lio_struct.h        |  661 -------
 drivers/net/liquidio/meson.build         |   16 -
 drivers/net/meson.build                  |    1 -
 21 files changed, 3 insertions(+), 7148 deletions(-)
 delete mode 100644 doc/guides/nics/features/liquidio.ini
 delete mode 100644 doc/guides/nics/liquidio.rst
 delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
 delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
 delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
 delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
 delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
 delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
 delete mode 100644 drivers/net/liquidio/lio_ethdev.c
 delete mode 100644 drivers/net/liquidio/lio_ethdev.h
 delete mode 100644 drivers/net/liquidio/lio_logs.h
 delete mode 100644 drivers/net/liquidio/lio_rxtx.c
 delete mode 100644 drivers/net/liquidio/lio_rxtx.h
 delete mode 100644 drivers/net/liquidio/lio_struct.h
 delete mode 100644 drivers/net/liquidio/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e5099..0157c26dd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -681,14 +681,6 @@ F: drivers/net/thunderx/
 F: doc/guides/nics/thunderx.rst
 F: doc/guides/nics/features/thunderx.ini
 
-Cavium LiquidIO - UNMAINTAINED
-M: Shijith Thotton <sthotton@marvell.com>
-M: Srisivasubramanian Srinivasan <srinivasan@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/liquidio/
-F: doc/guides/nics/liquidio.rst
-F: doc/guides/nics/features/liquidio.ini
-
 Cavium OCTEON TX
 M: Harman Kalra <hkalra@marvell.com>
 T: git://dpdk.org/next/dpdk-next-net-mrvl
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..c0361bfc7b 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -25,6 +25,7 @@
 ;
 ; SKIP_LIBRARY=librte_common_mlx5_glue
 ; SKIP_LIBRARY=librte_net_mlx4_glue
+; SKIP_LIBRARY=librte_net_liquidio
 
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Experimental APIs exceptions ;
diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
deleted file mode 100644
index a8bde282e0..0000000000
--- a/doc/guides/nics/features/liquidio.ini
+++ /dev/null
@@ -1,29 +0,0 @@
-;
-; Supported features of the 'LiquidIO' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities   = Y
-Link status          = Y
-Link status event    = Y
-MTU update           = Y
-Scattered Rx         = Y
-Promiscuous mode     = Y
-Allmulticast mode    = Y
-RSS hash             = Y
-RSS key update       = Y
-RSS reta update      = Y
-VLAN filter          = Y
-CRC offload          = Y
-VLAN offload         = P
-L3 checksum offload  = Y
-L4 checksum offload  = Y
-Inner L3 checksum    = Y
-Inner L4 checksum    = Y
-Basic stats          = Y
-Extended stats       = Y
-Multiprocess aware   = Y
-Linux                = Y
-x86-64               = Y
-Usage doc            = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 5c9d1edf5e..31296822e5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -44,7 +44,6 @@ Network Interface Controller Drivers
     ipn3ke
     ixgbe
     kni
-    liquidio
     mana
     memif
     mlx4
diff --git a/doc/guides/nics/liquidio.rst b/doc/guides/nics/liquidio.rst
deleted file mode 100644
index f893b3b539..0000000000
--- a/doc/guides/nics/liquidio.rst
+++ /dev/null
@@ -1,169 +0,0 @@
-..  SPDX-License-Identifier: BSD-3-Clause
-    Copyright(c) 2017 Cavium, Inc
-
-LiquidIO VF Poll Mode Driver
-============================
-
-The LiquidIO VF PMD library (**librte_net_liquidio**) provides poll mode driver support for
-Cavium LiquidIO® II server adapter VFs. PF management and VF creation can be
-done using kernel driver.
-
-More information can be found at `Cavium Official Website
-<http://cavium.com/LiquidIO_Adapters.html>`_.
-
-Supported LiquidIO Adapters
------------------------------
-
-- LiquidIO II CN2350 210SV/225SV
-- LiquidIO II CN2350 210SVPT
-- LiquidIO II CN2360 210SV/225SV
-- LiquidIO II CN2360 210SVPT
-
-
-SR-IOV: Prerequisites and Sample Application Notes
---------------------------------------------------
-
-This section provides instructions to configure SR-IOV with Linux OS.
-
-#. Verify SR-IOV and ARI capabilities are enabled on the adapter using ``lspci``:
-
-   .. code-block:: console
-
-      lspci -s <slot> -vvv
-
-   Example output:
-
-   .. code-block:: console
-
-      [...]
-      Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
-      [...]
-      Capabilities: [178 v1] Single Root I/O Virtualization (SR-IOV)
-      [...]
-      Kernel driver in use: LiquidIO
-
-#. Load the kernel module:
-
-   .. code-block:: console
-
-      modprobe liquidio
-
-#. Bring up the PF ports:
-
-   .. code-block:: console
-
-      ifconfig p4p1 up
-      ifconfig p4p2 up
-
-#. Change PF MTU if required:
-
-   .. code-block:: console
-
-      ifconfig p4p1 mtu 9000
-      ifconfig p4p2 mtu 9000
-
-#. Create VF device(s):
-
-   Echo number of VFs to be created into ``"sriov_numvfs"`` sysfs entry
-   of the parent PF.
-
-   .. code-block:: console
-
-      echo 1 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
-      echo 1 > /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
-
-#. Assign VF MAC address:
-
-   Assign MAC address to the VF using iproute2 utility. The syntax is::
-
-      ip link set <PF iface> vf <VF id> mac <macaddr>
-
-   Example output:
-
-   .. code-block:: console
-
-      ip link set p4p1 vf 0 mac F2:A8:1B:5E:B4:66
-
-#. Assign VF(s) to VM.
-
-   The VF devices may be passed through to the guest VM using qemu or
-   virt-manager or virsh etc.
-
-   Example qemu guest launch command:
-
-   .. code-block:: console
-
-      ./qemu-system-x86_64 -name lio-vm -machine accel=kvm \
-      -cpu host -m 4096 -smp 4 \
-      -drive file=<disk_file>,if=none,id=disk1,format=<type> \
-      -device virtio-blk-pci,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
-      -device vfio-pci,host=03:00.3 -device vfio-pci,host=03:08.3
-
-#. Running testpmd
-
-   Refer to the document
-   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` to run
-   ``testpmd`` application.
-
-   .. note::
-
-      Use ``igb_uio`` instead of ``vfio-pci`` in VM.
-
-   Example output:
-
-   .. code-block:: console
-
-      [...]
-      EAL: PCI device 0000:03:00.3 on NUMA socket 0
-      EAL:   probe driver: 177d:9712 net_liovf
-      EAL:   using IOMMU type 1 (Type 1)
-      PMD: net_liovf[03:00.3]INFO: DEVICE : CN23XX VF
-      EAL: PCI device 0000:03:08.3 on NUMA socket 0
-      EAL:   probe driver: 177d:9712 net_liovf
-      PMD: net_liovf[03:08.3]INFO: DEVICE : CN23XX VF
-      Interactive-mode selected
-      USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
-      Configuring Port 0 (socket 0)
-      PMD: net_liovf[03:00.3]INFO: Starting port 0
-      Port 0: F2:A8:1B:5E:B4:66
-      Configuring Port 1 (socket 0)
-      PMD: net_liovf[03:08.3]INFO: Starting port 1
-      Port 1: 32:76:CC:EE:56:D7
-      Checking link statuses...
-      Port 0 Link Up - speed 10000 Mbps - full-duplex
-      Port 1 Link Up - speed 10000 Mbps - full-duplex
-      Done
-      testpmd>
-
-#. Enabling VF promiscuous mode
-
-   One VF per PF can be marked as trusted for promiscuous mode.
-
-   .. code-block:: console
-
-      ip link set dev <PF iface> vf <VF id> trust on
-
-
-Limitations
------------
-
-VF MTU
-~~~~~~
-
-VF MTU is limited by PF MTU. Raise PF value before configuring VF for larger packet size.
-
-VLAN offload
-~~~~~~~~~~~~
-
-Tx VLAN insertion is not supported and consequently VLAN offload feature is
-marked partial.
-
-Ring size
-~~~~~~~~~
-
-Number of descriptors for Rx/Tx ring should be in the range 128 to 512.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-LiquidIO adapters strip ethernet FCS of every packet coming to the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..8e1cdd677a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -121,13 +121,6 @@ Deprecation Notices
 * net/bnx2x: Starting from DPDK 23.07, the Marvell QLogic bnx2x driver will be removed.
   This decision has been made to alleviate the burden of maintaining a discontinued product.
 
-* net/liquidio: Remove LiquidIO ethdev driver.
-  The LiquidIO product line has been substituted
-  with CN9K/CN10K OCTEON product line smart NICs located in ``drivers/net/octeon_ep/``.
-  DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
-  because of the absence of updates in the driver.
-  Due to the above reasons, the driver will be unavailable from DPDK 23.07.
-
 * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
   to have another parameter ``qp_id`` to return the queue pair ID
   which got error interrupt to the application,
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..f13a7b32b6 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -68,6 +68,8 @@ Removed Items
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
+
 
 API Changes
 -----------
diff --git a/drivers/net/liquidio/base/lio_23xx_reg.h b/drivers/net/liquidio/base/lio_23xx_reg.h
deleted file mode 100644
index 9f28504b53..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_reg.h
+++ /dev/null
@@ -1,165 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_REG_H_
-#define _LIO_23XX_REG_H_
-
-/* ###################### REQUEST QUEUE ######################### */
-
-/* 64 registers for Input Queues Start Addr - SLI_PKT(0..63)_INSTR_BADDR */
-#define CN23XX_SLI_PKT_INSTR_BADDR_START64	0x10010
-
-/* 64 registers for Input Doorbell - SLI_PKT(0..63)_INSTR_BAOFF_DBELL */
-#define CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START	0x10020
-
-/* 64 registers for Input Queue size - SLI_PKT(0..63)_INSTR_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START	0x10030
-
-/* 64 registers for Input Queue Instr Count - SLI_PKT_IN_DONE(0..63)_CNTS */
-#define CN23XX_SLI_PKT_IN_DONE_CNTS_START64	0x10040
-
-/* 64 registers (64-bit) - ES, RO, NS, Arbitration for Input Queue Data &
- * gather list fetches. SLI_PKT(0..63)_INPUT_CONTROL.
- */
-#define CN23XX_SLI_PKT_INPUT_CONTROL_START64	0x10000
-
-/* ------- Request Queue Macros --------- */
-
-/* Each Input Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_IQ_OFFSET			0x20000
-
-#define CN23XX_SLI_IQ_PKT_CONTROL64(iq)					\
-	(CN23XX_SLI_PKT_INPUT_CONTROL_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_BASE_ADDR64(iq)					\
-	(CN23XX_SLI_PKT_INSTR_BADDR_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_SIZE(iq)						\
-	(CN23XX_SLI_PKT_INSTR_FIFO_RSIZE_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_DOORBELL(iq)					\
-	(CN23XX_SLI_PKT_INSTR_BADDR_DBELL_START + ((iq) * CN23XX_IQ_OFFSET))
-
-#define CN23XX_SLI_IQ_INSTR_COUNT64(iq)					\
-	(CN23XX_SLI_PKT_IN_DONE_CNTS_START64 + ((iq) * CN23XX_IQ_OFFSET))
-
-/* Number of instructions to be read in one MAC read request.
- * setting to Max value(4)
- */
-#define CN23XX_PKT_INPUT_CTL_RDSIZE			(3 << 25)
-#define CN23XX_PKT_INPUT_CTL_IS_64B			(1 << 24)
-#define CN23XX_PKT_INPUT_CTL_RST			(1 << 23)
-#define CN23XX_PKT_INPUT_CTL_QUIET			(1 << 28)
-#define CN23XX_PKT_INPUT_CTL_RING_ENB			(1 << 22)
-#define CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP		(1 << 6)
-#define CN23XX_PKT_INPUT_CTL_USE_CSR			(1 << 4)
-#define CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP		(2)
-
-/* These bits[47:44] select the Physical function number within the MAC */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_POS		45
-/* These bits[43:32] select the function number within the PF */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_POS		32
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK			\
-	(CN23XX_PKT_INPUT_CTL_RDSIZE |			\
-	 CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP |	\
-	 CN23XX_PKT_INPUT_CTL_USE_CSR)
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-#define CN23XX_PKT_INPUT_CTL_MASK			\
-	(CN23XX_PKT_INPUT_CTL_RDSIZE |			\
-	 CN23XX_PKT_INPUT_CTL_DATA_ES_64B_SWAP |	\
-	 CN23XX_PKT_INPUT_CTL_USE_CSR |			\
-	 CN23XX_PKT_INPUT_CTL_GATHER_ES_64B_SWAP)
-#endif
-
-/* ############################ OUTPUT QUEUE ######################### */
-
-/* 64 registers for Output queue control - SLI_PKT(0..63)_OUTPUT_CONTROL */
-#define CN23XX_SLI_PKT_OUTPUT_CONTROL_START	0x10050
-
-/* 64 registers for Output queue buffer and info size
- * SLI_PKT(0..63)_OUT_SIZE
- */
-#define CN23XX_SLI_PKT_OUT_SIZE			0x10060
-
-/* 64 registers for Output Queue Start Addr - SLI_PKT(0..63)_SLIST_BADDR */
-#define CN23XX_SLI_SLIST_BADDR_START64		0x10070
-
-/* 64 registers for Output Queue Packet Credits
- * SLI_PKT(0..63)_SLIST_BAOFF_DBELL
- */
-#define CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START	0x10080
-
-/* 64 registers for Output Queue size - SLI_PKT(0..63)_SLIST_FIFO_RSIZE */
-#define CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START	0x10090
-
-/* 64 registers for Output Queue Packet Count - SLI_PKT(0..63)_CNTS */
-#define CN23XX_SLI_PKT_CNTS_START		0x100B0
-
-/* Each Output Queue register is at a 16-byte Offset in BAR0 */
-#define CN23XX_OQ_OFFSET			0x20000
-
-/* ------- Output Queue Macros --------- */
-
-#define CN23XX_SLI_OQ_PKT_CONTROL(oq)					\
-	(CN23XX_SLI_PKT_OUTPUT_CONTROL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BASE_ADDR64(oq)					\
-	(CN23XX_SLI_SLIST_BADDR_START64 + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_SIZE(oq)						\
-	(CN23XX_SLI_PKT_SLIST_FIFO_RSIZE_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq)				\
-	(CN23XX_SLI_PKT_OUT_SIZE + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_SENT(oq)					\
-	(CN23XX_SLI_PKT_CNTS_START + ((oq) * CN23XX_OQ_OFFSET))
-
-#define CN23XX_SLI_OQ_PKTS_CREDIT(oq)					\
-	(CN23XX_SLI_PKT_SLIST_BAOFF_DBELL_START + ((oq) * CN23XX_OQ_OFFSET))
-
-/* ------------------ Masks ---------------- */
-#define CN23XX_PKT_OUTPUT_CTL_IPTR		(1 << 11)
-#define CN23XX_PKT_OUTPUT_CTL_ES		(1 << 9)
-#define CN23XX_PKT_OUTPUT_CTL_NSR		(1 << 8)
-#define CN23XX_PKT_OUTPUT_CTL_ROR		(1 << 7)
-#define CN23XX_PKT_OUTPUT_CTL_DPTR		(1 << 6)
-#define CN23XX_PKT_OUTPUT_CTL_BMODE		(1 << 5)
-#define CN23XX_PKT_OUTPUT_CTL_ES_P		(1 << 3)
-#define CN23XX_PKT_OUTPUT_CTL_NSR_P		(1 << 2)
-#define CN23XX_PKT_OUTPUT_CTL_ROR_P		(1 << 1)
-#define CN23XX_PKT_OUTPUT_CTL_RING_ENB		(1 << 0)
-
-/* Rings per Virtual Function [RO] */
-#define CN23XX_PKT_INPUT_CTL_RPVF_MASK		0x3F
-#define CN23XX_PKT_INPUT_CTL_RPVF_POS		48
-
-/* These bits[47:44][RO] give the Physical function
- * number info within the MAC
- */
-#define CN23XX_PKT_INPUT_CTL_PF_NUM_MASK	0x7
-
-/* These bits[43:32][RO] give the virtual function
- * number info within the PF
- */
-#define CN23XX_PKT_INPUT_CTL_VF_NUM_MASK	0x1FFF
-
-/* ######################### Mailbox Reg Macros ######################## */
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START	0x10200
-#define CN23XX_VF_SLI_PKT_MBOX_INT_START	0x10210
-
-#define CN23XX_SLI_MBOX_OFFSET			0x20000
-#define CN23XX_SLI_MBOX_SIG_IDX_OFFSET		0x8
-
-#define CN23XX_SLI_PKT_PF_VF_MBOX_SIG(q, idx)				\
-	(CN23XX_SLI_PKT_PF_VF_MBOX_SIG_START +				\
-	 ((q) * CN23XX_SLI_MBOX_OFFSET +				\
-	  (idx) * CN23XX_SLI_MBOX_SIG_IDX_OFFSET))
-
-#define CN23XX_VF_SLI_PKT_MBOX_INT(q)					\
-	(CN23XX_VF_SLI_PKT_MBOX_INT_START + ((q) * CN23XX_SLI_MBOX_OFFSET))
-
-#endif /* _LIO_23XX_REG_H_ */
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.c b/drivers/net/liquidio/base/lio_23xx_vf.c
deleted file mode 100644
index c6b8310b71..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.c
+++ /dev/null
@@ -1,513 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <string.h>
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_23xx_reg.h"
-#include "lio_mbox.h"
-
-static int
-cn23xx_vf_reset_io_queues(struct lio_device *lio_dev, uint32_t num_queues)
-{
-	uint32_t loop = CN23XX_VF_BUSY_READING_REG_LOOP_COUNT;
-	uint64_t d64, q_no;
-	int ret_val = 0;
-
-	PMD_INIT_FUNC_TRACE();
-
-	for (q_no = 0; q_no < num_queues; q_no++) {
-		/* set RST bit to 1. This bit applies to both IQ and OQ */
-		d64 = lio_read_csr64(lio_dev,
-				     CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
-		d64 = d64 | CN23XX_PKT_INPUT_CTL_RST;
-		lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
-				d64);
-	}
-
-	/* wait until the RST bit is clear or the RST and QUIET bits are set */
-	for (q_no = 0; q_no < num_queues; q_no++) {
-		volatile uint64_t reg_val;
-
-		reg_val	= lio_read_csr64(lio_dev,
-					 CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
-		while ((reg_val & CN23XX_PKT_INPUT_CTL_RST) &&
-				!(reg_val & CN23XX_PKT_INPUT_CTL_QUIET) &&
-				loop) {
-			reg_val = lio_read_csr64(
-					lio_dev,
-					CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
-			loop = loop - 1;
-		}
-
-		if (loop == 0) {
-			lio_dev_err(lio_dev,
-				    "clearing the reset reg failed or setting the quiet reg failed for qno: %lu\n",
-				    (unsigned long)q_no);
-			return -1;
-		}
-
-		reg_val = reg_val & ~CN23XX_PKT_INPUT_CTL_RST;
-		lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
-				reg_val);
-
-		reg_val = lio_read_csr64(
-		    lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
-		if (reg_val & CN23XX_PKT_INPUT_CTL_RST) {
-			lio_dev_err(lio_dev,
-				    "clearing the reset failed for qno: %lu\n",
-				    (unsigned long)q_no);
-			ret_val = -1;
-		}
-	}
-
-	return ret_val;
-}
-
-static int
-cn23xx_vf_setup_global_input_regs(struct lio_device *lio_dev)
-{
-	uint64_t q_no;
-	uint64_t d64;
-
-	PMD_INIT_FUNC_TRACE();
-
-	if (cn23xx_vf_reset_io_queues(lio_dev,
-				      lio_dev->sriov_info.rings_per_vf))
-		return -1;
-
-	for (q_no = 0; q_no < (lio_dev->sriov_info.rings_per_vf); q_no++) {
-		lio_write_csr64(lio_dev, CN23XX_SLI_IQ_DOORBELL(q_no),
-				0xFFFFFFFF);
-
-		d64 = lio_read_csr64(lio_dev,
-				     CN23XX_SLI_IQ_INSTR_COUNT64(q_no));
-
-		d64 &= 0xEFFFFFFFFFFFFFFFL;
-
-		lio_write_csr64(lio_dev, CN23XX_SLI_IQ_INSTR_COUNT64(q_no),
-				d64);
-
-		/* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for
-		 * the Input Queues
-		 */
-		lio_write_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
-				CN23XX_PKT_INPUT_CTL_MASK);
-	}
-
-	return 0;
-}
-
-static void
-cn23xx_vf_setup_global_output_regs(struct lio_device *lio_dev)
-{
-	uint32_t reg_val;
-	uint32_t q_no;
-
-	PMD_INIT_FUNC_TRACE();
-
-	for (q_no = 0; q_no < lio_dev->sriov_info.rings_per_vf; q_no++) {
-		lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_CREDIT(q_no),
-			      0xFFFFFFFF);
-
-		reg_val =
-		    lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no));
-
-		reg_val &= 0xEFFFFFFFFFFFFFFFL;
-
-		lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKTS_SENT(q_no), reg_val);
-
-		reg_val =
-		    lio_read_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no));
-
-		/* set IPTR & DPTR */
-		reg_val |=
-		    (CN23XX_PKT_OUTPUT_CTL_IPTR | CN23XX_PKT_OUTPUT_CTL_DPTR);
-
-		/* reset BMODE */
-		reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_BMODE);
-
-		/* No Relaxed Ordering, No Snoop, 64-bit Byte swap
-		 * for Output Queue Scatter List
-		 * reset ROR_P, NSR_P
-		 */
-		reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR_P);
-		reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR_P);
-
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-		reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ES_P);
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES_P);
-#endif
-		/* No Relaxed Ordering, No Snoop, 64-bit Byte swap
-		 * for Output Queue Data
-		 * reset ROR, NSR
-		 */
-		reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_ROR);
-		reg_val &= ~(CN23XX_PKT_OUTPUT_CTL_NSR);
-		/* set the ES bit */
-		reg_val |= (CN23XX_PKT_OUTPUT_CTL_ES);
-
-		/* write all the selected settings */
-		lio_write_csr(lio_dev, CN23XX_SLI_OQ_PKT_CONTROL(q_no),
-			      reg_val);
-	}
-}
-
-static int
-cn23xx_vf_setup_device_regs(struct lio_device *lio_dev)
-{
-	PMD_INIT_FUNC_TRACE();
-
-	if (cn23xx_vf_setup_global_input_regs(lio_dev))
-		return -1;
-
-	cn23xx_vf_setup_global_output_regs(lio_dev);
-
-	return 0;
-}
-
-static void
-cn23xx_vf_setup_iq_regs(struct lio_device *lio_dev, uint32_t iq_no)
-{
-	struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
-	uint64_t pkt_in_done = 0;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* Write the start of the input queue's ring and its size */
-	lio_write_csr64(lio_dev, CN23XX_SLI_IQ_BASE_ADDR64(iq_no),
-			iq->base_addr_dma);
-	lio_write_csr(lio_dev, CN23XX_SLI_IQ_SIZE(iq_no), iq->nb_desc);
-
-	/* Remember the doorbell & instruction count register addr
-	 * for this queue
-	 */
-	iq->doorbell_reg = (uint8_t *)lio_dev->hw_addr +
-				CN23XX_SLI_IQ_DOORBELL(iq_no);
-	iq->inst_cnt_reg = (uint8_t *)lio_dev->hw_addr +
-				CN23XX_SLI_IQ_INSTR_COUNT64(iq_no);
-	lio_dev_dbg(lio_dev, "InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n",
-		    iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
-
-	/* Store the current instruction counter (used in flush_iq
-	 * calculation)
-	 */
-	pkt_in_done = rte_read64(iq->inst_cnt_reg);
-
-	/* Clear the count by writing back what we read, but don't
-	 * enable data traffic here
-	 */
-	rte_write64(pkt_in_done, iq->inst_cnt_reg);
-}
-
-static void
-cn23xx_vf_setup_oq_regs(struct lio_device *lio_dev, uint32_t oq_no)
-{
-	struct lio_droq *droq = lio_dev->droq[oq_no];
-
-	PMD_INIT_FUNC_TRACE();
-
-	lio_write_csr64(lio_dev, CN23XX_SLI_OQ_BASE_ADDR64(oq_no),
-			droq->desc_ring_dma);
-	lio_write_csr(lio_dev, CN23XX_SLI_OQ_SIZE(oq_no), droq->nb_desc);
-
-	lio_write_csr(lio_dev, CN23XX_SLI_OQ_BUFF_INFO_SIZE(oq_no),
-		      (droq->buffer_size | (OCTEON_RH_SIZE << 16)));
-
-	/* Get the mapped address of the pkt_sent and pkts_credit regs */
-	droq->pkts_sent_reg = (uint8_t *)lio_dev->hw_addr +
-					CN23XX_SLI_OQ_PKTS_SENT(oq_no);
-	droq->pkts_credit_reg = (uint8_t *)lio_dev->hw_addr +
-					CN23XX_SLI_OQ_PKTS_CREDIT(oq_no);
-}
-
-static void
-cn23xx_vf_free_mbox(struct lio_device *lio_dev)
-{
-	PMD_INIT_FUNC_TRACE();
-
-	rte_free(lio_dev->mbox[0]);
-	lio_dev->mbox[0] = NULL;
-
-	rte_free(lio_dev->mbox);
-	lio_dev->mbox = NULL;
-}
-
-static int
-cn23xx_vf_setup_mbox(struct lio_device *lio_dev)
-{
-	struct lio_mbox *mbox;
-
-	PMD_INIT_FUNC_TRACE();
-
-	if (lio_dev->mbox == NULL) {
-		lio_dev->mbox = rte_zmalloc(NULL, sizeof(void *), 0);
-		if (lio_dev->mbox == NULL)
-			return -ENOMEM;
-	}
-
-	mbox = rte_zmalloc(NULL, sizeof(struct lio_mbox), 0);
-	if (mbox == NULL) {
-		rte_free(lio_dev->mbox);
-		lio_dev->mbox = NULL;
-		return -ENOMEM;
-	}
-
-	rte_spinlock_init(&mbox->lock);
-
-	mbox->lio_dev = lio_dev;
-
-	mbox->q_no = 0;
-
-	mbox->state = LIO_MBOX_STATE_IDLE;
-
-	/* VF mbox interrupt reg */
-	mbox->mbox_int_reg = (uint8_t *)lio_dev->hw_addr +
-				CN23XX_VF_SLI_PKT_MBOX_INT(0);
-	/* VF reads from SIG0 reg */
-	mbox->mbox_read_reg = (uint8_t *)lio_dev->hw_addr +
-				CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 0);
-	/* VF writes into SIG1 reg */
-	mbox->mbox_write_reg = (uint8_t *)lio_dev->hw_addr +
-				CN23XX_SLI_PKT_PF_VF_MBOX_SIG(0, 1);
-
-	lio_dev->mbox[0] = mbox;
-
-	rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
-	return 0;
-}
-
-static int
-cn23xx_vf_enable_io_queues(struct lio_device *lio_dev)
-{
-	uint32_t q_no;
-
-	PMD_INIT_FUNC_TRACE();
-
-	for (q_no = 0; q_no < lio_dev->num_iqs; q_no++) {
-		uint64_t reg_val;
-
-		/* set the corresponding IQ IS_64B bit */
-		if (lio_dev->io_qmask.iq64B & (1ULL << q_no)) {
-			reg_val = lio_read_csr64(
-					lio_dev,
-					CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
-			reg_val = reg_val | CN23XX_PKT_INPUT_CTL_IS_64B;
-			lio_write_csr64(lio_dev,
-					CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
-					reg_val);
-		}
-
-		/* set the corresponding IQ ENB bit */
-		if (lio_dev->io_qmask.iq & (1ULL << q_no)) {
-			reg_val = lio_read_csr64(
-					lio_dev,
-					CN23XX_SLI_IQ_PKT_CONTROL64(q_no));
-			reg_val = reg_val | CN23XX_PKT_INPUT_CTL_RING_ENB;
-			lio_write_csr64(lio_dev,
-					CN23XX_SLI_IQ_PKT_CONTROL64(q_no),
-					reg_val);
-		}
-	}
-	for (q_no = 0; q_no < lio_dev->num_oqs; q_no++) {
-		uint32_t reg_val;
-
-		/* set the corresponding OQ ENB bit */
-		if (lio_dev->io_qmask.oq & (1ULL << q_no)) {
-			reg_val = lio_read_csr(
-					lio_dev,
-					CN23XX_SLI_OQ_PKT_CONTROL(q_no));
-			reg_val = reg_val | CN23XX_PKT_OUTPUT_CTL_RING_ENB;
-			lio_write_csr(lio_dev,
-				      CN23XX_SLI_OQ_PKT_CONTROL(q_no),
-				      reg_val);
-		}
-	}
-
-	return 0;
-}
-
-static void
-cn23xx_vf_disable_io_queues(struct lio_device *lio_dev)
-{
-	uint32_t num_queues;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* per HRM, rings can only be disabled via reset operation,
-	 * NOT via SLI_PKT()_INPUT/OUTPUT_CONTROL[ENB]
-	 */
-	num_queues = lio_dev->num_iqs;
-	if (num_queues < lio_dev->num_oqs)
-		num_queues = lio_dev->num_oqs;
-
-	cn23xx_vf_reset_io_queues(lio_dev, num_queues);
-}
-
-void
-cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev)
-{
-	struct lio_mbox_cmd mbox_cmd;
-
-	memset(&mbox_cmd, 0, sizeof(struct lio_mbox_cmd));
-	mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
-	mbox_cmd.msg.s.resp_needed = 0;
-	mbox_cmd.msg.s.cmd = LIO_VF_FLR_REQUEST;
-	mbox_cmd.msg.s.len = 1;
-	mbox_cmd.q_no = 0;
-	mbox_cmd.recv_len = 0;
-	mbox_cmd.recv_status = 0;
-	mbox_cmd.fn = NULL;
-	mbox_cmd.fn_arg = 0;
-
-	lio_mbox_write(lio_dev, &mbox_cmd);
-}
-
-static void
-cn23xx_pfvf_hs_callback(struct lio_device *lio_dev,
-			struct lio_mbox_cmd *cmd, void *arg)
-{
-	uint32_t major = 0;
-
-	PMD_INIT_FUNC_TRACE();
-
-	rte_memcpy((uint8_t *)&lio_dev->pfvf_hsword, cmd->msg.s.params, 6);
-	if (cmd->recv_len > 1) {
-		struct lio_version *lio_ver = (struct lio_version *)cmd->data;
-
-		major = lio_ver->major;
-		major = major << 16;
-	}
-
-	rte_atomic64_set((rte_atomic64_t *)arg, major | 1);
-}
-
-int
-cn23xx_pfvf_handshake(struct lio_device *lio_dev)
-{
-	struct lio_mbox_cmd mbox_cmd;
-	struct lio_version *lio_ver = (struct lio_version *)&mbox_cmd.data[0];
-	uint32_t q_no, count = 0;
-	rte_atomic64_t status;
-	uint32_t pfmajor;
-	uint32_t vfmajor;
-	uint32_t ret;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* Sending VF_ACTIVE indication to the PF driver */
-	lio_dev_dbg(lio_dev, "requesting info from PF\n");
-
-	mbox_cmd.msg.mbox_msg64 = 0;
-	mbox_cmd.msg.s.type = LIO_MBOX_REQUEST;
-	mbox_cmd.msg.s.resp_needed = 1;
-	mbox_cmd.msg.s.cmd = LIO_VF_ACTIVE;
-	mbox_cmd.msg.s.len = 2;
-	mbox_cmd.data[0] = 0;
-	lio_ver->major = LIO_BASE_MAJOR_VERSION;
-	lio_ver->minor = LIO_BASE_MINOR_VERSION;
-	lio_ver->micro = LIO_BASE_MICRO_VERSION;
-	mbox_cmd.q_no = 0;
-	mbox_cmd.recv_len = 0;
-	mbox_cmd.recv_status = 0;
-	mbox_cmd.fn = (lio_mbox_callback)cn23xx_pfvf_hs_callback;
-	mbox_cmd.fn_arg = (void *)&status;
-
-	if (lio_mbox_write(lio_dev, &mbox_cmd)) {
-		lio_dev_err(lio_dev, "Write to mailbox failed\n");
-		return -1;
-	}
-
-	rte_atomic64_set(&status, 0);
-
-	do {
-		rte_delay_ms(1);
-	} while ((rte_atomic64_read(&status) == 0) && (count++ < 10000));
-
-	ret = rte_atomic64_read(&status);
-	if (ret == 0) {
-		lio_dev_err(lio_dev, "cn23xx_pfvf_handshake timeout\n");
-		return -1;
-	}
-
-	for (q_no = 0; q_no < lio_dev->num_iqs; q_no++)
-		lio_dev->instr_queue[q_no]->txpciq.s.pkind =
-						lio_dev->pfvf_hsword.pkind;
-
-	vfmajor = LIO_BASE_MAJOR_VERSION;
-	pfmajor = ret >> 16;
-	if (pfmajor != vfmajor) {
-		lio_dev_err(lio_dev,
-			    "VF LiquidIO driver (major version %d) is not compatible with LiquidIO PF driver (major version %d)\n",
-			    vfmajor, pfmajor);
-		ret = -EPERM;
-	} else {
-		lio_dev_dbg(lio_dev,
-			    "VF LiquidIO driver (major version %d), LiquidIO PF driver (major version %d)\n",
-			    vfmajor, pfmajor);
-		ret = 0;
-	}
-
-	lio_dev_dbg(lio_dev, "got data from PF pkind is %d\n",
-		    lio_dev->pfvf_hsword.pkind);
-
-	return ret;
-}
-
-void
-cn23xx_vf_handle_mbox(struct lio_device *lio_dev)
-{
-	uint64_t mbox_int_val;
-
-	/* read and clear by writing 1 */
-	mbox_int_val = rte_read64(lio_dev->mbox[0]->mbox_int_reg);
-	rte_write64(mbox_int_val, lio_dev->mbox[0]->mbox_int_reg);
-	if (lio_mbox_read(lio_dev->mbox[0]))
-		lio_mbox_process_message(lio_dev->mbox[0]);
-}
-
-int
-cn23xx_vf_setup_device(struct lio_device *lio_dev)
-{
-	uint64_t reg_val;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* INPUT_CONTROL[RPVF] gives the VF IOq count */
-	reg_val = lio_read_csr64(lio_dev, CN23XX_SLI_IQ_PKT_CONTROL64(0));
-
-	lio_dev->pf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_PF_NUM_POS) &
-				CN23XX_PKT_INPUT_CTL_PF_NUM_MASK;
-	lio_dev->vf_num = (reg_val >> CN23XX_PKT_INPUT_CTL_VF_NUM_POS) &
-				CN23XX_PKT_INPUT_CTL_VF_NUM_MASK;
-
-	reg_val = reg_val >> CN23XX_PKT_INPUT_CTL_RPVF_POS;
-
-	lio_dev->sriov_info.rings_per_vf =
-				reg_val & CN23XX_PKT_INPUT_CTL_RPVF_MASK;
-
-	lio_dev->default_config = lio_get_conf(lio_dev);
-	if (lio_dev->default_config == NULL)
-		return -1;
-
-	lio_dev->fn_list.setup_iq_regs		= cn23xx_vf_setup_iq_regs;
-	lio_dev->fn_list.setup_oq_regs		= cn23xx_vf_setup_oq_regs;
-	lio_dev->fn_list.setup_mbox		= cn23xx_vf_setup_mbox;
-	lio_dev->fn_list.free_mbox		= cn23xx_vf_free_mbox;
-
-	lio_dev->fn_list.setup_device_regs	= cn23xx_vf_setup_device_regs;
-
-	lio_dev->fn_list.enable_io_queues	= cn23xx_vf_enable_io_queues;
-	lio_dev->fn_list.disable_io_queues	= cn23xx_vf_disable_io_queues;
-
-	return 0;
-}
-
diff --git a/drivers/net/liquidio/base/lio_23xx_vf.h b/drivers/net/liquidio/base/lio_23xx_vf.h
deleted file mode 100644
index 8e5362db15..0000000000
--- a/drivers/net/liquidio/base/lio_23xx_vf.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_23XX_VF_H_
-#define _LIO_23XX_VF_H_
-
-#include <stdio.h>
-
-#include "lio_struct.h"
-
-static const struct lio_config default_cn23xx_conf	= {
-	.card_type				= LIO_23XX,
-	.card_name				= LIO_23XX_NAME,
-	/** IQ attributes */
-	.iq					= {
-		.max_iqs			= CN23XX_CFG_IO_QUEUES,
-		.pending_list_size		=
-			(CN23XX_MAX_IQ_DESCRIPTORS * CN23XX_CFG_IO_QUEUES),
-		.instr_type			= OCTEON_64BYTE_INSTR,
-	},
-
-	/** OQ attributes */
-	.oq					= {
-		.max_oqs			= CN23XX_CFG_IO_QUEUES,
-		.info_ptr			= OCTEON_OQ_INFOPTR_MODE,
-		.refill_threshold		= CN23XX_OQ_REFIL_THRESHOLD,
-	},
-
-	.num_nic_ports				= CN23XX_DEFAULT_NUM_PORTS,
-	.num_def_rx_descs			= CN23XX_MAX_OQ_DESCRIPTORS,
-	.num_def_tx_descs			= CN23XX_MAX_IQ_DESCRIPTORS,
-	.def_rx_buf_size			= CN23XX_OQ_BUF_SIZE,
-};
-
-static inline const struct lio_config *
-lio_get_conf(struct lio_device *lio_dev)
-{
-	const struct lio_config *default_lio_conf = NULL;
-
-	/* check the LIO Device model & return the corresponding lio
-	 * configuration
-	 */
-	default_lio_conf = &default_cn23xx_conf;
-
-	if (default_lio_conf == NULL) {
-		lio_dev_err(lio_dev, "Configuration verification failed\n");
-		return NULL;
-	}
-
-	return default_lio_conf;
-}
-
-#define CN23XX_VF_BUSY_READING_REG_LOOP_COUNT	100000
-
-void cn23xx_vf_ask_pf_to_do_flr(struct lio_device *lio_dev);
-
-int cn23xx_pfvf_handshake(struct lio_device *lio_dev);
-
-int cn23xx_vf_setup_device(struct lio_device  *lio_dev);
-
-void cn23xx_vf_handle_mbox(struct lio_device *lio_dev);
-#endif /* _LIO_23XX_VF_H_  */
diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
deleted file mode 100644
index 5e119c1241..0000000000
--- a/drivers/net/liquidio/base/lio_hw_defs.h
+++ /dev/null
@@ -1,239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_HW_DEFS_H_
-#define _LIO_HW_DEFS_H_
-
-#include <rte_io.h>
-
-#ifndef PCI_VENDOR_ID_CAVIUM
-#define PCI_VENDOR_ID_CAVIUM	0x177D
-#endif
-
-#define LIO_CN23XX_VF_VID	0x9712
-
-/* CN23xx subsystem device ids */
-#define PCI_SUBSYS_DEV_ID_CN2350_210		0x0004
-#define PCI_SUBSYS_DEV_ID_CN2360_210		0x0005
-#define PCI_SUBSYS_DEV_ID_CN2360_225		0x0006
-#define PCI_SUBSYS_DEV_ID_CN2350_225		0x0007
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPN3	0x0008
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPN3	0x0009
-#define PCI_SUBSYS_DEV_ID_CN2350_210SVPT	0x000a
-#define PCI_SUBSYS_DEV_ID_CN2360_210SVPT	0x000b
-
-/* --------------------------CONFIG VALUES------------------------ */
-
-/* CN23xx IQ configuration macros */
-#define CN23XX_MAX_RINGS_PER_PF			64
-#define CN23XX_MAX_RINGS_PER_VF			8
-
-#define CN23XX_MAX_INPUT_QUEUES			CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_IQ_DESCRIPTORS		512
-#define CN23XX_MIN_IQ_DESCRIPTORS		128
-
-#define CN23XX_MAX_OUTPUT_QUEUES		CN23XX_MAX_RINGS_PER_PF
-#define CN23XX_MAX_OQ_DESCRIPTORS		512
-#define CN23XX_MIN_OQ_DESCRIPTORS		128
-#define CN23XX_OQ_BUF_SIZE			1536
-
-#define CN23XX_OQ_REFIL_THRESHOLD		16
-
-#define CN23XX_DEFAULT_NUM_PORTS		1
-
-#define CN23XX_CFG_IO_QUEUES			CN23XX_MAX_RINGS_PER_PF
-
-/* common OCTEON configuration macros */
-#define OCTEON_64BYTE_INSTR			64
-#define OCTEON_OQ_INFOPTR_MODE			1
-
-/* Max IOQs per LIO Link */
-#define LIO_MAX_IOQS_PER_IF			64
-
-/* Wait time in milliseconds for FLR */
-#define LIO_PCI_FLR_WAIT			100
-
-enum lio_card_type {
-	LIO_23XX /* 23xx */
-};
-
-#define LIO_23XX_NAME "23xx"
-
-#define LIO_DEV_RUNNING		0xc
-
-#define LIO_OQ_REFILL_THRESHOLD_CFG(cfg)				\
-		((cfg)->default_config->oq.refill_threshold)
-#define LIO_NUM_DEF_TX_DESCS_CFG(cfg)					\
-		((cfg)->default_config->num_def_tx_descs)
-
-#define LIO_IQ_INSTR_TYPE(cfg)		((cfg)->default_config->iq.instr_type)
-
-/* The following config values are fixed and should not be modified. */
-
-/* Maximum number of Instruction queues */
-#define LIO_MAX_INSTR_QUEUES(lio_dev)		CN23XX_MAX_RINGS_PER_VF
-
-#define LIO_MAX_POSSIBLE_INSTR_QUEUES		CN23XX_MAX_INPUT_QUEUES
-#define LIO_MAX_POSSIBLE_OUTPUT_QUEUES		CN23XX_MAX_OUTPUT_QUEUES
-
-#define LIO_DEVICE_NAME_LEN		32
-#define LIO_BASE_MAJOR_VERSION		1
-#define LIO_BASE_MINOR_VERSION		5
-#define LIO_BASE_MICRO_VERSION		1
-
-#define LIO_FW_VERSION_LENGTH		32
-
-#define LIO_Q_RECONF_MIN_VERSION	"1.7.0"
-#define LIO_VF_TRUST_MIN_VERSION	"1.7.1"
-
-/** Tag types used by Octeon cores in its work. */
-enum octeon_tag_type {
-	OCTEON_ORDERED_TAG	= 0,
-	OCTEON_ATOMIC_TAG	= 1,
-};
-
-/* pre-defined host->NIC tag values */
-#define LIO_CONTROL	(0x11111110)
-#define LIO_DATA(i)	(0x11111111 + (i))
-
-/* used for NIC operations */
-#define LIO_OPCODE	1
-
-/* Subcodes are used by host driver/apps to identify the sub-operation
- * for the core. They only need to by unique for a given subsystem.
- */
-#define LIO_OPCODE_SUBCODE(op, sub)		\
-		((((op) & 0x0f) << 8) | ((sub) & 0x7f))
-
-/** LIO_OPCODE subcodes */
-/* This subcode is sent by core PCI driver to indicate cores are ready. */
-#define LIO_OPCODE_NW_DATA		0x02 /* network packet data */
-#define LIO_OPCODE_CMD			0x03
-#define LIO_OPCODE_INFO			0x04
-#define LIO_OPCODE_PORT_STATS		0x05
-#define LIO_OPCODE_IF_CFG		0x09
-
-#define LIO_MIN_RX_BUF_SIZE		64
-#define LIO_MAX_RX_PKTLEN		(64 * 1024)
-
-/* NIC Command types */
-#define LIO_CMD_CHANGE_MTU		0x1
-#define LIO_CMD_CHANGE_DEVFLAGS		0x3
-#define LIO_CMD_RX_CTL			0x4
-#define LIO_CMD_CLEAR_STATS		0x6
-#define LIO_CMD_SET_RSS			0xD
-#define LIO_CMD_TNL_RX_CSUM_CTL		0x10
-#define LIO_CMD_TNL_TX_CSUM_CTL		0x11
-#define LIO_CMD_ADD_VLAN_FILTER		0x17
-#define LIO_CMD_DEL_VLAN_FILTER		0x18
-#define LIO_CMD_VXLAN_PORT_CONFIG	0x19
-#define LIO_CMD_QUEUE_COUNT_CTL		0x1f
-
-#define LIO_CMD_VXLAN_PORT_ADD		0x0
-#define LIO_CMD_VXLAN_PORT_DEL		0x1
-#define LIO_CMD_RXCSUM_ENABLE		0x0
-#define LIO_CMD_TXCSUM_ENABLE		0x0
-
-/* RX(packets coming from wire) Checksum verification flags */
-/* TCP/UDP csum */
-#define LIO_L4_CSUM_VERIFIED		0x1
-#define LIO_IP_CSUM_VERIFIED		0x2
-
-/* RSS */
-#define LIO_RSS_PARAM_DISABLE_RSS		0x10
-#define LIO_RSS_PARAM_HASH_KEY_UNCHANGED	0x08
-#define LIO_RSS_PARAM_ITABLE_UNCHANGED		0x04
-#define LIO_RSS_PARAM_HASH_INFO_UNCHANGED	0x02
-
-#define LIO_RSS_HASH_IPV4			0x100
-#define LIO_RSS_HASH_TCP_IPV4			0x200
-#define LIO_RSS_HASH_IPV6			0x400
-#define LIO_RSS_HASH_TCP_IPV6			0x1000
-#define LIO_RSS_HASH_IPV6_EX			0x800
-#define LIO_RSS_HASH_TCP_IPV6_EX		0x2000
-
-#define LIO_RSS_OFFLOAD_ALL (		\
-		LIO_RSS_HASH_IPV4 |	\
-		LIO_RSS_HASH_TCP_IPV4 |	\
-		LIO_RSS_HASH_IPV6 |	\
-		LIO_RSS_HASH_TCP_IPV6 |	\
-		LIO_RSS_HASH_IPV6_EX |	\
-		LIO_RSS_HASH_TCP_IPV6_EX)
-
-#define LIO_RSS_MAX_TABLE_SZ		128
-#define LIO_RSS_MAX_KEY_SZ		40
-#define LIO_RSS_PARAM_SIZE		16
-
-/* Interface flags communicated between host driver and core app. */
-enum lio_ifflags {
-	LIO_IFFLAG_PROMISC	= 0x01,
-	LIO_IFFLAG_ALLMULTI	= 0x02,
-	LIO_IFFLAG_UNICAST	= 0x10
-};
-
-/* Routines for reading and writing CSRs */
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define lio_write_csr(lio_dev, reg_off, value)				\
-	do {								\
-		typeof(lio_dev) _dev = lio_dev;				\
-		typeof(reg_off) _reg_off = reg_off;			\
-		typeof(value) _value = value;				\
-		PMD_REGS_LOG(_dev,					\
-			     "Write32: Reg: 0x%08lx Val: 0x%08lx\n",	\
-			     (unsigned long)_reg_off,			\
-			     (unsigned long)_value);			\
-		rte_write32(_value, _dev->hw_addr + _reg_off);		\
-	} while (0)
-
-#define lio_write_csr64(lio_dev, reg_off, val64)			\
-	do {								\
-		typeof(lio_dev) _dev = lio_dev;				\
-		typeof(reg_off) _reg_off = reg_off;			\
-		typeof(val64) _val64 = val64;				\
-		PMD_REGS_LOG(						\
-		    _dev,						\
-		    "Write64: Reg: 0x%08lx Val: 0x%016llx\n",		\
-		    (unsigned long)_reg_off,				\
-		    (unsigned long long)_val64);			\
-		rte_write64(_val64, _dev->hw_addr + _reg_off);		\
-	} while (0)
-
-#define lio_read_csr(lio_dev, reg_off)					\
-	({								\
-		typeof(lio_dev) _dev = lio_dev;				\
-		typeof(reg_off) _reg_off = reg_off;			\
-		uint32_t val = rte_read32(_dev->hw_addr + _reg_off);	\
-		PMD_REGS_LOG(_dev,					\
-			     "Read32: Reg: 0x%08lx Val: 0x%08lx\n",	\
-			     (unsigned long)_reg_off,			\
-			     (unsigned long)val);			\
-		val;							\
-	})
-
-#define lio_read_csr64(lio_dev, reg_off)				\
-	({								\
-		typeof(lio_dev) _dev = lio_dev;				\
-		typeof(reg_off) _reg_off = reg_off;			\
-		uint64_t val64 = rte_read64(_dev->hw_addr + _reg_off);	\
-		PMD_REGS_LOG(						\
-		    _dev,						\
-		    "Read64: Reg: 0x%08lx Val: 0x%016llx\n",		\
-		    (unsigned long)_reg_off,				\
-		    (unsigned long long)val64);				\
-		val64;							\
-	})
-#else
-#define lio_write_csr(lio_dev, reg_off, value)				\
-	rte_write32(value, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_write_csr64(lio_dev, reg_off, val64)			\
-	rte_write64(val64, (lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr(lio_dev, reg_off)					\
-	rte_read32((lio_dev)->hw_addr + (reg_off))
-
-#define lio_read_csr64(lio_dev, reg_off)				\
-	rte_read64((lio_dev)->hw_addr + (reg_off))
-#endif
-#endif /* _LIO_HW_DEFS_H_ */
diff --git a/drivers/net/liquidio/base/lio_mbox.c b/drivers/net/liquidio/base/lio_mbox.c
deleted file mode 100644
index 2ac2b1b334..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.c
+++ /dev/null
@@ -1,246 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_mbox.h"
-
-/**
- * lio_mbox_read:
- * @mbox: Pointer mailbox
- *
- * Reads the 8-bytes of data from the mbox register
- * Writes back the acknowledgment indicating completion of read
- */
-int
-lio_mbox_read(struct lio_mbox *mbox)
-{
-	union lio_mbox_message msg;
-	int ret = 0;
-
-	msg.mbox_msg64 = rte_read64(mbox->mbox_read_reg);
-
-	if ((msg.mbox_msg64 == LIO_PFVFACK) || (msg.mbox_msg64 == LIO_PFVFSIG))
-		return 0;
-
-	if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
-		mbox->mbox_req.data[mbox->mbox_req.recv_len - 1] =
-					msg.mbox_msg64;
-		mbox->mbox_req.recv_len++;
-	} else {
-		if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
-			mbox->mbox_resp.data[mbox->mbox_resp.recv_len - 1] =
-					msg.mbox_msg64;
-			mbox->mbox_resp.recv_len++;
-		} else {
-			if ((mbox->state & LIO_MBOX_STATE_IDLE) &&
-					(msg.s.type == LIO_MBOX_REQUEST)) {
-				mbox->state &= ~LIO_MBOX_STATE_IDLE;
-				mbox->state |= LIO_MBOX_STATE_REQ_RECEIVING;
-				mbox->mbox_req.msg.mbox_msg64 = msg.mbox_msg64;
-				mbox->mbox_req.q_no = mbox->q_no;
-				mbox->mbox_req.recv_len = 1;
-			} else {
-				if ((mbox->state &
-				     LIO_MBOX_STATE_RES_PENDING) &&
-				    (msg.s.type == LIO_MBOX_RESPONSE)) {
-					mbox->state &=
-						~LIO_MBOX_STATE_RES_PENDING;
-					mbox->state |=
-						LIO_MBOX_STATE_RES_RECEIVING;
-					mbox->mbox_resp.msg.mbox_msg64 =
-								msg.mbox_msg64;
-					mbox->mbox_resp.q_no = mbox->q_no;
-					mbox->mbox_resp.recv_len = 1;
-				} else {
-					rte_write64(LIO_PFVFERR,
-						    mbox->mbox_read_reg);
-					mbox->state |= LIO_MBOX_STATE_ERROR;
-					return -1;
-				}
-			}
-		}
-	}
-
-	if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVING) {
-		if (mbox->mbox_req.recv_len < msg.s.len) {
-			ret = 0;
-		} else {
-			mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVING;
-			mbox->state |= LIO_MBOX_STATE_REQ_RECEIVED;
-			ret = 1;
-		}
-	} else {
-		if (mbox->state & LIO_MBOX_STATE_RES_RECEIVING) {
-			if (mbox->mbox_resp.recv_len < msg.s.len) {
-				ret = 0;
-			} else {
-				mbox->state &= ~LIO_MBOX_STATE_RES_RECEIVING;
-				mbox->state |= LIO_MBOX_STATE_RES_RECEIVED;
-				ret = 1;
-			}
-		} else {
-			RTE_ASSERT(0);
-		}
-	}
-
-	rte_write64(LIO_PFVFACK, mbox->mbox_read_reg);
-
-	return ret;
-}
-
-/**
- * lio_mbox_write:
- * @lio_dev: Pointer lio device
- * @mbox_cmd: Cmd to send to mailbox.
- *
- * Populates the queue specific mbox structure
- * with cmd information.
- * Write the cmd to mbox register
- */
-int
-lio_mbox_write(struct lio_device *lio_dev,
-	       struct lio_mbox_cmd *mbox_cmd)
-{
-	struct lio_mbox *mbox = lio_dev->mbox[mbox_cmd->q_no];
-	uint32_t count, i, ret = LIO_MBOX_STATUS_SUCCESS;
-
-	if ((mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) &&
-			!(mbox->state & LIO_MBOX_STATE_REQ_RECEIVED))
-		return LIO_MBOX_STATUS_FAILED;
-
-	if ((mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) &&
-			!(mbox->state & LIO_MBOX_STATE_IDLE))
-		return LIO_MBOX_STATUS_BUSY;
-
-	if (mbox_cmd->msg.s.type == LIO_MBOX_REQUEST) {
-		rte_memcpy(&mbox->mbox_resp, mbox_cmd,
-			   sizeof(struct lio_mbox_cmd));
-		mbox->state = LIO_MBOX_STATE_RES_PENDING;
-	}
-
-	count = 0;
-
-	while (rte_read64(mbox->mbox_write_reg) != LIO_PFVFSIG) {
-		rte_delay_ms(1);
-		if (count++ == 1000) {
-			ret = LIO_MBOX_STATUS_FAILED;
-			break;
-		}
-	}
-
-	if (ret == LIO_MBOX_STATUS_SUCCESS) {
-		rte_write64(mbox_cmd->msg.mbox_msg64, mbox->mbox_write_reg);
-		for (i = 0; i < (uint32_t)(mbox_cmd->msg.s.len - 1); i++) {
-			count = 0;
-			while (rte_read64(mbox->mbox_write_reg) !=
-					LIO_PFVFACK) {
-				rte_delay_ms(1);
-				if (count++ == 1000) {
-					ret = LIO_MBOX_STATUS_FAILED;
-					break;
-				}
-			}
-			rte_write64(mbox_cmd->data[i], mbox->mbox_write_reg);
-		}
-	}
-
-	if (mbox_cmd->msg.s.type == LIO_MBOX_RESPONSE) {
-		mbox->state = LIO_MBOX_STATE_IDLE;
-		rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-	} else {
-		if ((!mbox_cmd->msg.s.resp_needed) ||
-				(ret == LIO_MBOX_STATUS_FAILED)) {
-			mbox->state &= ~LIO_MBOX_STATE_RES_PENDING;
-			if (!(mbox->state & (LIO_MBOX_STATE_REQ_RECEIVING |
-					     LIO_MBOX_STATE_REQ_RECEIVED)))
-				mbox->state = LIO_MBOX_STATE_IDLE;
-		}
-	}
-
-	return ret;
-}
-
-/**
- * lio_mbox_process_cmd:
- * @mbox: Pointer to mailbox
- * @mbox_cmd: Pointer to command received
- *
- * Process the cmd received in mbox
- */
-static int
-lio_mbox_process_cmd(struct lio_mbox *mbox,
-		     struct lio_mbox_cmd *mbox_cmd)
-{
-	struct lio_device *lio_dev = mbox->lio_dev;
-
-	if (mbox_cmd->msg.s.cmd == LIO_CORES_CRASHED)
-		lio_dev_err(lio_dev, "Octeon core(s) crashed or got stuck!\n");
-
-	return 0;
-}
-
-/**
- * Process the received mbox message.
- */
-int
-lio_mbox_process_message(struct lio_mbox *mbox)
-{
-	struct lio_mbox_cmd mbox_cmd;
-
-	if (mbox->state & LIO_MBOX_STATE_ERROR) {
-		if (mbox->state & (LIO_MBOX_STATE_RES_PENDING |
-				   LIO_MBOX_STATE_RES_RECEIVING)) {
-			rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
-				   sizeof(struct lio_mbox_cmd));
-			mbox->state = LIO_MBOX_STATE_IDLE;
-			rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-			mbox_cmd.recv_status = 1;
-			if (mbox_cmd.fn)
-				mbox_cmd.fn(mbox->lio_dev, &mbox_cmd,
-					    mbox_cmd.fn_arg);
-
-			return 0;
-		}
-
-		mbox->state = LIO_MBOX_STATE_IDLE;
-		rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-
-		return 0;
-	}
-
-	if (mbox->state & LIO_MBOX_STATE_RES_RECEIVED) {
-		rte_memcpy(&mbox_cmd, &mbox->mbox_resp,
-			   sizeof(struct lio_mbox_cmd));
-		mbox->state = LIO_MBOX_STATE_IDLE;
-		rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-		mbox_cmd.recv_status = 0;
-		if (mbox_cmd.fn)
-			mbox_cmd.fn(mbox->lio_dev, &mbox_cmd, mbox_cmd.fn_arg);
-
-		return 0;
-	}
-
-	if (mbox->state & LIO_MBOX_STATE_REQ_RECEIVED) {
-		rte_memcpy(&mbox_cmd, &mbox->mbox_req,
-			   sizeof(struct lio_mbox_cmd));
-		if (!mbox_cmd.msg.s.resp_needed) {
-			mbox->state &= ~LIO_MBOX_STATE_REQ_RECEIVED;
-			if (!(mbox->state & LIO_MBOX_STATE_RES_PENDING))
-				mbox->state = LIO_MBOX_STATE_IDLE;
-			rte_write64(LIO_PFVFSIG, mbox->mbox_read_reg);
-		}
-
-		lio_mbox_process_cmd(mbox, &mbox_cmd);
-
-		return 0;
-	}
-
-	RTE_ASSERT(0);
-
-	return 0;
-}
diff --git a/drivers/net/liquidio/base/lio_mbox.h b/drivers/net/liquidio/base/lio_mbox.h
deleted file mode 100644
index 457917e91f..0000000000
--- a/drivers/net/liquidio/base/lio_mbox.h
+++ /dev/null
@@ -1,102 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_MBOX_H_
-#define _LIO_MBOX_H_
-
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-
-/* Macros for Mail Box Communication */
-
-#define LIO_MBOX_DATA_MAX			32
-
-#define LIO_VF_ACTIVE				0x1
-#define LIO_VF_FLR_REQUEST			0x2
-#define LIO_CORES_CRASHED			0x3
-
-/* Macro for Read acknowledgment */
-#define LIO_PFVFACK				0xffffffffffffffff
-#define LIO_PFVFSIG				0x1122334455667788
-#define LIO_PFVFERR				0xDEADDEADDEADDEAD
-
-enum lio_mbox_cmd_status {
-	LIO_MBOX_STATUS_SUCCESS		= 0,
-	LIO_MBOX_STATUS_FAILED		= 1,
-	LIO_MBOX_STATUS_BUSY		= 2
-};
-
-enum lio_mbox_message_type {
-	LIO_MBOX_REQUEST	= 0,
-	LIO_MBOX_RESPONSE	= 1
-};
-
-union lio_mbox_message {
-	uint64_t mbox_msg64;
-	struct {
-		uint16_t type : 1;
-		uint16_t resp_needed : 1;
-		uint16_t cmd : 6;
-		uint16_t len : 8;
-		uint8_t params[6];
-	} s;
-};
-
-typedef void (*lio_mbox_callback)(void *, void *, void *);
-
-struct lio_mbox_cmd {
-	union lio_mbox_message msg;
-	uint64_t data[LIO_MBOX_DATA_MAX];
-	uint32_t q_no;
-	uint32_t recv_len;
-	uint32_t recv_status;
-	lio_mbox_callback fn;
-	void *fn_arg;
-};
-
-enum lio_mbox_state {
-	LIO_MBOX_STATE_IDLE		= 1,
-	LIO_MBOX_STATE_REQ_RECEIVING	= 2,
-	LIO_MBOX_STATE_REQ_RECEIVED	= 4,
-	LIO_MBOX_STATE_RES_PENDING	= 8,
-	LIO_MBOX_STATE_RES_RECEIVING	= 16,
-	LIO_MBOX_STATE_RES_RECEIVED	= 16,
-	LIO_MBOX_STATE_ERROR		= 32
-};
-
-struct lio_mbox {
-	/* A spinlock to protect access to this q_mbox. */
-	rte_spinlock_t lock;
-
-	struct lio_device *lio_dev;
-
-	uint32_t q_no;
-
-	enum lio_mbox_state state;
-
-	/* SLI_MAC_PF_MBOX_INT for PF, SLI_PKT_MBOX_INT for VF. */
-	void *mbox_int_reg;
-
-	/* SLI_PKT_PF_VF_MBOX_SIG(0) for PF,
-	 * SLI_PKT_PF_VF_MBOX_SIG(1) for VF.
-	 */
-	void *mbox_write_reg;
-
-	/* SLI_PKT_PF_VF_MBOX_SIG(1) for PF,
-	 * SLI_PKT_PF_VF_MBOX_SIG(0) for VF.
-	 */
-	void *mbox_read_reg;
-
-	struct lio_mbox_cmd mbox_req;
-
-	struct lio_mbox_cmd mbox_resp;
-
-};
-
-int lio_mbox_read(struct lio_mbox *mbox);
-int lio_mbox_write(struct lio_device *lio_dev,
-		   struct lio_mbox_cmd *mbox_cmd);
-int lio_mbox_process_message(struct lio_mbox *mbox);
-#endif	/* _LIO_MBOX_H_ */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
deleted file mode 100644
index ebcfbb1a5c..0000000000
--- a/drivers/net/liquidio/lio_ethdev.c
+++ /dev/null
@@ -1,2147 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <rte_string_fns.h>
-#include <ethdev_driver.h>
-#include <ethdev_pci.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-#include <rte_alarm.h>
-#include <rte_ether.h>
-
-#include "lio_logs.h"
-#include "lio_23xx_vf.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-/* Default RSS key in use */
-static uint8_t lio_rss_key[40] = {
-	0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
-	0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
-	0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
-	0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
-	0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
-};
-
-static const struct rte_eth_desc_lim lio_rx_desc_lim = {
-	.nb_max		= CN23XX_MAX_OQ_DESCRIPTORS,
-	.nb_min		= CN23XX_MIN_OQ_DESCRIPTORS,
-	.nb_align	= 1,
-};
-
-static const struct rte_eth_desc_lim lio_tx_desc_lim = {
-	.nb_max		= CN23XX_MAX_IQ_DESCRIPTORS,
-	.nb_min		= CN23XX_MIN_IQ_DESCRIPTORS,
-	.nb_align	= 1,
-};
-
-/* Wait for control command to reach nic. */
-static uint16_t
-lio_wait_for_ctrl_cmd(struct lio_device *lio_dev,
-		      struct lio_dev_ctrl_cmd *ctrl_cmd)
-{
-	uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-
-	while ((ctrl_cmd->cond == 0) && --timeout) {
-		lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-		rte_delay_ms(1);
-	}
-
-	return !timeout;
-}
-
-/**
- * \brief Send Rx control command
- * @param eth_dev Pointer to the structure rte_eth_dev
- * @param start_stop whether to start or stop
- */
-static int
-lio_send_rx_ctrl_cmd(struct rte_eth_dev *eth_dev, int start_stop)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_RX_CTL;
-	ctrl_pkt.ncmd.s.param1 = start_stop;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send RX Control message\n");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "RX Control command timed out\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-/* store statistics names and its offset in stats structure */
-struct rte_lio_xstats_name_off {
-	char name[RTE_ETH_XSTATS_NAME_SIZE];
-	unsigned int offset;
-};
-
-static const struct rte_lio_xstats_name_off rte_lio_stats_strings[] = {
-	{"rx_pkts", offsetof(struct octeon_rx_stats, total_rcvd)},
-	{"rx_bytes", offsetof(struct octeon_rx_stats, bytes_rcvd)},
-	{"rx_broadcast_pkts", offsetof(struct octeon_rx_stats, total_bcst)},
-	{"rx_multicast_pkts", offsetof(struct octeon_rx_stats, total_mcst)},
-	{"rx_flow_ctrl_pkts", offsetof(struct octeon_rx_stats, ctl_rcvd)},
-	{"rx_fifo_err", offsetof(struct octeon_rx_stats, fifo_err)},
-	{"rx_dmac_drop", offsetof(struct octeon_rx_stats, dmac_drop)},
-	{"rx_fcs_err", offsetof(struct octeon_rx_stats, fcs_err)},
-	{"rx_jabber_err", offsetof(struct octeon_rx_stats, jabber_err)},
-	{"rx_l2_err", offsetof(struct octeon_rx_stats, l2_err)},
-	{"rx_vxlan_pkts", offsetof(struct octeon_rx_stats, fw_rx_vxlan)},
-	{"rx_vxlan_err", offsetof(struct octeon_rx_stats, fw_rx_vxlan_err)},
-	{"rx_lro_pkts", offsetof(struct octeon_rx_stats, fw_lro_pkts)},
-	{"tx_pkts", (offsetof(struct octeon_tx_stats, total_pkts_sent)) +
-						sizeof(struct octeon_rx_stats)},
-	{"tx_bytes", (offsetof(struct octeon_tx_stats, total_bytes_sent)) +
-						sizeof(struct octeon_rx_stats)},
-	{"tx_broadcast_pkts",
-		(offsetof(struct octeon_tx_stats, bcast_pkts_sent)) +
-			sizeof(struct octeon_rx_stats)},
-	{"tx_multicast_pkts",
-		(offsetof(struct octeon_tx_stats, mcast_pkts_sent)) +
-			sizeof(struct octeon_rx_stats)},
-	{"tx_flow_ctrl_pkts", (offsetof(struct octeon_tx_stats, ctl_sent)) +
-						sizeof(struct octeon_rx_stats)},
-	{"tx_fifo_err", (offsetof(struct octeon_tx_stats, fifo_err)) +
-						sizeof(struct octeon_rx_stats)},
-	{"tx_total_collisions", (offsetof(struct octeon_tx_stats,
-					  total_collisions)) +
-						sizeof(struct octeon_rx_stats)},
-	{"tx_tso", (offsetof(struct octeon_tx_stats, fw_tso)) +
-						sizeof(struct octeon_rx_stats)},
-	{"tx_vxlan_pkts", (offsetof(struct octeon_tx_stats, fw_tx_vxlan)) +
-						sizeof(struct octeon_rx_stats)},
-};
-
-#define LIO_NB_XSTATS	RTE_DIM(rte_lio_stats_strings)
-
-/* Get hw stats of the port */
-static int
-lio_dev_xstats_get(struct rte_eth_dev *eth_dev, struct rte_eth_xstat *xstats,
-		   unsigned int n)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-	struct octeon_link_stats *hw_stats;
-	struct lio_link_stats_resp *resp;
-	struct lio_soft_command *sc;
-	uint32_t resp_size;
-	unsigned int i;
-	int retval;
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down\n",
-			    lio_dev->port_id);
-		return -EINVAL;
-	}
-
-	if (n < LIO_NB_XSTATS)
-		return LIO_NB_XSTATS;
-
-	resp_size = sizeof(struct lio_link_stats_resp);
-	sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
-	if (sc == NULL)
-		return -ENOMEM;
-
-	resp = (struct lio_link_stats_resp *)sc->virtrptr;
-	lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
-				 LIO_OPCODE_PORT_STATS, 0, 0, 0);
-
-	/* Setting wait time in seconds */
-	sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
-	retval = lio_send_soft_command(lio_dev, sc);
-	if (retval == LIO_IQ_SEND_FAILED) {
-		lio_dev_err(lio_dev, "failed to get port stats from firmware. status: %x\n",
-			    retval);
-		goto get_stats_fail;
-	}
-
-	while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
-		lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
-		lio_process_ordered_list(lio_dev);
-		rte_delay_ms(1);
-	}
-
-	retval = resp->status;
-	if (retval) {
-		lio_dev_err(lio_dev, "failed to get port stats from firmware\n");
-		goto get_stats_fail;
-	}
-
-	lio_swap_8B_data((uint64_t *)(&resp->link_stats),
-			 sizeof(struct octeon_link_stats) >> 3);
-
-	hw_stats = &resp->link_stats;
-
-	for (i = 0; i < LIO_NB_XSTATS; i++) {
-		xstats[i].id = i;
-		xstats[i].value =
-		    *(uint64_t *)(((char *)hw_stats) +
-					rte_lio_stats_strings[i].offset);
-	}
-
-	lio_free_soft_command(sc);
-
-	return LIO_NB_XSTATS;
-
-get_stats_fail:
-	lio_free_soft_command(sc);
-
-	return -1;
-}
-
-static int
-lio_dev_xstats_get_names(struct rte_eth_dev *eth_dev,
-			 struct rte_eth_xstat_name *xstats_names,
-			 unsigned limit __rte_unused)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	unsigned int i;
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down\n",
-			    lio_dev->port_id);
-		return -EINVAL;
-	}
-
-	if (xstats_names == NULL)
-		return LIO_NB_XSTATS;
-
-	/* Note: limit checked in rte_eth_xstats_names() */
-
-	for (i = 0; i < LIO_NB_XSTATS; i++) {
-		snprintf(xstats_names[i].name, sizeof(xstats_names[i].name),
-			 "%s", rte_lio_stats_strings[i].name);
-	}
-
-	return LIO_NB_XSTATS;
-}
-
-/* Reset hw stats for the port */
-static int
-lio_dev_xstats_reset(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-	int ret;
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down\n",
-			    lio_dev->port_id);
-		return -EINVAL;
-	}
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_CLEAR_STATS;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	ret = lio_send_ctrl_pkt(lio_dev, &ctrl_pkt);
-	if (ret != 0) {
-		lio_dev_err(lio_dev, "Failed to send clear stats command\n");
-		return ret;
-	}
-
-	ret = lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd);
-	if (ret != 0) {
-		lio_dev_err(lio_dev, "Clear stats command timed out\n");
-		return ret;
-	}
-
-	/* clear stored per queue stats */
-	if (*eth_dev->dev_ops->stats_reset == NULL)
-		return 0;
-	return (*eth_dev->dev_ops->stats_reset)(eth_dev);
-}
-
-/* Retrieve the device statistics (# packets in/out, # bytes in/out, etc */
-static int
-lio_dev_stats_get(struct rte_eth_dev *eth_dev,
-		  struct rte_eth_stats *stats)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_droq_stats *oq_stats;
-	struct lio_iq_stats *iq_stats;
-	struct lio_instr_queue *txq;
-	struct lio_droq *droq;
-	int i, iq_no, oq_no;
-	uint64_t bytes = 0;
-	uint64_t pkts = 0;
-	uint64_t drop = 0;
-
-	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
-		iq_no = lio_dev->linfo.txpciq[i].s.q_no;
-		txq = lio_dev->instr_queue[iq_no];
-		if (txq != NULL) {
-			iq_stats = &txq->stats;
-			pkts += iq_stats->tx_done;
-			drop += iq_stats->tx_dropped;
-			bytes += iq_stats->tx_tot_bytes;
-		}
-	}
-
-	stats->opackets = pkts;
-	stats->obytes = bytes;
-	stats->oerrors = drop;
-
-	pkts = 0;
-	drop = 0;
-	bytes = 0;
-
-	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
-		oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
-		droq = lio_dev->droq[oq_no];
-		if (droq != NULL) {
-			oq_stats = &droq->stats;
-			pkts += oq_stats->rx_pkts_received;
-			drop += (oq_stats->rx_dropped +
-					oq_stats->dropped_toomany +
-					oq_stats->dropped_nomem);
-			bytes += oq_stats->rx_bytes_received;
-		}
-	}
-	stats->ibytes = bytes;
-	stats->ipackets = pkts;
-	stats->ierrors = drop;
-
-	return 0;
-}
-
-static int
-lio_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_droq_stats *oq_stats;
-	struct lio_iq_stats *iq_stats;
-	struct lio_instr_queue *txq;
-	struct lio_droq *droq;
-	int i, iq_no, oq_no;
-
-	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
-		iq_no = lio_dev->linfo.txpciq[i].s.q_no;
-		txq = lio_dev->instr_queue[iq_no];
-		if (txq != NULL) {
-			iq_stats = &txq->stats;
-			memset(iq_stats, 0, sizeof(struct lio_iq_stats));
-		}
-	}
-
-	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
-		oq_no = lio_dev->linfo.rxpciq[i].s.q_no;
-		droq = lio_dev->droq[oq_no];
-		if (droq != NULL) {
-			oq_stats = &droq->stats;
-			memset(oq_stats, 0, sizeof(struct lio_droq_stats));
-		}
-	}
-
-	return 0;
-}
-
-static int
-lio_dev_info_get(struct rte_eth_dev *eth_dev,
-		 struct rte_eth_dev_info *devinfo)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
-	switch (pci_dev->id.subsystem_device_id) {
-	/* CN23xx 10G cards */
-	case PCI_SUBSYS_DEV_ID_CN2350_210:
-	case PCI_SUBSYS_DEV_ID_CN2360_210:
-	case PCI_SUBSYS_DEV_ID_CN2350_210SVPN3:
-	case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
-	case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
-	case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
-		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
-		break;
-	/* CN23xx 25G cards */
-	case PCI_SUBSYS_DEV_ID_CN2350_225:
-	case PCI_SUBSYS_DEV_ID_CN2360_225:
-		devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
-		break;
-	default:
-		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
-		lio_dev_err(lio_dev,
-			    "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
-		return -EINVAL;
-	}
-
-	devinfo->max_rx_queues = lio_dev->max_rx_queues;
-	devinfo->max_tx_queues = lio_dev->max_tx_queues;
-
-	devinfo->min_rx_bufsize = LIO_MIN_RX_BUF_SIZE;
-	devinfo->max_rx_pktlen = LIO_MAX_RX_PKTLEN;
-
-	devinfo->max_mac_addrs = 1;
-
-	devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM		|
-				    RTE_ETH_RX_OFFLOAD_UDP_CKSUM		|
-				    RTE_ETH_RX_OFFLOAD_TCP_CKSUM		|
-				    RTE_ETH_RX_OFFLOAD_VLAN_STRIP		|
-				    RTE_ETH_RX_OFFLOAD_RSS_HASH);
-	devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
-				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
-				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
-				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
-
-	devinfo->rx_desc_lim = lio_rx_desc_lim;
-	devinfo->tx_desc_lim = lio_tx_desc_lim;
-
-	devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
-	devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-	devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4			|
-					   RTE_ETH_RSS_NONFRAG_IPV4_TCP	|
-					   RTE_ETH_RSS_IPV6			|
-					   RTE_ETH_RSS_NONFRAG_IPV6_TCP	|
-					   RTE_ETH_RSS_IPV6_EX		|
-					   RTE_ETH_RSS_IPV6_TCP_EX);
-	return 0;
-}
-
-static int
-lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	PMD_INIT_FUNC_TRACE();
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down, can't set MTU\n",
-			    lio_dev->port_id);
-		return -EINVAL;
-	}
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_MTU;
-	ctrl_pkt.ncmd.s.param1 = mtu;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send command to change MTU\n");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "Command to change MTU timed out\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-static int
-lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
-			struct rte_eth_rss_reta_entry64 *reta_conf,
-			uint16_t reta_size)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
-	struct lio_rss_set *rss_param;
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-	int i, j, index;
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down, can't update reta\n",
-			    lio_dev->port_id);
-		return -EINVAL;
-	}
-
-	if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
-		lio_dev_err(lio_dev,
-			    "The size of hash lookup table configured (%d) doesn't match the number the hardware can support (%d)\n",
-			    reta_size, LIO_RSS_MAX_TABLE_SZ);
-		return -EINVAL;
-	}
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
-	ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	rss_param->param.flags = 0xF;
-	rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
-	rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
-
-	for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
-			if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
-				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
-				rss_state->itable[index] = reta_conf[i].reta[j];
-			}
-		}
-	}
-
-	rss_state->itable_size = LIO_RSS_MAX_TABLE_SZ;
-	memcpy(rss_param->itable, rss_state->itable, rss_state->itable_size);
-
-	lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to set rss hash\n");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "Set rss hash timed out\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-static int
-lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
-		       struct rte_eth_rss_reta_entry64 *reta_conf,
-		       uint16_t reta_size)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
-	int i, num;
-
-	if (reta_size != LIO_RSS_MAX_TABLE_SZ) {
-		lio_dev_err(lio_dev,
-			    "The size of hash lookup table configured (%d) doesn't match the number the hardware can support (%d)\n",
-			    reta_size, LIO_RSS_MAX_TABLE_SZ);
-		return -EINVAL;
-	}
-
-	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
-
-	for (i = 0; i < num; i++) {
-		memcpy(reta_conf->reta,
-		       &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
-		       RTE_ETH_RETA_GROUP_SIZE);
-		reta_conf++;
-	}
-
-	return 0;
-}
-
-static int
-lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
-			  struct rte_eth_rss_conf *rss_conf)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
-	uint8_t *hash_key = NULL;
-	uint64_t rss_hf = 0;
-
-	if (rss_state->hash_disable) {
-		lio_dev_info(lio_dev, "RSS disabled in nic\n");
-		rss_conf->rss_hf = 0;
-		return 0;
-	}
-
-	/* Get key value */
-	hash_key = rss_conf->rss_key;
-	if (hash_key != NULL)
-		memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
-
-	if (rss_state->ip)
-		rss_hf |= RTE_ETH_RSS_IPV4;
-	if (rss_state->tcp_hash)
-		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
-	if (rss_state->ipv6)
-		rss_hf |= RTE_ETH_RSS_IPV6;
-	if (rss_state->ipv6_tcp_hash)
-		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
-	if (rss_state->ipv6_ex)
-		rss_hf |= RTE_ETH_RSS_IPV6_EX;
-	if (rss_state->ipv6_tcp_ex_hash)
-		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
-
-	rss_conf->rss_hf = rss_hf;
-
-	return 0;
-}
-
-static int
-lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
-			struct rte_eth_rss_conf *rss_conf)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
-	struct lio_rss_set *rss_param;
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down, can't update hash\n",
-			    lio_dev->port_id);
-		return -EINVAL;
-	}
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	rss_param = (struct lio_rss_set *)&ctrl_pkt.udd[0];
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_SET_RSS;
-	ctrl_pkt.ncmd.s.more = sizeof(struct lio_rss_set) >> 3;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	rss_param->param.flags = 0xF;
-
-	if (rss_conf->rss_key) {
-		rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_KEY_UNCHANGED;
-		rss_state->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-		rss_param->param.hashkeysize = LIO_RSS_MAX_KEY_SZ;
-		memcpy(rss_state->hash_key, rss_conf->rss_key,
-		       rss_state->hash_key_size);
-		memcpy(rss_param->key, rss_state->hash_key,
-		       rss_state->hash_key_size);
-	}
-
-	if ((rss_conf->rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
-		/* Can't disable rss through hash flags,
-		 * if it is enabled by default during init
-		 */
-		if (!rss_state->hash_disable)
-			return -EINVAL;
-
-		/* This is for --disable-rss during testpmd launch */
-		rss_param->param.flags |= LIO_RSS_PARAM_DISABLE_RSS;
-	} else {
-		uint32_t hashinfo = 0;
-
-		/* Can't enable rss if disabled by default during init */
-		if (rss_state->hash_disable)
-			return -EINVAL;
-
-		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
-			hashinfo |= LIO_RSS_HASH_IPV4;
-			rss_state->ip = 1;
-		} else {
-			rss_state->ip = 0;
-		}
-
-		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
-			hashinfo |= LIO_RSS_HASH_TCP_IPV4;
-			rss_state->tcp_hash = 1;
-		} else {
-			rss_state->tcp_hash = 0;
-		}
-
-		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
-			hashinfo |= LIO_RSS_HASH_IPV6;
-			rss_state->ipv6 = 1;
-		} else {
-			rss_state->ipv6 = 0;
-		}
-
-		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
-			hashinfo |= LIO_RSS_HASH_TCP_IPV6;
-			rss_state->ipv6_tcp_hash = 1;
-		} else {
-			rss_state->ipv6_tcp_hash = 0;
-		}
-
-		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
-			hashinfo |= LIO_RSS_HASH_IPV6_EX;
-			rss_state->ipv6_ex = 1;
-		} else {
-			rss_state->ipv6_ex = 0;
-		}
-
-		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
-			hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
-			rss_state->ipv6_tcp_ex_hash = 1;
-		} else {
-			rss_state->ipv6_tcp_ex_hash = 0;
-		}
-
-		rss_param->param.flags &= ~LIO_RSS_PARAM_HASH_INFO_UNCHANGED;
-		rss_param->param.hashinfo = hashinfo;
-	}
-
-	lio_swap_8B_data((uint64_t *)rss_param, LIO_RSS_PARAM_SIZE >> 3);
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to set rss hash\n");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "Set rss hash timed out\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-/**
- * Add vxlan dest udp port for an interface.
- *
- * @param eth_dev
- *  Pointer to the structure rte_eth_dev
- * @param udp_tnl
- *  udp tunnel conf
- *
- * @return
- *  On success return 0
- *  On failure return -1
- */
-static int
-lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
-		       struct rte_eth_udp_tunnel *udp_tnl)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	if (udp_tnl == NULL)
-		return -EINVAL;
-
-	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
-		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
-		return -1;
-	}
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
-	ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
-	ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_ADD;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_ADD command\n");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "VXLAN_PORT_ADD command timed out\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-/**
- * Remove vxlan dest udp port for an interface.
- *
- * @param eth_dev
- *  Pointer to the structure rte_eth_dev
- * @param udp_tnl
- *  udp tunnel conf
- *
- * @return
- *  On success return 0
- *  On failure return -1
- */
-static int
-lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
-		       struct rte_eth_udp_tunnel *udp_tnl)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	if (udp_tnl == NULL)
-		return -EINVAL;
-
-	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
-		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
-		return -1;
-	}
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_VXLAN_PORT_CONFIG;
-	ctrl_pkt.ncmd.s.param1 = udp_tnl->udp_port;
-	ctrl_pkt.ncmd.s.more = LIO_CMD_VXLAN_PORT_DEL;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send VXLAN_PORT_DEL command\n");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "VXLAN_PORT_DEL command timed out\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-static int
-lio_dev_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	if (lio_dev->linfo.vlan_is_admin_assigned)
-		return -EPERM;
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = on ?
-			LIO_CMD_ADD_VLAN_FILTER : LIO_CMD_DEL_VLAN_FILTER;
-	ctrl_pkt.ncmd.s.param1 = vlan_id;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to %s VLAN port\n",
-			    on ? "add" : "remove");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "Command to %s VLAN port timed out\n",
-			    on ? "add" : "remove");
-		return -1;
-	}
-
-	return 0;
-}
-
-static uint64_t
-lio_hweight64(uint64_t w)
-{
-	uint64_t res = w - ((w >> 1) & 0x5555555555555555ul);
-
-	res =
-	    (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
-	res = (res + (res >> 4)) & 0x0F0F0F0F0F0F0F0Ful;
-	res = res + (res >> 8);
-	res = res + (res >> 16);
-
-	return (res + (res >> 32)) & 0x00000000000000FFul;
-}
-
-static int
-lio_dev_link_update(struct rte_eth_dev *eth_dev,
-		    int wait_to_complete __rte_unused)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct rte_eth_link link;
-
-	/* Initialize */
-	memset(&link, 0, sizeof(link));
-	link.link_status = RTE_ETH_LINK_DOWN;
-	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
-	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
-
-	/* Return what we found */
-	if (lio_dev->linfo.link.s.link_up == 0) {
-		/* Interface is down */
-		return rte_eth_linkstatus_set(eth_dev, &link);
-	}
-
-	link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
-	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	switch (lio_dev->linfo.link.s.speed) {
-	case LIO_LINK_SPEED_10000:
-		link.link_speed = RTE_ETH_SPEED_NUM_10G;
-		break;
-	case LIO_LINK_SPEED_25000:
-		link.link_speed = RTE_ETH_SPEED_NUM_25G;
-		break;
-	default:
-		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
-		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
-	}
-
-	return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-/**
- * \brief Net device enable, disable allmulticast
- * @param eth_dev Pointer to the structure rte_eth_dev
- *
- * @return
- *  On success return 0
- *  On failure return negative errno
- */
-static int
-lio_change_dev_flag(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	/* Create a ctrl pkt command to be sent to core app. */
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_CHANGE_DEVFLAGS;
-	ctrl_pkt.ncmd.s.param1 = lio_dev->ifflags;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send change flag message\n");
-		return -EAGAIN;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "Change dev flag command timed out\n");
-		return -ETIMEDOUT;
-	}
-
-	return 0;
-}
-
-static int
-lio_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
-		lio_dev_err(lio_dev, "Require firmware version >= %s\n",
-			    LIO_VF_TRUST_MIN_VERSION);
-		return -EAGAIN;
-	}
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down, can't enable promiscuous\n",
-			    lio_dev->port_id);
-		return -EAGAIN;
-	}
-
-	lio_dev->ifflags |= LIO_IFFLAG_PROMISC;
-	return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	if (strcmp(lio_dev->firmware_version, LIO_VF_TRUST_MIN_VERSION) < 0) {
-		lio_dev_err(lio_dev, "Require firmware version >= %s\n",
-			    LIO_VF_TRUST_MIN_VERSION);
-		return -EAGAIN;
-	}
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down, can't disable promiscuous\n",
-			    lio_dev->port_id);
-		return -EAGAIN;
-	}
-
-	lio_dev->ifflags &= ~LIO_IFFLAG_PROMISC;
-	return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down, can't enable multicast\n",
-			    lio_dev->port_id);
-		return -EAGAIN;
-	}
-
-	lio_dev->ifflags |= LIO_IFFLAG_ALLMULTI;
-	return lio_change_dev_flag(eth_dev);
-}
-
-static int
-lio_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	if (!lio_dev->intf_open) {
-		lio_dev_err(lio_dev, "Port %d down, can't disable multicast\n",
-			    lio_dev->port_id);
-		return -EAGAIN;
-	}
-
-	lio_dev->ifflags &= ~LIO_IFFLAG_ALLMULTI;
-	return lio_change_dev_flag(eth_dev);
-}
-
-static void
-lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
-	struct rte_eth_rss_reta_entry64 reta_conf[8];
-	struct rte_eth_rss_conf rss_conf;
-	uint16_t i;
-
-	/* Configure the RSS key and the RSS protocols used to compute
-	 * the RSS hash of input packets.
-	 */
-	rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
-	if ((rss_conf.rss_hf & LIO_RSS_OFFLOAD_ALL) == 0) {
-		rss_state->hash_disable = 1;
-		lio_dev_rss_hash_update(eth_dev, &rss_conf);
-		return;
-	}
-
-	if (rss_conf.rss_key == NULL)
-		rss_conf.rss_key = lio_rss_key; /* Default hash key */
-
-	lio_dev_rss_hash_update(eth_dev, &rss_conf);
-
-	memset(reta_conf, 0, sizeof(reta_conf));
-	for (i = 0; i < LIO_RSS_MAX_TABLE_SZ; i++) {
-		uint8_t q_idx, conf_idx, reta_idx;
-
-		q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
-				  i % eth_dev->data->nb_rx_queues : 0);
-		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
-		reta_conf[conf_idx].reta[reta_idx] = q_idx;
-		reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
-	}
-
-	lio_dev_rss_reta_update(eth_dev, reta_conf, LIO_RSS_MAX_TABLE_SZ);
-}
-
-static void
-lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_rss_ctx *rss_state = &lio_dev->rss_state;
-	struct rte_eth_rss_conf rss_conf;
-
-	switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
-	case RTE_ETH_MQ_RX_RSS:
-		lio_dev_rss_configure(eth_dev);
-		break;
-	case RTE_ETH_MQ_RX_NONE:
-	/* if mq_mode is none, disable rss mode. */
-	default:
-		memset(&rss_conf, 0, sizeof(rss_conf));
-		rss_state->hash_disable = 1;
-		lio_dev_rss_hash_update(eth_dev, &rss_conf);
-	}
-}
-
-/**
- * Setup our receive queue/ringbuffer. This is the
- * queue the Octeon uses to send us packets and
- * responses. We are given a memory pool for our
- * packet buffers that are used to populate the receive
- * queue.
- *
- * @param eth_dev
- *    Pointer to the structure rte_eth_dev
- * @param q_no
- *    Queue number
- * @param num_rx_descs
- *    Number of entries in the queue
- * @param socket_id
- *    Where to allocate memory
- * @param rx_conf
- *    Pointer to the structure rte_eth_rxconf
- * @param mp
- *    Pointer to the packet pool
- *
- * @return
- *    - On success, return 0
- *    - On failure, return -1
- */
-static int
-lio_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
-		       uint16_t num_rx_descs, unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf __rte_unused,
-		       struct rte_mempool *mp)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct rte_pktmbuf_pool_private *mbp_priv;
-	uint32_t fw_mapped_oq;
-	uint16_t buf_size;
-
-	if (q_no >= lio_dev->nb_rx_queues) {
-		lio_dev_err(lio_dev, "Invalid rx queue number %u\n", q_no);
-		return -EINVAL;
-	}
-
-	lio_dev_dbg(lio_dev, "setting up rx queue %u\n", q_no);
-
-	fw_mapped_oq = lio_dev->linfo.rxpciq[q_no].s.q_no;
-
-	/* Free previous allocation if any */
-	if (eth_dev->data->rx_queues[q_no] != NULL) {
-		lio_dev_rx_queue_release(eth_dev, q_no);
-		eth_dev->data->rx_queues[q_no] = NULL;
-	}
-
-	mbp_priv = rte_mempool_get_priv(mp);
-	buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
-	if (lio_setup_droq(lio_dev, fw_mapped_oq, num_rx_descs, buf_size, mp,
-			   socket_id)) {
-		lio_dev_err(lio_dev, "droq allocation failed\n");
-		return -1;
-	}
-
-	eth_dev->data->rx_queues[q_no] = lio_dev->droq[fw_mapped_oq];
-
-	return 0;
-}
-
-/**
- * Release the receive queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- *    Pointer to Ethernet device structure.
- * @param q_no
- *    Receive queue index.
- *
- * @return
- *    - nothing
- */
-void
-lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
-	struct lio_droq *droq = dev->data->rx_queues[q_no];
-	int oq_no;
-
-	if (droq) {
-		oq_no = droq->q_no;
-		lio_delete_droq_queue(droq->lio_dev, oq_no);
-	}
-}
-
-/**
- * Allocate and initialize SW ring. Initialize associated HW registers.
- *
- * @param eth_dev
- *   Pointer to structure rte_eth_dev
- *
- * @param q_no
- *   Queue number
- *
- * @param num_tx_descs
- *   Number of ringbuffer descriptors
- *
- * @param socket_id
- *   NUMA socket id, used for memory allocations
- *
- * @param tx_conf
- *   Pointer to the structure rte_eth_txconf
- *
- * @return
- *   - On success, return 0
- *   - On failure, return -errno value
- */
-static int
-lio_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
-		       uint16_t num_tx_descs, unsigned int socket_id,
-		       const struct rte_eth_txconf *tx_conf __rte_unused)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	int fw_mapped_iq = lio_dev->linfo.txpciq[q_no].s.q_no;
-	int retval;
-
-	if (q_no >= lio_dev->nb_tx_queues) {
-		lio_dev_err(lio_dev, "Invalid tx queue number %u\n", q_no);
-		return -EINVAL;
-	}
-
-	lio_dev_dbg(lio_dev, "setting up tx queue %u\n", q_no);
-
-	/* Free previous allocation if any */
-	if (eth_dev->data->tx_queues[q_no] != NULL) {
-		lio_dev_tx_queue_release(eth_dev, q_no);
-		eth_dev->data->tx_queues[q_no] = NULL;
-	}
-
-	retval = lio_setup_iq(lio_dev, q_no, lio_dev->linfo.txpciq[q_no],
-			      num_tx_descs, lio_dev, socket_id);
-
-	if (retval) {
-		lio_dev_err(lio_dev, "Runtime IQ(TxQ) creation failed.\n");
-		return retval;
-	}
-
-	retval = lio_setup_sglists(lio_dev, q_no, fw_mapped_iq,
-				lio_dev->instr_queue[fw_mapped_iq]->nb_desc,
-				socket_id);
-
-	if (retval) {
-		lio_delete_instruction_queue(lio_dev, fw_mapped_iq);
-		return retval;
-	}
-
-	eth_dev->data->tx_queues[q_no] = lio_dev->instr_queue[fw_mapped_iq];
-
-	return 0;
-}
-
-/**
- * Release the transmit queue/ringbuffer. Called by
- * the upper layers.
- *
- * @param eth_dev
- *    Pointer to Ethernet device structure.
- * @param q_no
- *   Transmit queue index.
- *
- * @return
- *    - nothing
- */
-void
-lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
-{
-	struct lio_instr_queue *tq = dev->data->tx_queues[q_no];
-	uint32_t fw_mapped_iq_no;
-
-
-	if (tq) {
-		/* Free sg_list */
-		lio_delete_sglist(tq);
-
-		fw_mapped_iq_no = tq->txpciq.s.q_no;
-		lio_delete_instruction_queue(tq->lio_dev, fw_mapped_iq_no);
-	}
-}
-
-/**
- * API to check link state.
- */
-static void
-lio_dev_get_link_status(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-	struct lio_link_status_resp *resp;
-	union octeon_link_status *ls;
-	struct lio_soft_command *sc;
-	uint32_t resp_size;
-
-	if (!lio_dev->intf_open)
-		return;
-
-	resp_size = sizeof(struct lio_link_status_resp);
-	sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
-	if (sc == NULL)
-		return;
-
-	resp = (struct lio_link_status_resp *)sc->virtrptr;
-	lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
-				 LIO_OPCODE_INFO, 0, 0, 0);
-
-	/* Setting wait time in seconds */
-	sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
-	if (lio_send_soft_command(lio_dev, sc) == LIO_IQ_SEND_FAILED)
-		goto get_status_fail;
-
-	while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
-		lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
-		rte_delay_ms(1);
-	}
-
-	if (resp->status)
-		goto get_status_fail;
-
-	ls = &resp->link_info.link;
-
-	lio_swap_8B_data((uint64_t *)ls, sizeof(union octeon_link_status) >> 3);
-
-	if (lio_dev->linfo.link.link_status64 != ls->link_status64) {
-		if (ls->s.mtu < eth_dev->data->mtu) {
-			lio_dev_info(lio_dev, "Lowered VF MTU to %d as PF MTU dropped\n",
-				     ls->s.mtu);
-			eth_dev->data->mtu = ls->s.mtu;
-		}
-		lio_dev->linfo.link.link_status64 = ls->link_status64;
-		lio_dev_link_update(eth_dev, 0);
-	}
-
-	lio_free_soft_command(sc);
-
-	return;
-
-get_status_fail:
-	lio_free_soft_command(sc);
-}
-
-/* This function will be invoked every LSC_TIMEOUT ns (100ms)
- * and will update link state if it changes.
- */
-static void
-lio_sync_link_state_check(void *eth_dev)
-{
-	struct lio_device *lio_dev =
-		(((struct rte_eth_dev *)eth_dev)->data->dev_private);
-
-	if (lio_dev->port_configured)
-		lio_dev_get_link_status(eth_dev);
-
-	/* Schedule periodic link status check.
-	 * Stop the check if the interface is closed and start it again when opened.
-	 */
-	if (lio_dev->intf_open)
-		rte_eal_alarm_set(LIO_LSC_TIMEOUT, lio_sync_link_state_check,
-				  eth_dev);
-}
-
-static int
-lio_dev_start(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-	int ret = 0;
-
-	lio_dev_info(lio_dev, "Starting port %d\n", eth_dev->data->port_id);
-
-	if (lio_dev->fn_list.enable_io_queues(lio_dev))
-		return -1;
-
-	if (lio_send_rx_ctrl_cmd(eth_dev, 1))
-		return -1;
-
-	/* Ready for link status updates */
-	lio_dev->intf_open = 1;
-	rte_mb();
-
-	/* Configure RSS if device configured with multiple RX queues. */
-	lio_dev_mq_rx_configure(eth_dev);
-
-	/* Before updating the link info,
-	 * linfo.link.link_status64 must be set to 0.
-	 */
-	lio_dev->linfo.link.link_status64 = 0;
-
-	/* start polling for lsc */
-	ret = rte_eal_alarm_set(LIO_LSC_TIMEOUT,
-				lio_sync_link_state_check,
-				eth_dev);
-	if (ret) {
-		lio_dev_err(lio_dev,
-			    "link state check handler creation failed\n");
-		goto dev_lsc_handle_error;
-	}
-
-	while ((lio_dev->linfo.link.link_status64 == 0) && (--timeout))
-		rte_delay_ms(1);
-
-	if (lio_dev->linfo.link.link_status64 == 0) {
-		ret = -1;
-		goto dev_mtu_set_error;
-	}
-
-	ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
-	if (ret != 0)
-		goto dev_mtu_set_error;
-
-	return 0;
-
-dev_mtu_set_error:
-	rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
-dev_lsc_handle_error:
-	lio_dev->intf_open = 0;
-	lio_send_rx_ctrl_cmd(eth_dev, 0);
-
-	return ret;
-}
-
-/* Stop device and disable input/output functions */
-static int
-lio_dev_stop(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	lio_dev_info(lio_dev, "Stopping port %d\n", eth_dev->data->port_id);
-	eth_dev->data->dev_started = 0;
-	lio_dev->intf_open = 0;
-	rte_mb();
-
-	/* Cancel callback if still running. */
-	rte_eal_alarm_cancel(lio_sync_link_state_check, eth_dev);
-
-	lio_send_rx_ctrl_cmd(eth_dev, 0);
-
-	lio_wait_for_instr_fetch(lio_dev);
-
-	/* Clear recorded link status */
-	lio_dev->linfo.link.link_status64 = 0;
-
-	return 0;
-}
-
-static int
-lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	if (!lio_dev->intf_open) {
-		lio_dev_info(lio_dev, "Port is stopped, Start the port first\n");
-		return 0;
-	}
-
-	if (lio_dev->linfo.link.s.link_up) {
-		lio_dev_info(lio_dev, "Link is already UP\n");
-		return 0;
-	}
-
-	if (lio_send_rx_ctrl_cmd(eth_dev, 1)) {
-		lio_dev_err(lio_dev, "Unable to set Link UP\n");
-		return -1;
-	}
-
-	lio_dev->linfo.link.s.link_up = 1;
-	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-
-	return 0;
-}
-
-static int
-lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	if (!lio_dev->intf_open) {
-		lio_dev_info(lio_dev, "Port is stopped, Start the port first\n");
-		return 0;
-	}
-
-	if (!lio_dev->linfo.link.s.link_up) {
-		lio_dev_info(lio_dev, "Link is already DOWN\n");
-		return 0;
-	}
-
-	lio_dev->linfo.link.s.link_up = 0;
-	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
-
-	if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
-		lio_dev->linfo.link.s.link_up = 1;
-		eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
-		lio_dev_err(lio_dev, "Unable to set Link Down\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-/**
- * Reset and stop the device. This occurs on the first
- * call to this routine. Subsequent calls will simply
- * return. NB: This will require the NIC to be rebooted.
- *
- * @param eth_dev
- *    Pointer to the structure rte_eth_dev
- *
- * @return
- *    - nothing
- */
-static int
-lio_dev_close(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	int ret = 0;
-
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
-		return 0;
-
-	lio_dev_info(lio_dev, "closing port %d\n", eth_dev->data->port_id);
-
-	if (lio_dev->intf_open)
-		ret = lio_dev_stop(eth_dev);
-
-	/* Reset ioq regs */
-	lio_dev->fn_list.setup_device_regs(lio_dev);
-
-	if (lio_dev->pci_dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
-		cn23xx_vf_ask_pf_to_do_flr(lio_dev);
-		rte_delay_ms(LIO_PCI_FLR_WAIT);
-	}
-
-	/* lio_free_mbox */
-	lio_dev->fn_list.free_mbox(lio_dev);
-
-	/* Free glist resources */
-	rte_free(lio_dev->glist_head);
-	rte_free(lio_dev->glist_lock);
-	lio_dev->glist_head = NULL;
-	lio_dev->glist_lock = NULL;
-
-	lio_dev->port_configured = 0;
-
-	 /* Delete all queues */
-	lio_dev_clear_queues(eth_dev);
-
-	return ret;
-}
-
-/**
- * Enable tunnel rx checksum verification from firmware.
- */
-static void
-lio_enable_hw_tunnel_rx_checksum(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_RX_CSUM_CTL;
-	ctrl_pkt.ncmd.s.param1 = LIO_CMD_RXCSUM_ENABLE;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send TNL_RX_CSUM command\n");
-		return;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
-		lio_dev_err(lio_dev, "TNL_RX_CSUM command timed out\n");
-}
-
-/**
- * Enable checksum calculation for inner packet in a tunnel.
- */
-static void
-lio_enable_hw_tunnel_tx_checksum(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_TNL_TX_CSUM_CTL;
-	ctrl_pkt.ncmd.s.param1 = LIO_CMD_TXCSUM_ENABLE;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send TNL_TX_CSUM command\n");
-		return;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd))
-		lio_dev_err(lio_dev, "TNL_TX_CSUM command timed out\n");
-}
-
-static int
-lio_send_queue_count_update(struct rte_eth_dev *eth_dev, int num_txq,
-			    int num_rxq)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	struct lio_dev_ctrl_cmd ctrl_cmd;
-	struct lio_ctrl_pkt ctrl_pkt;
-
-	if (strcmp(lio_dev->firmware_version, LIO_Q_RECONF_MIN_VERSION) < 0) {
-		lio_dev_err(lio_dev, "Require firmware version >= %s\n",
-			    LIO_Q_RECONF_MIN_VERSION);
-		return -ENOTSUP;
-	}
-
-	/* flush added to prevent cmd failure
-	 * in case the queue is full
-	 */
-	lio_flush_iq(lio_dev, lio_dev->instr_queue[0]);
-
-	memset(&ctrl_pkt, 0, sizeof(struct lio_ctrl_pkt));
-	memset(&ctrl_cmd, 0, sizeof(struct lio_dev_ctrl_cmd));
-
-	ctrl_cmd.eth_dev = eth_dev;
-	ctrl_cmd.cond = 0;
-
-	ctrl_pkt.ncmd.s.cmd = LIO_CMD_QUEUE_COUNT_CTL;
-	ctrl_pkt.ncmd.s.param1 = num_txq;
-	ctrl_pkt.ncmd.s.param2 = num_rxq;
-	ctrl_pkt.ctrl_cmd = &ctrl_cmd;
-
-	if (lio_send_ctrl_pkt(lio_dev, &ctrl_pkt)) {
-		lio_dev_err(lio_dev, "Failed to send queue count control command\n");
-		return -1;
-	}
-
-	if (lio_wait_for_ctrl_cmd(lio_dev, &ctrl_cmd)) {
-		lio_dev_err(lio_dev, "Queue count control command timed out\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-static int
-lio_reconf_queues(struct rte_eth_dev *eth_dev, int num_txq, int num_rxq)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	int ret;
-
-	if (lio_dev->nb_rx_queues != num_rxq ||
-	    lio_dev->nb_tx_queues != num_txq) {
-		if (lio_send_queue_count_update(eth_dev, num_txq, num_rxq))
-			return -1;
-		lio_dev->nb_rx_queues = num_rxq;
-		lio_dev->nb_tx_queues = num_txq;
-	}
-
-	if (lio_dev->intf_open) {
-		ret = lio_dev_stop(eth_dev);
-		if (ret != 0)
-			return ret;
-	}
-
-	/* Reset ioq registers */
-	if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
-		lio_dev_err(lio_dev, "Failed to configure device registers\n");
-		return -1;
-	}
-
-	return 0;
-}
-
-static int
-lio_dev_configure(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-	uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
-	int retval, num_iqueues, num_oqueues;
-	uint8_t mac[RTE_ETHER_ADDR_LEN], i;
-	struct lio_if_cfg_resp *resp;
-	struct lio_soft_command *sc;
-	union lio_if_cfg if_cfg;
-	uint32_t resp_size;
-
-	PMD_INIT_FUNC_TRACE();
-
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
-		eth_dev->data->dev_conf.rxmode.offloads |=
-			RTE_ETH_RX_OFFLOAD_RSS_HASH;
-
-	/* Inform firmware about change in number of queues to use.
-	 * Disable IO queues and reset registers for re-configuration.
-	 */
-	if (lio_dev->port_configured)
-		return lio_reconf_queues(eth_dev,
-					 eth_dev->data->nb_tx_queues,
-					 eth_dev->data->nb_rx_queues);
-
-	lio_dev->nb_rx_queues = eth_dev->data->nb_rx_queues;
-	lio_dev->nb_tx_queues = eth_dev->data->nb_tx_queues;
-
-	/* Set max number of queues which can be re-configured. */
-	lio_dev->max_rx_queues = eth_dev->data->nb_rx_queues;
-	lio_dev->max_tx_queues = eth_dev->data->nb_tx_queues;
-
-	resp_size = sizeof(struct lio_if_cfg_resp);
-	sc = lio_alloc_soft_command(lio_dev, 0, resp_size, 0);
-	if (sc == NULL)
-		return -ENOMEM;
-
-	resp = (struct lio_if_cfg_resp *)sc->virtrptr;
-
-	/* Firmware doesn't have the capability to reconfigure the queues;
-	 * claim all queues and use as many as required.
-	 */
-	if_cfg.if_cfg64 = 0;
-	if_cfg.s.num_iqueues = lio_dev->nb_tx_queues;
-	if_cfg.s.num_oqueues = lio_dev->nb_rx_queues;
-	if_cfg.s.base_queue = 0;
-
-	if_cfg.s.gmx_port_id = lio_dev->pf_num;
-
-	lio_prepare_soft_command(lio_dev, sc, LIO_OPCODE,
-				 LIO_OPCODE_IF_CFG, 0,
-				 if_cfg.if_cfg64, 0);
-
-	/* Setting wait time in seconds */
-	sc->wait_time = LIO_MAX_CMD_TIMEOUT / 1000;
-
-	retval = lio_send_soft_command(lio_dev, sc);
-	if (retval == LIO_IQ_SEND_FAILED) {
-		lio_dev_err(lio_dev, "iq/oq config failed status: %x\n",
-			    retval);
-		/* Soft instr is freed by driver in case of failure. */
-		goto nic_config_fail;
-	}
-
-	/* Sleep on a wait queue till the cond flag indicates that the
-	 * response arrived or timed-out.
-	 */
-	while ((*sc->status_word == LIO_COMPLETION_WORD_INIT) && --timeout) {
-		lio_flush_iq(lio_dev, lio_dev->instr_queue[sc->iq_no]);
-		lio_process_ordered_list(lio_dev);
-		rte_delay_ms(1);
-	}
-
-	retval = resp->status;
-	if (retval) {
-		lio_dev_err(lio_dev, "iq/oq config failed\n");
-		goto nic_config_fail;
-	}
-
-	strlcpy(lio_dev->firmware_version,
-		resp->cfg_info.lio_firmware_version, LIO_FW_VERSION_LENGTH);
-
-	lio_swap_8B_data((uint64_t *)(&resp->cfg_info),
-			 sizeof(struct octeon_if_cfg_info) >> 3);
-
-	num_iqueues = lio_hweight64(resp->cfg_info.iqmask);
-	num_oqueues = lio_hweight64(resp->cfg_info.oqmask);
-
-	if (!(num_iqueues) || !(num_oqueues)) {
-		lio_dev_err(lio_dev,
-			    "Got bad iqueues (%016lx) or oqueues (%016lx) from firmware.\n",
-			    (unsigned long)resp->cfg_info.iqmask,
-			    (unsigned long)resp->cfg_info.oqmask);
-		goto nic_config_fail;
-	}
-
-	lio_dev_dbg(lio_dev,
-		    "interface %d, iqmask %016lx, oqmask %016lx, numiqueues %d, numoqueues %d\n",
-		    eth_dev->data->port_id,
-		    (unsigned long)resp->cfg_info.iqmask,
-		    (unsigned long)resp->cfg_info.oqmask,
-		    num_iqueues, num_oqueues);
-
-	lio_dev->linfo.num_rxpciq = num_oqueues;
-	lio_dev->linfo.num_txpciq = num_iqueues;
-
-	for (i = 0; i < num_oqueues; i++) {
-		lio_dev->linfo.rxpciq[i].rxpciq64 =
-		    resp->cfg_info.linfo.rxpciq[i].rxpciq64;
-		lio_dev_dbg(lio_dev, "index %d OQ %d\n",
-			    i, lio_dev->linfo.rxpciq[i].s.q_no);
-	}
-
-	for (i = 0; i < num_iqueues; i++) {
-		lio_dev->linfo.txpciq[i].txpciq64 =
-		    resp->cfg_info.linfo.txpciq[i].txpciq64;
-		lio_dev_dbg(lio_dev, "index %d IQ %d\n",
-			    i, lio_dev->linfo.txpciq[i].s.q_no);
-	}
-
-	lio_dev->linfo.hw_addr = resp->cfg_info.linfo.hw_addr;
-	lio_dev->linfo.gmxport = resp->cfg_info.linfo.gmxport;
-	lio_dev->linfo.link.link_status64 =
-			resp->cfg_info.linfo.link.link_status64;
-
-	/* 64-bit swap required on LE machines */
-	lio_swap_8B_data(&lio_dev->linfo.hw_addr, 1);
-	for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
-		mac[i] = *((uint8_t *)(((uint8_t *)&lio_dev->linfo.hw_addr) +
-				       2 + i));
-
-	/* Copy the permanent MAC address */
-	rte_ether_addr_copy((struct rte_ether_addr *)mac,
-			&eth_dev->data->mac_addrs[0]);
-
-	/* enable firmware checksum support for tunnel packets */
-	lio_enable_hw_tunnel_rx_checksum(eth_dev);
-	lio_enable_hw_tunnel_tx_checksum(eth_dev);
-
-	lio_dev->glist_lock =
-	    rte_zmalloc(NULL, sizeof(*lio_dev->glist_lock) * num_iqueues, 0);
-	if (lio_dev->glist_lock == NULL)
-		return -ENOMEM;
-
-	lio_dev->glist_head =
-		rte_zmalloc(NULL, sizeof(*lio_dev->glist_head) * num_iqueues,
-			    0);
-	if (lio_dev->glist_head == NULL) {
-		rte_free(lio_dev->glist_lock);
-		lio_dev->glist_lock = NULL;
-		return -ENOMEM;
-	}
-
-	lio_dev_link_update(eth_dev, 0);
-
-	lio_dev->port_configured = 1;
-
-	lio_free_soft_command(sc);
-
-	/* Reset ioq regs */
-	lio_dev->fn_list.setup_device_regs(lio_dev);
-
-	/* Free iq_0 used during init */
-	lio_free_instr_queue0(lio_dev);
-
-	return 0;
-
-nic_config_fail:
-	lio_dev_err(lio_dev, "Failed retval %d\n", retval);
-	lio_free_soft_command(sc);
-	lio_free_instr_queue0(lio_dev);
-
-	return -ENODEV;
-}
-
-/* Define our ethernet definitions */
-static const struct eth_dev_ops liovf_eth_dev_ops = {
-	.dev_configure		= lio_dev_configure,
-	.dev_start		= lio_dev_start,
-	.dev_stop		= lio_dev_stop,
-	.dev_set_link_up	= lio_dev_set_link_up,
-	.dev_set_link_down	= lio_dev_set_link_down,
-	.dev_close		= lio_dev_close,
-	.promiscuous_enable	= lio_dev_promiscuous_enable,
-	.promiscuous_disable	= lio_dev_promiscuous_disable,
-	.allmulticast_enable	= lio_dev_allmulticast_enable,
-	.allmulticast_disable	= lio_dev_allmulticast_disable,
-	.link_update		= lio_dev_link_update,
-	.stats_get		= lio_dev_stats_get,
-	.xstats_get		= lio_dev_xstats_get,
-	.xstats_get_names	= lio_dev_xstats_get_names,
-	.stats_reset		= lio_dev_stats_reset,
-	.xstats_reset		= lio_dev_xstats_reset,
-	.dev_infos_get		= lio_dev_info_get,
-	.vlan_filter_set	= lio_dev_vlan_filter_set,
-	.rx_queue_setup		= lio_dev_rx_queue_setup,
-	.rx_queue_release	= lio_dev_rx_queue_release,
-	.tx_queue_setup		= lio_dev_tx_queue_setup,
-	.tx_queue_release	= lio_dev_tx_queue_release,
-	.reta_update		= lio_dev_rss_reta_update,
-	.reta_query		= lio_dev_rss_reta_query,
-	.rss_hash_conf_get	= lio_dev_rss_hash_conf_get,
-	.rss_hash_update	= lio_dev_rss_hash_update,
-	.udp_tunnel_port_add	= lio_dev_udp_tunnel_add,
-	.udp_tunnel_port_del	= lio_dev_udp_tunnel_del,
-	.mtu_set		= lio_dev_mtu_set,
-};
-
-static void
-lio_check_pf_hs_response(void *lio_dev)
-{
-	struct lio_device *dev = lio_dev;
-
-	/* check till response arrives */
-	if (dev->pfvf_hsword.coproc_tics_per_us)
-		return;
-
-	cn23xx_vf_handle_mbox(dev);
-
-	rte_eal_alarm_set(1, lio_check_pf_hs_response, lio_dev);
-}
-
-/**
- * \brief Identify the LIO device and map the BAR address space
- * @param lio_dev lio device
- */
-static int
-lio_chip_specific_setup(struct lio_device *lio_dev)
-{
-	struct rte_pci_device *pdev = lio_dev->pci_dev;
-	uint32_t dev_id = pdev->id.device_id;
-	const char *s;
-	int ret = 1;
-
-	switch (dev_id) {
-	case LIO_CN23XX_VF_VID:
-		lio_dev->chip_id = LIO_CN23XX_VF_VID;
-		ret = cn23xx_vf_setup_device(lio_dev);
-		s = "CN23XX VF";
-		break;
-	default:
-		s = "?";
-		lio_dev_err(lio_dev, "Unsupported Chip\n");
-	}
-
-	if (!ret)
-		lio_dev_info(lio_dev, "DEVICE : %s\n", s);
-
-	return ret;
-}
-
-static int
-lio_first_time_init(struct lio_device *lio_dev,
-		    struct rte_pci_device *pdev)
-{
-	int dpdk_queues;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* set dpdk specific pci device pointer */
-	lio_dev->pci_dev = pdev;
-
-	/* Identify the LIO type and set device ops */
-	if (lio_chip_specific_setup(lio_dev)) {
-		lio_dev_err(lio_dev, "Chip specific setup failed\n");
-		return -1;
-	}
-
-	/* Initialize soft command buffer pool */
-	if (lio_setup_sc_buffer_pool(lio_dev)) {
-		lio_dev_err(lio_dev, "sc buffer pool allocation failed\n");
-		return -1;
-	}
-
-	/* Initialize lists to manage the requests of different types that
-	 * arrive from applications for this lio device.
-	 */
-	lio_setup_response_list(lio_dev);
-
-	if (lio_dev->fn_list.setup_mbox(lio_dev)) {
-		lio_dev_err(lio_dev, "Mailbox setup failed\n");
-		goto error;
-	}
-
-	/* Check PF response */
-	lio_check_pf_hs_response((void *)lio_dev);
-
-	/* Do handshake and exit if incompatible PF driver */
-	if (cn23xx_pfvf_handshake(lio_dev))
-		goto error;
-
-	/* Request and wait for device reset. */
-	if (pdev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
-		cn23xx_vf_ask_pf_to_do_flr(lio_dev);
-		/* FLR wait time doubled as a precaution. */
-		rte_delay_ms(LIO_PCI_FLR_WAIT * 2);
-	}
-
-	if (lio_dev->fn_list.setup_device_regs(lio_dev)) {
-		lio_dev_err(lio_dev, "Failed to configure device registers\n");
-		goto error;
-	}
-
-	if (lio_setup_instr_queue0(lio_dev)) {
-		lio_dev_err(lio_dev, "Failed to setup instruction queue 0\n");
-		goto error;
-	}
-
-	dpdk_queues = (int)lio_dev->sriov_info.rings_per_vf;
-
-	lio_dev->max_tx_queues = dpdk_queues;
-	lio_dev->max_rx_queues = dpdk_queues;
-
-	/* Enable input and output queues for this device */
-	if (lio_dev->fn_list.enable_io_queues(lio_dev))
-		goto error;
-
-	return 0;
-
-error:
-	lio_free_sc_buffer_pool(lio_dev);
-	if (lio_dev->mbox[0])
-		lio_dev->fn_list.free_mbox(lio_dev);
-	if (lio_dev->instr_queue[0])
-		lio_free_instr_queue0(lio_dev);
-
-	return -1;
-}
-
-static int
-lio_eth_dev_uninit(struct rte_eth_dev *eth_dev)
-{
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	PMD_INIT_FUNC_TRACE();
-
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
-		return 0;
-
-	/* lio_free_sc_buffer_pool */
-	lio_free_sc_buffer_pool(lio_dev);
-
-	return 0;
-}
-
-static int
-lio_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
-	struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
-	struct lio_device *lio_dev = LIO_DEV(eth_dev);
-
-	PMD_INIT_FUNC_TRACE();
-
-	eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
-	eth_dev->tx_pkt_burst = &lio_dev_xmit_pkts;
-
-	/* Primary does the initialization. */
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
-		return 0;
-
-	rte_eth_copy_pci_info(eth_dev, pdev);
-
-	if (pdev->mem_resource[0].addr) {
-		lio_dev->hw_addr = pdev->mem_resource[0].addr;
-	} else {
-		PMD_INIT_LOG(ERR, "ERROR: Failed to map BAR0\n");
-		return -ENODEV;
-	}
-
-	lio_dev->eth_dev = eth_dev;
-	/* set lio device print string */
-	snprintf(lio_dev->dev_string, sizeof(lio_dev->dev_string),
-		 "%s[%02x:%02x.%x]", pdev->driver->driver.name,
-		 pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
-
-	lio_dev->port_id = eth_dev->data->port_id;
-
-	if (lio_first_time_init(lio_dev, pdev)) {
-		lio_dev_err(lio_dev, "Device init failed\n");
-		return -EINVAL;
-	}
-
-	eth_dev->dev_ops = &liovf_eth_dev_ops;
-	eth_dev->data->mac_addrs = rte_zmalloc("lio", RTE_ETHER_ADDR_LEN, 0);
-	if (eth_dev->data->mac_addrs == NULL) {
-		lio_dev_err(lio_dev,
-			    "MAC addresses memory allocation failed\n");
-		eth_dev->dev_ops = NULL;
-		eth_dev->rx_pkt_burst = NULL;
-		eth_dev->tx_pkt_burst = NULL;
-		return -ENOMEM;
-	}
-
-	rte_atomic64_set(&lio_dev->status, LIO_DEV_RUNNING);
-	rte_wmb();
-
-	lio_dev->port_configured = 0;
-	/* Always allow unicast packets */
-	lio_dev->ifflags |= LIO_IFFLAG_UNICAST;
-
-	return 0;
-}
-
-static int
-lio_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-		      struct rte_pci_device *pci_dev)
-{
-	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct lio_device),
-			lio_eth_dev_init);
-}
-
-static int
-lio_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
-{
-	return rte_eth_dev_pci_generic_remove(pci_dev,
-					      lio_eth_dev_uninit);
-}
-
-/* Set of PCI devices this driver supports */
-static const struct rte_pci_id pci_id_liovf_map[] = {
-	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, LIO_CN23XX_VF_VID) },
-	{ .vendor_id = 0, /* sentinel */ }
-};
-
-static struct rte_pci_driver rte_liovf_pmd = {
-	.id_table	= pci_id_liovf_map,
-	.drv_flags      = RTE_PCI_DRV_NEED_MAPPING,
-	.probe		= lio_eth_dev_pci_probe,
-	.remove		= lio_eth_dev_pci_remove,
-};
-
-RTE_PMD_REGISTER_PCI(net_liovf, rte_liovf_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(net_liovf, pci_id_liovf_map);
-RTE_PMD_REGISTER_KMOD_DEP(net_liovf, "* igb_uio | vfio-pci");
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_init, init, NOTICE);
-RTE_LOG_REGISTER_SUFFIX(lio_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/liquidio/lio_ethdev.h b/drivers/net/liquidio/lio_ethdev.h
deleted file mode 100644
index ece2b03858..0000000000
--- a/drivers/net/liquidio/lio_ethdev.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_ETHDEV_H_
-#define _LIO_ETHDEV_H_
-
-#include <stdint.h>
-
-#include "lio_struct.h"
-
-/* timeout to check link state updates from firmware in us */
-#define LIO_LSC_TIMEOUT		100000 /* 100000us (100ms) */
-#define LIO_MAX_CMD_TIMEOUT     10000 /* 10000ms (10s) */
-
-/* The max frame size with default MTU */
-#define LIO_ETH_MAX_LEN (RTE_ETHER_MTU + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
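With the standard values from rte_ether.h this expands to the classic maximum Ethernet frame size; a quick check of the arithmetic:

	/* RTE_ETHER_MTU (1500) + RTE_ETHER_HDR_LEN (14) + RTE_ETHER_CRC_LEN (4) = 1518 bytes */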
-
-#define LIO_DEV(_eth_dev)		((_eth_dev)->data->dev_private)
-
-/* LIO Response condition variable */
-struct lio_dev_ctrl_cmd {
-	struct rte_eth_dev *eth_dev;
-	uint64_t cond;
-};
-
-enum lio_bus_speed {
-	LIO_LINK_SPEED_UNKNOWN  = 0,
-	LIO_LINK_SPEED_10000    = 10000,
-	LIO_LINK_SPEED_25000    = 25000
-};
-
-struct octeon_if_cfg_info {
-	uint64_t iqmask;	/** mask for IQs enabled for the port */
-	uint64_t oqmask;	/** mask for OQs enabled for the port */
-	struct octeon_link_info linfo; /** initial link information */
-	char lio_firmware_version[LIO_FW_VERSION_LENGTH];
-};
-
-/** Stats for each NIC port in RX direction. */
-struct octeon_rx_stats {
-	/* link-level stats */
-	uint64_t total_rcvd;
-	uint64_t bytes_rcvd;
-	uint64_t total_bcst;
-	uint64_t total_mcst;
-	uint64_t runts;
-	uint64_t ctl_rcvd;
-	uint64_t fifo_err; /* Accounts for over/under-run of buffers */
-	uint64_t dmac_drop;
-	uint64_t fcs_err;
-	uint64_t jabber_err;
-	uint64_t l2_err;
-	uint64_t frame_err;
-
-	/* firmware stats */
-	uint64_t fw_total_rcvd;
-	uint64_t fw_total_fwd;
-	uint64_t fw_total_fwd_bytes;
-	uint64_t fw_err_pko;
-	uint64_t fw_err_link;
-	uint64_t fw_err_drop;
-	uint64_t fw_rx_vxlan;
-	uint64_t fw_rx_vxlan_err;
-
-	/* LRO */
-	uint64_t fw_lro_pkts;   /* Number of packets that are LROed */
-	uint64_t fw_lro_octs;   /* Number of octets that are LROed */
-	uint64_t fw_total_lro;  /* Number of LRO packets formed */
-	uint64_t fw_lro_aborts; /* Number of times LRO of a packet was aborted */
-	uint64_t fw_lro_aborts_port;
-	uint64_t fw_lro_aborts_seq;
-	uint64_t fw_lro_aborts_tsval;
-	uint64_t fw_lro_aborts_timer;
-	/* intrmod: packet forward rate */
-	uint64_t fwd_rate;
-};
-
-/** Stats for each NIC port in TX direction. */
-struct octeon_tx_stats {
-	/* link-level stats */
-	uint64_t total_pkts_sent;
-	uint64_t total_bytes_sent;
-	uint64_t mcast_pkts_sent;
-	uint64_t bcast_pkts_sent;
-	uint64_t ctl_sent;
-	uint64_t one_collision_sent;	/* Packets sent after one collision */
-	/* Packets sent after multiple collision */
-	uint64_t multi_collision_sent;
-	/* Packets not sent due to max collisions */
-	uint64_t max_collision_fail;
-	/* Packets not sent due to max deferrals */
-	uint64_t max_deferral_fail;
-	/* Accounts for over/under-run of buffers */
-	uint64_t fifo_err;
-	uint64_t runts;
-	uint64_t total_collisions; /* Total number of collisions detected */
-
-	/* firmware stats */
-	uint64_t fw_total_sent;
-	uint64_t fw_total_fwd;
-	uint64_t fw_total_fwd_bytes;
-	uint64_t fw_err_pko;
-	uint64_t fw_err_link;
-	uint64_t fw_err_drop;
-	uint64_t fw_err_tso;
-	uint64_t fw_tso;     /* number of tso requests */
-	uint64_t fw_tso_fwd; /* number of packets segmented in tso */
-	uint64_t fw_tx_vxlan;
-};
-
-struct octeon_link_stats {
-	struct octeon_rx_stats fromwire;
-	struct octeon_tx_stats fromhost;
-};
-
-union lio_if_cfg {
-	uint64_t if_cfg64;
-	struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		uint64_t base_queue : 16;
-		uint64_t num_iqueues : 16;
-		uint64_t num_oqueues : 16;
-		uint64_t gmx_port_id : 8;
-		uint64_t vf_id : 8;
-#else
-		uint64_t vf_id : 8;
-		uint64_t gmx_port_id : 8;
-		uint64_t num_oqueues : 16;
-		uint64_t num_iqueues : 16;
-		uint64_t base_queue : 16;
-#endif
-	} s;
-};
-
-struct lio_if_cfg_resp {
-	uint64_t rh;
-	struct octeon_if_cfg_info cfg_info;
-	uint64_t status;
-};
-
-struct lio_link_stats_resp {
-	uint64_t rh;
-	struct octeon_link_stats link_stats;
-	uint64_t status;
-};
-
-struct lio_link_status_resp {
-	uint64_t rh;
-	struct octeon_link_info link_info;
-	uint64_t status;
-};
-
-struct lio_rss_set {
-	struct param {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-		uint64_t flags : 16;
-		uint64_t hashinfo : 32;
-		uint64_t itablesize : 16;
-		uint64_t hashkeysize : 16;
-		uint64_t reserved : 48;
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		uint64_t itablesize : 16;
-		uint64_t hashinfo : 32;
-		uint64_t flags : 16;
-		uint64_t reserved : 48;
-		uint64_t hashkeysize : 16;
-#endif
-	} param;
-
-	uint8_t itable[LIO_RSS_MAX_TABLE_SZ];
-	uint8_t key[LIO_RSS_MAX_KEY_SZ];
-};
-
-void lio_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-void lio_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t q_no);
-
-#endif	/* _LIO_ETHDEV_H_ */
diff --git a/drivers/net/liquidio/lio_logs.h b/drivers/net/liquidio/lio_logs.h
deleted file mode 100644
index f227827081..0000000000
--- a/drivers/net/liquidio/lio_logs.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_LOGS_H_
-#define _LIO_LOGS_H_
-
-extern int lio_logtype_driver;
-#define lio_dev_printf(lio_dev, level, fmt, args...)		\
-	rte_log(RTE_LOG_ ## level, lio_logtype_driver,		\
-		"%s" fmt, (lio_dev)->dev_string, ##args)
-
-#define lio_dev_info(lio_dev, fmt, args...)				\
-	lio_dev_printf(lio_dev, INFO, "INFO: " fmt, ##args)
-
-#define lio_dev_err(lio_dev, fmt, args...)				\
-	lio_dev_printf(lio_dev, ERR, "ERROR: %s() " fmt, __func__, ##args)
-
-extern int lio_logtype_init;
-#define PMD_INIT_LOG(level, fmt, args...) \
-	rte_log(RTE_LOG_ ## level, lio_logtype_init, \
-		fmt, ## args)
-
-/* Enable these through config options */
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, "%s() >>\n", __func__)
-
-#define lio_dev_dbg(lio_dev, fmt, args...)				\
-	lio_dev_printf(lio_dev, DEBUG, "DEBUG: %s() " fmt, __func__, ##args)
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_RX
-#define PMD_RX_LOG(lio_dev, level, fmt, args...)			\
-	lio_dev_printf(lio_dev, level, "RX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_RX */
-#define PMD_RX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_RX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_TX
-#define PMD_TX_LOG(lio_dev, level, fmt, args...)			\
-	lio_dev_printf(lio_dev, level, "TX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_TX */
-#define PMD_TX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_TX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_MBOX
-#define PMD_MBOX_LOG(lio_dev, level, fmt, args...)			\
-	lio_dev_printf(lio_dev, level, "MBOX: %s() " fmt, __func__, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_MBOX */
-#define PMD_MBOX_LOG(lio_dev, level, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_MBOX */
-
-#ifdef RTE_LIBRTE_LIO_DEBUG_REGS
-#define PMD_REGS_LOG(lio_dev, fmt, args...)				\
-	lio_dev_printf(lio_dev, DEBUG, "REGS: " fmt, ##args)
-#else /* !RTE_LIBRTE_LIO_DEBUG_REGS */
-#define PMD_REGS_LOG(lio_dev, fmt, args...) do { } while (0)
-#endif /* RTE_LIBRTE_LIO_DEBUG_REGS */
-
-#endif  /* _LIO_LOGS_H_ */
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
deleted file mode 100644
index e09798ddd7..0000000000
--- a/drivers/net/liquidio/lio_rxtx.c
+++ /dev/null
@@ -1,1804 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#include <ethdev_driver.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "lio_logs.h"
-#include "lio_struct.h"
-#include "lio_ethdev.h"
-#include "lio_rxtx.h"
-
-#define LIO_MAX_SG 12
-/* Flush iq if available tx_desc fall below LIO_FLUSH_WM */
-#define LIO_FLUSH_WM(_iq) ((_iq)->nb_desc / 2)
-#define LIO_PKT_IN_DONE_CNT_MASK 0x00000000FFFFFFFFULL
-
-static void
-lio_droq_compute_max_packet_bufs(struct lio_droq *droq)
-{
-	uint32_t count = 0;
-
-	do {
-		count += droq->buffer_size;
-	} while (count < LIO_MAX_RX_PKTLEN);
-}
-
-static void
-lio_droq_reset_indices(struct lio_droq *droq)
-{
-	droq->read_idx	= 0;
-	droq->write_idx	= 0;
-	droq->refill_idx = 0;
-	droq->refill_count = 0;
-	rte_atomic64_set(&droq->pkts_pending, 0);
-}
-
-static void
-lio_droq_destroy_ring_buffers(struct lio_droq *droq)
-{
-	uint32_t i;
-
-	for (i = 0; i < droq->nb_desc; i++) {
-		if (droq->recv_buf_list[i].buffer) {
-			rte_pktmbuf_free((struct rte_mbuf *)
-					 droq->recv_buf_list[i].buffer);
-			droq->recv_buf_list[i].buffer = NULL;
-		}
-	}
-
-	lio_droq_reset_indices(droq);
-}
-
-static int
-lio_droq_setup_ring_buffers(struct lio_device *lio_dev,
-			    struct lio_droq *droq)
-{
-	struct lio_droq_desc *desc_ring = droq->desc_ring;
-	uint32_t i;
-	void *buf;
-
-	for (i = 0; i < droq->nb_desc; i++) {
-		buf = rte_pktmbuf_alloc(droq->mpool);
-		if (buf == NULL) {
-			lio_dev_err(lio_dev, "buffer alloc failed\n");
-			droq->stats.rx_alloc_failure++;
-			lio_droq_destroy_ring_buffers(droq);
-			return -ENOMEM;
-		}
-
-		droq->recv_buf_list[i].buffer = buf;
-		droq->info_list[i].length = 0;
-
-		/* map ring buffers into memory */
-		desc_ring[i].info_ptr = lio_map_ring_info(droq, i);
-		desc_ring[i].buffer_ptr =
-			lio_map_ring(droq->recv_buf_list[i].buffer);
-	}
-
-	lio_droq_reset_indices(droq);
-
-	lio_droq_compute_max_packet_bufs(droq);
-
-	return 0;
-}
-
-static void
-lio_dma_zone_free(struct lio_device *lio_dev, const struct rte_memzone *mz)
-{
-	const struct rte_memzone *mz_tmp;
-	int ret = 0;
-
-	if (mz == NULL) {
-		lio_dev_err(lio_dev, "Memzone NULL\n");
-		return;
-	}
-
-	mz_tmp = rte_memzone_lookup(mz->name);
-	if (mz_tmp == NULL) {
-		lio_dev_err(lio_dev, "Memzone %s Not Found\n", mz->name);
-		return;
-	}
-
-	ret = rte_memzone_free(mz);
-	if (ret)
-		lio_dev_err(lio_dev, "Memzone free Failed ret %d\n", ret);
-}
-
-/**
- *  Frees the space for descriptor ring for the droq.
- *
- *  @param lio_dev	- pointer to the lio device structure
- *  @param q_no		- droq no.
- */
-static void
-lio_delete_droq(struct lio_device *lio_dev, uint32_t q_no)
-{
-	struct lio_droq *droq = lio_dev->droq[q_no];
-
-	lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
-	lio_droq_destroy_ring_buffers(droq);
-	rte_free(droq->recv_buf_list);
-	droq->recv_buf_list = NULL;
-	lio_dma_zone_free(lio_dev, droq->info_mz);
-	lio_dma_zone_free(lio_dev, droq->desc_ring_mz);
-
-	memset(droq, 0, LIO_DROQ_SIZE);
-}
-
-static void *
-lio_alloc_info_buffer(struct lio_device *lio_dev,
-		      struct lio_droq *droq, unsigned int socket_id)
-{
-	droq->info_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
-						 "info_list", droq->q_no,
-						 (droq->nb_desc *
-							LIO_DROQ_INFO_SIZE),
-						 RTE_CACHE_LINE_SIZE,
-						 socket_id);
-
-	if (droq->info_mz == NULL)
-		return NULL;
-
-	droq->info_list_dma = droq->info_mz->iova;
-	droq->info_alloc_size = droq->info_mz->len;
-	droq->info_base_addr = (size_t)droq->info_mz->addr;
-
-	return droq->info_mz->addr;
-}
-
-/**
- *  Allocates space for the descriptor ring for the droq and
- *  sets the base addr, num desc etc in Octeon registers.
- *
- * @param lio_dev	- pointer to the lio device structure
- * @param q_no		- droq no.
- * @param num_descs	- number of descriptors in the ring
- * @param desc_size	- size of the receive buffer attached to each descriptor
- * @param mpool	- mempool from which receive buffers are allocated
- * @param socket_id	- NUMA socket on which memory is allocated
- * @return Success: 0	Failure: -1
- */
-static int
-lio_init_droq(struct lio_device *lio_dev, uint32_t q_no,
-	      uint32_t num_descs, uint32_t desc_size,
-	      struct rte_mempool *mpool, unsigned int socket_id)
-{
-	uint32_t c_refill_threshold;
-	uint32_t desc_ring_size;
-	struct lio_droq *droq;
-
-	lio_dev_dbg(lio_dev, "OQ[%d]\n", q_no);
-
-	droq = lio_dev->droq[q_no];
-	droq->lio_dev = lio_dev;
-	droq->q_no = q_no;
-	droq->mpool = mpool;
-
-	c_refill_threshold = LIO_OQ_REFILL_THRESHOLD_CFG(lio_dev);
-
-	droq->nb_desc = num_descs;
-	droq->buffer_size = desc_size;
-
-	desc_ring_size = droq->nb_desc * LIO_DROQ_DESC_SIZE;
-	droq->desc_ring_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
-						      "droq", q_no,
-						      desc_ring_size,
-						      RTE_CACHE_LINE_SIZE,
-						      socket_id);
-
-	if (droq->desc_ring_mz == NULL) {
-		lio_dev_err(lio_dev,
-			    "Output queue %d ring alloc failed\n", q_no);
-		return -1;
-	}
-
-	droq->desc_ring_dma = droq->desc_ring_mz->iova;
-	droq->desc_ring = (struct lio_droq_desc *)droq->desc_ring_mz->addr;
-
-	lio_dev_dbg(lio_dev, "droq[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
-		    q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
-	lio_dev_dbg(lio_dev, "droq[%d]: num_desc: %d\n", q_no,
-		    droq->nb_desc);
-
-	droq->info_list = lio_alloc_info_buffer(lio_dev, droq, socket_id);
-	if (droq->info_list == NULL) {
-		lio_dev_err(lio_dev, "Cannot allocate memory for info list.\n");
-		goto init_droq_fail;
-	}
-
-	droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
-						 (droq->nb_desc *
-							LIO_DROQ_RECVBUF_SIZE),
-						 RTE_CACHE_LINE_SIZE,
-						 socket_id);
-	if (droq->recv_buf_list == NULL) {
-		lio_dev_err(lio_dev,
-			    "Output queue recv buf list alloc failed\n");
-		goto init_droq_fail;
-	}
-
-	if (lio_droq_setup_ring_buffers(lio_dev, droq))
-		goto init_droq_fail;
-
-	droq->refill_threshold = c_refill_threshold;
-
-	rte_spinlock_init(&droq->lock);
-
-	lio_dev->fn_list.setup_oq_regs(lio_dev, q_no);
-
-	lio_dev->io_qmask.oq |= (1ULL << q_no);
-
-	return 0;
-
-init_droq_fail:
-	lio_delete_droq(lio_dev, q_no);
-
-	return -1;
-}
-
-int
-lio_setup_droq(struct lio_device *lio_dev, int oq_no, int num_descs,
-	       int desc_size, struct rte_mempool *mpool, unsigned int socket_id)
-{
-	struct lio_droq *droq;
-
-	PMD_INIT_FUNC_TRACE();
-
-	/* Allocate the DS for the new droq. */
-	droq = rte_zmalloc_socket("ethdev RX queue", sizeof(*droq),
-				  RTE_CACHE_LINE_SIZE, socket_id);
-	if (droq == NULL)
-		return -ENOMEM;
-
-	lio_dev->droq[oq_no] = droq;
-
-	/* Initialize the Droq */
-	if (lio_init_droq(lio_dev, oq_no, num_descs, desc_size, mpool,
-			  socket_id)) {
-		lio_dev_err(lio_dev, "Droq[%u] Initialization Failed\n", oq_no);
-		rte_free(lio_dev->droq[oq_no]);
-		lio_dev->droq[oq_no] = NULL;
-		return -ENOMEM;
-	}
-
-	lio_dev->num_oqs++;
-
-	lio_dev_dbg(lio_dev, "Total number of OQ: %d\n", lio_dev->num_oqs);
-
-	/* Send credit for octeon output queues. credits are always
-	 * sent after the output queue is enabled.
-	 */
-	rte_write32(lio_dev->droq[oq_no]->nb_desc,
-		    lio_dev->droq[oq_no]->pkts_credit_reg);
-	rte_wmb();
-
-	return 0;
-}
-
-static inline uint32_t
-lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
-{
-	uint32_t buf_cnt = 0;
-
-	while (total_len > (buf_size * buf_cnt))
-		buf_cnt++;
-
-	return buf_cnt;
-}
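The loop in lio_droq_get_bufcount() is simply a ceiling division, counting how many buf_size-byte receive buffers are needed to hold total_len bytes. A minimal equivalent sketch (hypothetical helper, not part of the driver):

	static inline uint32_t
	droq_bufcount_ceil(uint32_t buf_size, uint32_t total_len)
	{
		/* Round total_len up to a whole number of buffers. */
		return (total_len + buf_size - 1) / buf_size;
	}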
-
-/* If we were not able to refill all buffers, try to move around
- * the buffers that were not dispatched.
- */
-static inline uint32_t
-lio_droq_refill_pullup_descs(struct lio_droq *droq,
-			     struct lio_droq_desc *desc_ring)
-{
-	uint32_t refill_index = droq->refill_idx;
-	uint32_t desc_refilled = 0;
-
-	while (refill_index != droq->read_idx) {
-		if (droq->recv_buf_list[refill_index].buffer) {
-			droq->recv_buf_list[droq->refill_idx].buffer =
-				droq->recv_buf_list[refill_index].buffer;
-			desc_ring[droq->refill_idx].buffer_ptr =
-				desc_ring[refill_index].buffer_ptr;
-			droq->recv_buf_list[refill_index].buffer = NULL;
-			desc_ring[refill_index].buffer_ptr = 0;
-			do {
-				droq->refill_idx = lio_incr_index(
-							droq->refill_idx, 1,
-							droq->nb_desc);
-				desc_refilled++;
-				droq->refill_count--;
-			} while (droq->recv_buf_list[droq->refill_idx].buffer);
-		}
-		refill_index = lio_incr_index(refill_index, 1,
-					      droq->nb_desc);
-	}	/* while */
-
-	return desc_refilled;
-}
-
-/* lio_droq_refill
- *
- * @param droq		- droq in which descriptors require new buffers.
- *
- * Description:
- *  Called during normal DROQ processing in interrupt mode or by the poll
- *  thread to refill the descriptors from which buffers were dispatched
- *  to upper layers. Attempts to allocate new buffers. If that fails, moves
- *  up buffers (that were not dispatched) to form a contiguous ring.
- *
- * Returns:
- *  Number of descriptors refilled.
- *
- * Locks:
- * This routine is called with droq->lock held.
- */
-static uint32_t
-lio_droq_refill(struct lio_droq *droq)
-{
-	struct lio_droq_desc *desc_ring;
-	uint32_t desc_refilled = 0;
-	void *buf = NULL;
-
-	desc_ring = droq->desc_ring;
-
-	while (droq->refill_count && (desc_refilled < droq->nb_desc)) {
-		/* If a valid buffer exists (happens if there is no dispatch),
-		 * reuse the buffer, else allocate.
-		 */
-		if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
-			buf = rte_pktmbuf_alloc(droq->mpool);
-			/* If a buffer could not be allocated, no point in
-			 * continuing
-			 */
-			if (buf == NULL) {
-				droq->stats.rx_alloc_failure++;
-				break;
-			}
-
-			droq->recv_buf_list[droq->refill_idx].buffer = buf;
-		}
-
-		desc_ring[droq->refill_idx].buffer_ptr =
-		    lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
-		/* Reset any previous values in the length field. */
-		droq->info_list[droq->refill_idx].length = 0;
-
-		droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
-						  droq->nb_desc);
-		desc_refilled++;
-		droq->refill_count--;
-	}
-
-	if (droq->refill_count)
-		desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
-
-	/* If droq->refill_count is still non-zero here, it will not change in
-	 * the pull-up pass above: we only moved buffers to close the gap in
-	 * the ring, so the same number of buffers still remains to be refilled.
-	 */
-	return desc_refilled;
-}
-
-static int
-lio_droq_fast_process_packet(struct lio_device *lio_dev,
-			     struct lio_droq *droq,
-			     struct rte_mbuf **rx_pkts)
-{
-	struct rte_mbuf *nicbuf = NULL;
-	struct lio_droq_info *info;
-	uint32_t total_len = 0;
-	int data_total_len = 0;
-	uint32_t pkt_len = 0;
-	union octeon_rh *rh;
-	int data_pkts = 0;
-
-	info = &droq->info_list[droq->read_idx];
-	lio_swap_8B_data((uint64_t *)info, 2);
-
-	if (!info->length)
-		return -1;
-
-	/* Length of the response header is included in the received data length. */
-	info->length -= OCTEON_RH_SIZE;
-	rh = &info->rh;
-
-	total_len += (uint32_t)info->length;
-
-	if (lio_opcode_slow_path(rh)) {
-		uint32_t buf_cnt;
-
-		buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
-						(uint32_t)info->length);
-		droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
-						droq->nb_desc);
-		droq->refill_count += buf_cnt;
-	} else {
-		if (info->length <= droq->buffer_size) {
-			if (rh->r_dh.has_hash)
-				pkt_len = (uint32_t)(info->length - 8);
-			else
-				pkt_len = (uint32_t)info->length;
-
-			nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
-			droq->recv_buf_list[droq->read_idx].buffer = NULL;
-			droq->read_idx = lio_incr_index(
-						droq->read_idx, 1,
-						droq->nb_desc);
-			droq->refill_count++;
-
-			if (likely(nicbuf != NULL)) {
-				/* We don't have a way to pass flags yet */
-				nicbuf->ol_flags = 0;
-				if (rh->r_dh.has_hash) {
-					uint64_t *hash_ptr;
-
-					nicbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
-					hash_ptr = rte_pktmbuf_mtod(nicbuf,
-								    uint64_t *);
-					lio_swap_8B_data(hash_ptr, 1);
-					nicbuf->hash.rss = (uint32_t)*hash_ptr;
-					nicbuf->data_off += 8;
-				}
-
-				nicbuf->pkt_len = pkt_len;
-				nicbuf->data_len = pkt_len;
-				nicbuf->port = lio_dev->port_id;
-				/* Store the mbuf */
-				rx_pkts[data_pkts++] = nicbuf;
-				data_total_len += pkt_len;
-			}
-
-			/* Prefetch buffer pointers when on a cache line
-			 * boundary
-			 */
-			if ((droq->read_idx & 3) == 0) {
-				rte_prefetch0(
-				    &droq->recv_buf_list[droq->read_idx]);
-				rte_prefetch0(
-				    &droq->info_list[droq->read_idx]);
-			}
-		} else {
-			struct rte_mbuf *first_buf = NULL;
-			struct rte_mbuf *last_buf = NULL;
-
-			while (pkt_len < info->length) {
-				int cpy_len = 0;
-
-				cpy_len = ((pkt_len + droq->buffer_size) >
-						info->length)
-						? ((uint32_t)info->length -
-							pkt_len)
-						: droq->buffer_size;
-
-				nicbuf =
-				    droq->recv_buf_list[droq->read_idx].buffer;
-				droq->recv_buf_list[droq->read_idx].buffer =
-				    NULL;
-
-				if (likely(nicbuf != NULL)) {
-					/* Note the first seg */
-					if (!pkt_len)
-						first_buf = nicbuf;
-
-					nicbuf->port = lio_dev->port_id;
-					/* We don't have a way to pass
-					 * flags yet
-					 */
-					nicbuf->ol_flags = 0;
-					if ((!pkt_len) && (rh->r_dh.has_hash)) {
-						uint64_t *hash_ptr;
-
-						nicbuf->ol_flags |=
-						    RTE_MBUF_F_RX_RSS_HASH;
-						hash_ptr = rte_pktmbuf_mtod(
-						    nicbuf, uint64_t *);
-						lio_swap_8B_data(hash_ptr, 1);
-						nicbuf->hash.rss =
-						    (uint32_t)*hash_ptr;
-						nicbuf->data_off += 8;
-						nicbuf->pkt_len = cpy_len - 8;
-						nicbuf->data_len = cpy_len - 8;
-					} else {
-						nicbuf->pkt_len = cpy_len;
-						nicbuf->data_len = cpy_len;
-					}
-
-					if (pkt_len)
-						first_buf->nb_segs++;
-
-					if (last_buf)
-						last_buf->next = nicbuf;
-
-					last_buf = nicbuf;
-				} else {
-					PMD_RX_LOG(lio_dev, ERR, "no buf\n");
-				}
-
-				pkt_len += cpy_len;
-				droq->read_idx = lio_incr_index(
-							droq->read_idx,
-							1, droq->nb_desc);
-				droq->refill_count++;
-
-				/* Prefetch buffer pointers when on a
-				 * cache line boundary
-				 */
-				if ((droq->read_idx & 3) == 0) {
-					rte_prefetch0(&droq->recv_buf_list
-							      [droq->read_idx]);
-
-					rte_prefetch0(
-					    &droq->info_list[droq->read_idx]);
-				}
-			}
-			rx_pkts[data_pkts++] = first_buf;
-			if (rh->r_dh.has_hash)
-				data_total_len += (pkt_len - 8);
-			else
-				data_total_len += pkt_len;
-		}
-
-		/* Inform upper layer about packet checksum verification */
-		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
-
-		if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
-			m->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
-
-		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
-			m->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
-	}
-
-	if (droq->refill_count >= droq->refill_threshold) {
-		int desc_refilled = lio_droq_refill(droq);
-
-		/* Flush the droq descriptor data to memory to be sure
-		 * that when we update the credits the data in memory is
-		 * accurate.
-		 */
-		rte_wmb();
-		rte_write32(desc_refilled, droq->pkts_credit_reg);
-		/* make sure mmio write completes */
-		rte_wmb();
-	}
-
-	info->length = 0;
-	info->rh.rh64 = 0;
-
-	droq->stats.pkts_received++;
-	droq->stats.rx_pkts_received += data_pkts;
-	droq->stats.rx_bytes_received += data_total_len;
-	droq->stats.bytes_received += total_len;
-
-	return data_pkts;
-}
-
-static uint32_t
-lio_droq_fast_process_packets(struct lio_device *lio_dev,
-			      struct lio_droq *droq,
-			      struct rte_mbuf **rx_pkts,
-			      uint32_t pkts_to_process)
-{
-	int ret, data_pkts = 0;
-	uint32_t pkt;
-
-	for (pkt = 0; pkt < pkts_to_process; pkt++) {
-		ret = lio_droq_fast_process_packet(lio_dev, droq,
-						   &rx_pkts[data_pkts]);
-		if (ret < 0) {
-			lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
-				    lio_dev->port_id, droq->q_no,
-				    droq->read_idx, pkts_to_process);
-			break;
-		}
-		data_pkts += ret;
-	}
-
-	rte_atomic64_sub(&droq->pkts_pending, pkt);
-
-	return data_pkts;
-}
-
-static inline uint32_t
-lio_droq_check_hw_for_pkts(struct lio_droq *droq)
-{
-	uint32_t last_count;
-	uint32_t pkt_count;
-
-	pkt_count = rte_read32(droq->pkts_sent_reg);
-
-	last_count = pkt_count - droq->pkt_count;
-	droq->pkt_count = pkt_count;
-
-	if (last_count)
-		rte_atomic64_add(&droq->pkts_pending, last_count);
-
-	return last_count;
-}
-
-uint16_t
-lio_dev_recv_pkts(void *rx_queue,
-		  struct rte_mbuf **rx_pkts,
-		  uint16_t budget)
-{
-	struct lio_droq *droq = rx_queue;
-	struct lio_device *lio_dev = droq->lio_dev;
-	uint32_t pkts_processed = 0;
-	uint32_t pkt_count = 0;
-
-	lio_droq_check_hw_for_pkts(droq);
-
-	pkt_count = rte_atomic64_read(&droq->pkts_pending);
-	if (!pkt_count)
-		return 0;
-
-	if (pkt_count > budget)
-		pkt_count = budget;
-
-	/* Grab the lock */
-	rte_spinlock_lock(&droq->lock);
-	pkts_processed = lio_droq_fast_process_packets(lio_dev,
-						       droq, rx_pkts,
-						       pkt_count);
-
-	if (droq->pkt_count) {
-		rte_write32(droq->pkt_count, droq->pkts_sent_reg);
-		droq->pkt_count = 0;
-	}
-
-	/* Release the spin lock */
-	rte_spinlock_unlock(&droq->lock);
-
-	return pkts_processed;
-}
-
-void
-lio_delete_droq_queue(struct lio_device *lio_dev,
-		      int oq_no)
-{
-	lio_delete_droq(lio_dev, oq_no);
-	lio_dev->num_oqs--;
-	rte_free(lio_dev->droq[oq_no]);
-	lio_dev->droq[oq_no] = NULL;
-}
-
-/**
- *  lio_init_instr_queue()
- *  @param lio_dev	- pointer to the lio device structure.
- *  @param txpciq	- queue to be initialized.
- *
- *  Called at driver init time for each input queue. num_descs and
- *  socket_id supply the configuration parameters for the queue.
- *
- *  @return  Success: 0	Failure: -1
- */
-static int
-lio_init_instr_queue(struct lio_device *lio_dev,
-		     union octeon_txpciq txpciq,
-		     uint32_t num_descs, unsigned int socket_id)
-{
-	uint32_t iq_no = (uint32_t)txpciq.s.q_no;
-	struct lio_instr_queue *iq;
-	uint32_t instr_type;
-	uint32_t q_size;
-
-	instr_type = LIO_IQ_INSTR_TYPE(lio_dev);
-
-	q_size = instr_type * num_descs;
-	iq = lio_dev->instr_queue[iq_no];
-	iq->iq_mz = rte_eth_dma_zone_reserve(lio_dev->eth_dev,
-					     "instr_queue", iq_no, q_size,
-					     RTE_CACHE_LINE_SIZE,
-					     socket_id);
-	if (iq->iq_mz == NULL) {
-		lio_dev_err(lio_dev, "Cannot allocate memory for instr queue %d\n",
-			    iq_no);
-		return -1;
-	}
-
-	iq->base_addr_dma = iq->iq_mz->iova;
-	iq->base_addr = (uint8_t *)iq->iq_mz->addr;
-
-	iq->nb_desc = num_descs;
-
-	/* Initialize a list to hold requests that have been posted to Octeon
-	 * but have yet to be fetched by Octeon.
-	 */
-	iq->request_list = rte_zmalloc_socket("request_list",
-					      sizeof(*iq->request_list) *
-							num_descs,
-					      RTE_CACHE_LINE_SIZE,
-					      socket_id);
-	if (iq->request_list == NULL) {
-		lio_dev_err(lio_dev, "Alloc failed for IQ[%d] nr free list\n",
-			    iq_no);
-		lio_dma_zone_free(lio_dev, iq->iq_mz);
-		return -1;
-	}
-
-	lio_dev_dbg(lio_dev, "IQ[%d]: base: %p basedma: %lx count: %d\n",
-		    iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
-		    iq->nb_desc);
-
-	iq->lio_dev = lio_dev;
-	iq->txpciq.txpciq64 = txpciq.txpciq64;
-	iq->fill_cnt = 0;
-	iq->host_write_index = 0;
-	iq->lio_read_index = 0;
-	iq->flush_index = 0;
-
-	rte_atomic64_set(&iq->instr_pending, 0);
-
-	/* Initialize the spinlock for this instruction queue */
-	rte_spinlock_init(&iq->lock);
-	rte_spinlock_init(&iq->post_lock);
-
-	rte_atomic64_clear(&iq->iq_flush_running);
-
-	lio_dev->io_qmask.iq |= (1ULL << iq_no);
-
-	/* Set the 32B/64B mode for each input queue */
-	lio_dev->io_qmask.iq64B |= ((instr_type == 64) << iq_no);
-	iq->iqcmd_64B = (instr_type == 64);
-
-	lio_dev->fn_list.setup_iq_regs(lio_dev, iq_no);
-
-	return 0;
-}
-
-int
-lio_setup_instr_queue0(struct lio_device *lio_dev)
-{
-	union octeon_txpciq txpciq;
-	uint32_t num_descs = 0;
-	uint32_t iq_no = 0;
-
-	num_descs = LIO_NUM_DEF_TX_DESCS_CFG(lio_dev);
-
-	lio_dev->num_iqs = 0;
-
-	lio_dev->instr_queue[0] = rte_zmalloc(NULL,
-					sizeof(struct lio_instr_queue), 0);
-	if (lio_dev->instr_queue[0] == NULL)
-		return -ENOMEM;
-
-	lio_dev->instr_queue[0]->q_index = 0;
-	lio_dev->instr_queue[0]->app_ctx = (void *)(size_t)0;
-	txpciq.txpciq64 = 0;
-	txpciq.s.q_no = iq_no;
-	txpciq.s.pkind = lio_dev->pfvf_hsword.pkind;
-	txpciq.s.use_qpg = 0;
-	txpciq.s.qpg = 0;
-	if (lio_init_instr_queue(lio_dev, txpciq, num_descs, SOCKET_ID_ANY)) {
-		rte_free(lio_dev->instr_queue[0]);
-		lio_dev->instr_queue[0] = NULL;
-		return -1;
-	}
-
-	lio_dev->num_iqs++;
-
-	return 0;
-}
-
-/**
- *  lio_delete_instr_queue()
- *  @param lio_dev	- pointer to the lio device structure.
- *  @param iq_no	- queue to be deleted.
- *
- *  Called at driver unload time for each input queue. Deletes all
- *  allocated resources for the input queue.
- */
-static void
-lio_delete_instr_queue(struct lio_device *lio_dev, uint32_t iq_no)
-{
-	struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
-
-	rte_free(iq->request_list);
-	iq->request_list = NULL;
-	lio_dma_zone_free(lio_dev, iq->iq_mz);
-}
-
-void
-lio_free_instr_queue0(struct lio_device *lio_dev)
-{
-	lio_delete_instr_queue(lio_dev, 0);
-	rte_free(lio_dev->instr_queue[0]);
-	lio_dev->instr_queue[0] = NULL;
-	lio_dev->num_iqs--;
-}
-
-/* Return 0 on success, -1 on failure */
-int
-lio_setup_iq(struct lio_device *lio_dev, int q_index,
-	     union octeon_txpciq txpciq, uint32_t num_descs, void *app_ctx,
-	     unsigned int socket_id)
-{
-	uint32_t iq_no = (uint32_t)txpciq.s.q_no;
-
-	lio_dev->instr_queue[iq_no] = rte_zmalloc_socket("ethdev TX queue",
-						sizeof(struct lio_instr_queue),
-						RTE_CACHE_LINE_SIZE, socket_id);
-	if (lio_dev->instr_queue[iq_no] == NULL)
-		return -1;
-
-	lio_dev->instr_queue[iq_no]->q_index = q_index;
-	lio_dev->instr_queue[iq_no]->app_ctx = app_ctx;
-
-	if (lio_init_instr_queue(lio_dev, txpciq, num_descs, socket_id)) {
-		rte_free(lio_dev->instr_queue[iq_no]);
-		lio_dev->instr_queue[iq_no] = NULL;
-		return -1;
-	}
-
-	lio_dev->num_iqs++;
-
-	return 0;
-}
-
-int
-lio_wait_for_instr_fetch(struct lio_device *lio_dev)
-{
-	int pending, instr_cnt;
-	int i, retry = 1000;
-
-	do {
-		instr_cnt = 0;
-
-		for (i = 0; i < LIO_MAX_INSTR_QUEUES(lio_dev); i++) {
-			if (!(lio_dev->io_qmask.iq & (1ULL << i)))
-				continue;
-
-			if (lio_dev->instr_queue[i] == NULL)
-				break;
-
-			pending = rte_atomic64_read(
-			    &lio_dev->instr_queue[i]->instr_pending);
-			if (pending)
-				lio_flush_iq(lio_dev, lio_dev->instr_queue[i]);
-
-			instr_cnt += pending;
-		}
-
-		if (instr_cnt == 0)
-			break;
-
-		rte_delay_ms(1);
-
-	} while (retry-- && instr_cnt);
-
-	return instr_cnt;
-}
-
-static inline void
-lio_ring_doorbell(struct lio_device *lio_dev,
-		  struct lio_instr_queue *iq)
-{
-	if (rte_atomic64_read(&lio_dev->status) == LIO_DEV_RUNNING) {
-		rte_write32(iq->fill_cnt, iq->doorbell_reg);
-		/* make sure doorbell write goes through */
-		rte_wmb();
-		iq->fill_cnt = 0;
-	}
-}
-
-static inline void
-copy_cmd_into_iq(struct lio_instr_queue *iq, uint8_t *cmd)
-{
-	uint8_t *iqptr, cmdsize;
-
-	cmdsize = ((iq->iqcmd_64B) ? 64 : 32);
-	iqptr = iq->base_addr + (cmdsize * iq->host_write_index);
-
-	rte_memcpy(iqptr, cmd, cmdsize);
-}
-
-static inline struct lio_iq_post_status
-post_command2(struct lio_instr_queue *iq, uint8_t *cmd)
-{
-	struct lio_iq_post_status st;
-
-	st.status = LIO_IQ_SEND_OK;
-
-	/* This ensures that the read index does not wrap around to the same
-	 * position if queue gets full before Octeon could fetch any instr.
-	 */
-	if (rte_atomic64_read(&iq->instr_pending) >=
-			(int32_t)(iq->nb_desc - 1)) {
-		st.status = LIO_IQ_SEND_FAILED;
-		st.index = -1;
-		return st;
-	}
-
-	if (rte_atomic64_read(&iq->instr_pending) >=
-			(int32_t)(iq->nb_desc - 2))
-		st.status = LIO_IQ_SEND_STOP;
-
-	copy_cmd_into_iq(iq, cmd);
-
-	/* "index" is returned, host_write_index is modified. */
-	st.index = iq->host_write_index;
-	iq->host_write_index = lio_incr_index(iq->host_write_index, 1,
-					      iq->nb_desc);
-	iq->fill_cnt++;
-
-	/* Flush the command into memory. We need to be sure the data is in
-	 * memory before indicating that the instruction is pending.
-	 */
-	rte_wmb();
-
-	rte_atomic64_inc(&iq->instr_pending);
-
-	return st;
-}
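post_command2() refuses to fill the ring completely, matching the comment above: it stops one slot short of nb_desc so the read index cannot wrap around to the write position if the queue fills before Octeon fetches anything. A worked example with a hypothetical ring size:

	/* nb_desc == 8: posts made while instr_pending is 0..5 return
	 * LIO_IQ_SEND_OK, the post made at instr_pending == 6 is still
	 * accepted but returns LIO_IQ_SEND_STOP, and a post attempted at
	 * instr_pending == 7 is refused with LIO_IQ_SEND_FAILED.
	 */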
-
-static inline void
-lio_add_to_request_list(struct lio_instr_queue *iq,
-			int idx, void *buf, int reqtype)
-{
-	iq->request_list[idx].buf = buf;
-	iq->request_list[idx].reqtype = reqtype;
-}
-
-static inline void
-lio_free_netsgbuf(void *buf)
-{
-	struct lio_buf_free_info *finfo = buf;
-	struct lio_device *lio_dev = finfo->lio_dev;
-	struct rte_mbuf *m = finfo->mbuf;
-	struct lio_gather *g = finfo->g;
-	uint8_t iq = finfo->iq_no;
-
-	/* This will take care of multiple segments also */
-	rte_pktmbuf_free(m);
-
-	rte_spinlock_lock(&lio_dev->glist_lock[iq]);
-	STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq], &g->list, entries);
-	rte_spinlock_unlock(&lio_dev->glist_lock[iq]);
-	rte_free(finfo);
-}
-
-/* Can only run in process context */
-static int
-lio_process_iq_request_list(struct lio_device *lio_dev,
-			    struct lio_instr_queue *iq)
-{
-	struct octeon_instr_irh *irh = NULL;
-	uint32_t old = iq->flush_index;
-	struct lio_soft_command *sc;
-	uint32_t inst_count = 0;
-	int reqtype;
-	void *buf;
-
-	while (old != iq->lio_read_index) {
-		reqtype = iq->request_list[old].reqtype;
-		buf     = iq->request_list[old].buf;
-
-		if (reqtype == LIO_REQTYPE_NONE)
-			goto skip_this;
-
-		switch (reqtype) {
-		case LIO_REQTYPE_NORESP_NET:
-			rte_pktmbuf_free((struct rte_mbuf *)buf);
-			break;
-		case LIO_REQTYPE_NORESP_NET_SG:
-			lio_free_netsgbuf(buf);
-			break;
-		case LIO_REQTYPE_SOFT_COMMAND:
-			sc = buf;
-			irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
-			if (irh->rflag) {
-				/* We are expecting a response from Octeon.
-				 * Add sc to the ordered soft command
-				 * response list; lio_process_ordered_list()
-				 * will process it once the response arrives.
-				 */
-				rte_spinlock_lock(&lio_dev->response_list.lock);
-				rte_atomic64_inc(
-				    &lio_dev->response_list.pending_req_count);
-				STAILQ_INSERT_TAIL(
-					&lio_dev->response_list.head,
-					&sc->node, entries);
-				rte_spinlock_unlock(
-						&lio_dev->response_list.lock);
-			} else {
-				if (sc->callback) {
-					/* This callback must not sleep */
-					sc->callback(LIO_REQUEST_DONE,
-						     sc->callback_arg);
-				}
-			}
-			break;
-		default:
-			lio_dev_err(lio_dev,
-				    "Unknown reqtype: %d buf: %p at idx %d\n",
-				    reqtype, buf, old);
-		}
-
-		iq->request_list[old].buf = NULL;
-		iq->request_list[old].reqtype = 0;
-
-skip_this:
-		inst_count++;
-		old = lio_incr_index(old, 1, iq->nb_desc);
-	}
-
-	iq->flush_index = old;
-
-	return inst_count;
-}
-
-static void
-lio_update_read_index(struct lio_instr_queue *iq)
-{
-	uint32_t pkt_in_done = rte_read32(iq->inst_cnt_reg);
-	uint32_t last_done;
-
-	last_done = pkt_in_done - iq->pkt_in_done;
-	iq->pkt_in_done = pkt_in_done;
-
-	/* Add last_done and modulo with the IQ size to get new index */
-	iq->lio_read_index = (iq->lio_read_index +
-			(uint32_t)(last_done & LIO_PKT_IN_DONE_CNT_MASK)) %
-			iq->nb_desc;
-}
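Because last_done is computed with unsigned 32-bit subtraction, lio_update_read_index() remains correct even when the hardware count in inst_cnt_reg wraps around; a minimal sketch with made-up counter values:

	uint32_t prev = 0xfffffffeu;     /* snapshot taken just before the wrap */
	uint32_t cur  = 0x00000003u;     /* value read after the counter wrapped */
	uint32_t last_done = cur - prev; /* == 5: modular arithmetic absorbs the wrap */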
-
-int
-lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq)
-{
-	uint32_t inst_processed = 0;
-	int tx_done = 1;
-
-	if (rte_atomic64_test_and_set(&iq->iq_flush_running) == 0)
-		return tx_done;
-
-	rte_spinlock_lock(&iq->lock);
-
-	lio_update_read_index(iq);
-
-	do {
-		/* Process any outstanding IQ packets. */
-		if (iq->flush_index == iq->lio_read_index)
-			break;
-
-		inst_processed = lio_process_iq_request_list(lio_dev, iq);
-
-		if (inst_processed) {
-			rte_atomic64_sub(&iq->instr_pending, inst_processed);
-			iq->stats.instr_processed += inst_processed;
-		}
-
-		inst_processed = 0;
-
-	} while (1);
-
-	rte_spinlock_unlock(&iq->lock);
-
-	rte_atomic64_clear(&iq->iq_flush_running);
-
-	return tx_done;
-}
-
-static int
-lio_send_command(struct lio_device *lio_dev, uint32_t iq_no, void *cmd,
-		 void *buf, uint32_t datasize, uint32_t reqtype)
-{
-	struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
-	struct lio_iq_post_status st;
-
-	rte_spinlock_lock(&iq->post_lock);
-
-	st = post_command2(iq, cmd);
-
-	if (st.status != LIO_IQ_SEND_FAILED) {
-		lio_add_to_request_list(iq, st.index, buf, reqtype);
-		LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, bytes_sent,
-					      datasize);
-		LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_posted, 1);
-
-		lio_ring_doorbell(lio_dev, iq);
-	} else {
-		LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, instr_dropped, 1);
-	}
-
-	rte_spinlock_unlock(&iq->post_lock);
-
-	return st.status;
-}
-
-void
-lio_prepare_soft_command(struct lio_device *lio_dev,
-			 struct lio_soft_command *sc, uint8_t opcode,
-			 uint8_t subcode, uint32_t irh_ossp, uint64_t ossp0,
-			 uint64_t ossp1)
-{
-	struct octeon_instr_pki_ih3 *pki_ih3;
-	struct octeon_instr_ih3 *ih3;
-	struct octeon_instr_irh *irh;
-	struct octeon_instr_rdp *rdp;
-
-	RTE_ASSERT(opcode <= 15);
-	RTE_ASSERT(subcode <= 127);
-
-	ih3	  = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
-
-	ih3->pkind = lio_dev->instr_queue[sc->iq_no]->txpciq.s.pkind;
-
-	pki_ih3 = (struct octeon_instr_pki_ih3 *)&sc->cmd.cmd3.pki_ih3;
-
-	pki_ih3->w	= 1;
-	pki_ih3->raw	= 1;
-	pki_ih3->utag	= 1;
-	pki_ih3->uqpg	= lio_dev->instr_queue[sc->iq_no]->txpciq.s.use_qpg;
-	pki_ih3->utt	= 1;
-
-	pki_ih3->tag	= LIO_CONTROL;
-	pki_ih3->tagtype = OCTEON_ATOMIC_TAG;
-	pki_ih3->qpg	= lio_dev->instr_queue[sc->iq_no]->txpciq.s.qpg;
-	pki_ih3->pm	= 0x7;
-	pki_ih3->sl	= 8;
-
-	if (sc->datasize)
-		ih3->dlengsz = sc->datasize;
-
-	irh		= (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
-	irh->opcode	= opcode;
-	irh->subcode	= subcode;
-
-	/* opcode/subcode specific parameters (ossp) */
-	irh->ossp = irh_ossp;
-	sc->cmd.cmd3.ossp[0] = ossp0;
-	sc->cmd.cmd3.ossp[1] = ossp1;
-
-	if (sc->rdatasize) {
-		rdp = (struct octeon_instr_rdp *)&sc->cmd.cmd3.rdp;
-		rdp->pcie_port = lio_dev->pcie_port;
-		rdp->rlen      = sc->rdatasize;
-		irh->rflag = 1;
-		/* PKI IH3 */
-		ih3->fsz    = OCTEON_SOFT_CMD_RESP_IH3;
-	} else {
-		irh->rflag = 0;
-		/* PKI IH3 */
-		ih3->fsz    = OCTEON_PCI_CMD_O3;
-	}
-}
-
-int
-lio_send_soft_command(struct lio_device *lio_dev,
-		      struct lio_soft_command *sc)
-{
-	struct octeon_instr_ih3 *ih3;
-	struct octeon_instr_irh *irh;
-	uint32_t len = 0;
-
-	ih3 = (struct octeon_instr_ih3 *)&sc->cmd.cmd3.ih3;
-	if (ih3->dlengsz) {
-		RTE_ASSERT(sc->dmadptr);
-		sc->cmd.cmd3.dptr = sc->dmadptr;
-	}
-
-	irh = (struct octeon_instr_irh *)&sc->cmd.cmd3.irh;
-	if (irh->rflag) {
-		RTE_ASSERT(sc->dmarptr);
-		RTE_ASSERT(sc->status_word != NULL);
-		*sc->status_word = LIO_COMPLETION_WORD_INIT;
-		sc->cmd.cmd3.rptr = sc->dmarptr;
-	}
-
-	len = (uint32_t)ih3->dlengsz;
-
-	if (sc->wait_time)
-		sc->timeout = lio_uptime + sc->wait_time;
-
-	return lio_send_command(lio_dev, sc->iq_no, &sc->cmd, sc, len,
-				LIO_REQTYPE_SOFT_COMMAND);
-}
-
-int
-lio_setup_sc_buffer_pool(struct lio_device *lio_dev)
-{
-	char sc_pool_name[RTE_MEMPOOL_NAMESIZE];
-	uint16_t buf_size;
-
-	buf_size = LIO_SOFT_COMMAND_BUFFER_SIZE + RTE_PKTMBUF_HEADROOM;
-	snprintf(sc_pool_name, sizeof(sc_pool_name),
-		 "lio_sc_pool_%u", lio_dev->port_id);
-	lio_dev->sc_buf_pool = rte_pktmbuf_pool_create(sc_pool_name,
-						LIO_MAX_SOFT_COMMAND_BUFFERS,
-						0, 0, buf_size, SOCKET_ID_ANY);
-	return 0;
-}
-
-void
-lio_free_sc_buffer_pool(struct lio_device *lio_dev)
-{
-	rte_mempool_free(lio_dev->sc_buf_pool);
-}
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev, uint32_t datasize,
-		       uint32_t rdatasize, uint32_t ctxsize)
-{
-	uint32_t offset = sizeof(struct lio_soft_command);
-	struct lio_soft_command *sc;
-	struct rte_mbuf *m;
-	uint64_t dma_addr;
-
-	RTE_ASSERT((offset + datasize + rdatasize + ctxsize) <=
-		   LIO_SOFT_COMMAND_BUFFER_SIZE);
-
-	m = rte_pktmbuf_alloc(lio_dev->sc_buf_pool);
-	if (m == NULL) {
-		lio_dev_err(lio_dev, "Cannot allocate mbuf for sc\n");
-		return NULL;
-	}
-
-	/* set rte_mbuf data size and there is only 1 segment */
-	m->pkt_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
-	m->data_len = LIO_SOFT_COMMAND_BUFFER_SIZE;
-
-	/* use rte_mbuf buffer for soft command */
-	sc = rte_pktmbuf_mtod(m, struct lio_soft_command *);
-	memset(sc, 0, LIO_SOFT_COMMAND_BUFFER_SIZE);
-	sc->size = LIO_SOFT_COMMAND_BUFFER_SIZE;
-	sc->dma_addr = rte_mbuf_data_iova(m);
-	sc->mbuf = m;
-
-	dma_addr = sc->dma_addr;
-
-	if (ctxsize) {
-		sc->ctxptr = (uint8_t *)sc + offset;
-		sc->ctxsize = ctxsize;
-	}
-
-	/* Start data at 128 byte boundary */
-	offset = (offset + ctxsize + 127) & 0xffffff80;
-
-	if (datasize) {
-		sc->virtdptr = (uint8_t *)sc + offset;
-		sc->dmadptr = dma_addr + offset;
-		sc->datasize = datasize;
-	}
-
-	/* Start rdata at 128 byte boundary */
-	offset = (offset + datasize + 127) & 0xffffff80;
-
-	if (rdatasize) {
-		RTE_ASSERT(rdatasize >= 16);
-		sc->virtrptr = (uint8_t *)sc + offset;
-		sc->dmarptr = dma_addr + offset;
-		sc->rdatasize = rdatasize;
-		sc->status_word = (uint64_t *)((uint8_t *)(sc->virtrptr) +
-					       rdatasize - 8);
-	}
-
-	return sc;
-}
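The mask arithmetic in lio_alloc_soft_command() rounds the running offset (plus the preceding region's size) up to the next 128-byte boundary by adding 127 and clearing the low seven bits, which keeps dmadptr and dmarptr 128-byte aligned. An illustrative helper (hypothetical name, shown only to make the arithmetic explicit):

	static inline uint32_t
	align_up_128(uint32_t offset)
	{
		/* e.g. align_up_128(130) == 256 and align_up_128(256) == 256 */
		return (offset + 127U) & 0xffffff80U;
	}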
-
-void
-lio_free_soft_command(struct lio_soft_command *sc)
-{
-	rte_pktmbuf_free(sc->mbuf);
-}
-
-void
-lio_setup_response_list(struct lio_device *lio_dev)
-{
-	STAILQ_INIT(&lio_dev->response_list.head);
-	rte_spinlock_init(&lio_dev->response_list.lock);
-	rte_atomic64_set(&lio_dev->response_list.pending_req_count, 0);
-}
-
-int
-lio_process_ordered_list(struct lio_device *lio_dev)
-{
-	int resp_to_process = LIO_MAX_ORD_REQS_TO_PROCESS;
-	struct lio_response_list *ordered_sc_list;
-	struct lio_soft_command *sc;
-	int request_complete = 0;
-	uint64_t status64;
-	uint32_t status;
-
-	ordered_sc_list = &lio_dev->response_list;
-
-	do {
-		rte_spinlock_lock(&ordered_sc_list->lock);
-
-		if (STAILQ_EMPTY(&ordered_sc_list->head)) {
-			/* ordered_sc_list is empty; there is
-			 * nothing to process
-			 */
-			rte_spinlock_unlock(&ordered_sc_list->lock);
-			return -1;
-		}
-
-		sc = LIO_STQUEUE_FIRST_ENTRY(&ordered_sc_list->head,
-					     struct lio_soft_command, node);
-
-		status = LIO_REQUEST_PENDING;
-
-		/* Check if Octeon has finished DMA'ing a response
-		 * to the location rptr points to.
-		 */
-		status64 = *sc->status_word;
-
-		if (status64 != LIO_COMPLETION_WORD_INIT) {
-			/* This logic ensures that all 64b have been written.
-			 * 1. check byte 0 for non-FF
-			 * 2. if non-FF, then swap result from BE to host order
-			 * 3. check byte 7 (swapped to 0) for non-FF
-			 * 4. if non-FF, use the low 32-bit status code
-			 * 5. if either byte 0 or byte 7 is FF, don't use status
-			 */
-			if ((status64 & 0xff) != 0xff) {
-				lio_swap_8B_data(&status64, 1);
-				if (((status64 & 0xff) != 0xff)) {
-					/* retrieve 16-bit firmware status */
-					status = (uint32_t)(status64 &
-							    0xffffULL);
-					if (status) {
-						status =
-						LIO_FIRMWARE_STATUS_CODE(
-									status);
-					} else {
-						/* i.e. no error */
-						status = LIO_REQUEST_DONE;
-					}
-				}
-			}
-		} else if ((sc->timeout && lio_check_timeout(lio_uptime,
-							     sc->timeout))) {
-			lio_dev_err(lio_dev,
-				    "cmd failed, timeout (%ld, %ld)\n",
-				    (long)lio_uptime, (long)sc->timeout);
-			status = LIO_REQUEST_TIMEOUT;
-		}
-
-		if (status != LIO_REQUEST_PENDING) {
-			/* we have received a response or we have timed out.
-			 * remove node from linked list
-			 */
-			STAILQ_REMOVE(&ordered_sc_list->head,
-				      &sc->node, lio_stailq_node, entries);
-			rte_atomic64_dec(
-			    &lio_dev->response_list.pending_req_count);
-			rte_spinlock_unlock(&ordered_sc_list->lock);
-
-			if (sc->callback)
-				sc->callback(status, sc->callback_arg);
-
-			request_complete++;
-		} else {
-			/* no response yet */
-			request_complete = 0;
-			rte_spinlock_unlock(&ordered_sc_list->lock);
-		}
-
-		/* If we hit the Max Ordered requests to process every loop,
-		 * we quit and let this function be invoked the next time
-		 * the poll thread runs to process the remaining requests.
-		 * This function can take up the entire CPU if there is
-		 * no upper limit to the requests processed.
-		 */
-		if (request_complete >= resp_to_process)
-			break;
-	} while (request_complete);
-
-	return 0;
-}
-
-static inline struct lio_stailq_node *
-list_delete_first_node(struct lio_stailq_head *head)
-{
-	struct lio_stailq_node *node;
-
-	if (STAILQ_EMPTY(head))
-		node = NULL;
-	else
-		node = STAILQ_FIRST(head);
-
-	if (node)
-		STAILQ_REMOVE(head, node, lio_stailq_node, entries);
-
-	return node;
-}
-
-void
-lio_delete_sglist(struct lio_instr_queue *txq)
-{
-	struct lio_device *lio_dev = txq->lio_dev;
-	int iq_no = txq->q_index;
-	struct lio_gather *g;
-
-	if (lio_dev->glist_head == NULL)
-		return;
-
-	do {
-		g = (struct lio_gather *)list_delete_first_node(
-						&lio_dev->glist_head[iq_no]);
-		if (g) {
-			if (g->sg)
-				rte_free(
-				    (void *)((unsigned long)g->sg - g->adjust));
-			rte_free(g);
-		}
-	} while (g);
-}
-
-/**
- * \brief Setup gather lists
- * @param lio_dev	- pointer to the lio device structure
- */
-int
-lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
-		  int fw_mapped_iq, int num_descs, unsigned int socket_id)
-{
-	struct lio_gather *g;
-	int i;
-
-	rte_spinlock_init(&lio_dev->glist_lock[iq_no]);
-
-	STAILQ_INIT(&lio_dev->glist_head[iq_no]);
-
-	for (i = 0; i < num_descs; i++) {
-		g = rte_zmalloc_socket(NULL, sizeof(*g), RTE_CACHE_LINE_SIZE,
-				       socket_id);
-		if (g == NULL) {
-			lio_dev_err(lio_dev,
-				    "lio_gather memory allocation failed for qno %d\n",
-				    iq_no);
-			break;
-		}
-
-		g->sg_size =
-		    ((ROUNDUP4(LIO_MAX_SG) >> 2) * LIO_SG_ENTRY_SIZE);
-
-		g->sg = rte_zmalloc_socket(NULL, g->sg_size + 8,
-					   RTE_CACHE_LINE_SIZE, socket_id);
-		if (g->sg == NULL) {
-			lio_dev_err(lio_dev,
-				    "sg list memory allocation failed for qno %d\n",
-				    iq_no);
-			rte_free(g);
-			break;
-		}
-
-		/* The gather component should be aligned on 64-bit boundary */
-		if (((unsigned long)g->sg) & 7) {
-			g->adjust = 8 - (((unsigned long)g->sg) & 7);
-			g->sg =
-			    (struct lio_sg_entry *)((unsigned long)g->sg +
-						       g->adjust);
-		}
-
-		STAILQ_INSERT_TAIL(&lio_dev->glist_head[iq_no], &g->list,
-				   entries);
-	}
-
-	if (i != num_descs) {
-		lio_delete_sglist(lio_dev->instr_queue[fw_mapped_iq]);
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-void
-lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no)
-{
-	lio_delete_instr_queue(lio_dev, iq_no);
-	rte_free(lio_dev->instr_queue[iq_no]);
-	lio_dev->instr_queue[iq_no] = NULL;
-	lio_dev->num_iqs--;
-}
-
-static inline uint32_t
-lio_iq_get_available(struct lio_device *lio_dev, uint32_t q_no)
-{
-	return ((lio_dev->instr_queue[q_no]->nb_desc - 1) -
-		(uint32_t)rte_atomic64_read(
-				&lio_dev->instr_queue[q_no]->instr_pending));
-}
-
-static inline int
-lio_iq_is_full(struct lio_device *lio_dev, uint32_t q_no)
-{
-	return ((uint32_t)rte_atomic64_read(
-				&lio_dev->instr_queue[q_no]->instr_pending) >=
-				(lio_dev->instr_queue[q_no]->nb_desc - 2));
-}
-
-static int
-lio_dev_cleanup_iq(struct lio_device *lio_dev, int iq_no)
-{
-	struct lio_instr_queue *iq = lio_dev->instr_queue[iq_no];
-	uint32_t count = 10000;
-
-	while ((lio_iq_get_available(lio_dev, iq_no) < LIO_FLUSH_WM(iq)) &&
-			--count)
-		lio_flush_iq(lio_dev, iq);
-
-	return count ? 0 : 1;
-}
-
-static void
-lio_ctrl_cmd_callback(uint32_t status __rte_unused, void *sc_ptr)
-{
-	struct lio_soft_command *sc = sc_ptr;
-	struct lio_dev_ctrl_cmd *ctrl_cmd;
-	struct lio_ctrl_pkt *ctrl_pkt;
-
-	ctrl_pkt = (struct lio_ctrl_pkt *)sc->ctxptr;
-	ctrl_cmd = ctrl_pkt->ctrl_cmd;
-	ctrl_cmd->cond = 1;
-
-	lio_free_soft_command(sc);
-}
-
-static inline struct lio_soft_command *
-lio_alloc_ctrl_pkt_sc(struct lio_device *lio_dev,
-		      struct lio_ctrl_pkt *ctrl_pkt)
-{
-	struct lio_soft_command *sc = NULL;
-	uint32_t uddsize, datasize;
-	uint32_t rdatasize;
-	uint8_t *data;
-
-	uddsize = (uint32_t)(ctrl_pkt->ncmd.s.more * 8);
-
-	datasize = OCTEON_CMD_SIZE + uddsize;
-	rdatasize = (ctrl_pkt->wait_time) ? 16 : 0;
-
-	sc = lio_alloc_soft_command(lio_dev, datasize,
-				    rdatasize, sizeof(struct lio_ctrl_pkt));
-	if (sc == NULL)
-		return NULL;
-
-	rte_memcpy(sc->ctxptr, ctrl_pkt, sizeof(struct lio_ctrl_pkt));
-
-	data = (uint8_t *)sc->virtdptr;
-
-	rte_memcpy(data, &ctrl_pkt->ncmd, OCTEON_CMD_SIZE);
-
-	lio_swap_8B_data((uint64_t *)data, OCTEON_CMD_SIZE >> 3);
-
-	if (uddsize) {
-		/* Endian-Swap for UDD should have been done by caller. */
-		rte_memcpy(data + OCTEON_CMD_SIZE, ctrl_pkt->udd, uddsize);
-	}
-
-	sc->iq_no = (uint32_t)ctrl_pkt->iq_no;
-
-	lio_prepare_soft_command(lio_dev, sc,
-				 LIO_OPCODE, LIO_OPCODE_CMD,
-				 0, 0, 0);
-
-	sc->callback = lio_ctrl_cmd_callback;
-	sc->callback_arg = sc;
-	sc->wait_time = ctrl_pkt->wait_time;
-
-	return sc;
-}
-
-int
-lio_send_ctrl_pkt(struct lio_device *lio_dev, struct lio_ctrl_pkt *ctrl_pkt)
-{
-	struct lio_soft_command *sc = NULL;
-	int retval;
-
-	sc = lio_alloc_ctrl_pkt_sc(lio_dev, ctrl_pkt);
-	if (sc == NULL) {
-		lio_dev_err(lio_dev, "soft command allocation failed\n");
-		return -1;
-	}
-
-	retval = lio_send_soft_command(lio_dev, sc);
-	if (retval == LIO_IQ_SEND_FAILED) {
-		lio_free_soft_command(sc);
-		lio_dev_err(lio_dev, "Port: %d soft command: %d send failed status: %x\n",
-			    lio_dev->port_id, ctrl_pkt->ncmd.s.cmd, retval);
-		return -1;
-	}
-
-	return retval;
-}
-
-/** Send data packet to the device
- *  @param lio_dev - lio device pointer
- *  @param ndata   - control structure with queueing and buffer information
- *
- *  @returns IQ_FAILED if it failed to add to the input queue, IQ_STOP if the
- *  queue should be stopped, and LIO_IQ_SEND_OK if it was sent okay.
- */
-static inline int
-lio_send_data_pkt(struct lio_device *lio_dev, struct lio_data_pkt *ndata)
-{
-	return lio_send_command(lio_dev, ndata->q_no, &ndata->cmd,
-				ndata->buf, ndata->datasize, ndata->reqtype);
-}
-
-uint16_t
-lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
-{
-	struct lio_instr_queue *txq = tx_queue;
-	union lio_cmd_setup cmdsetup;
-	struct lio_device *lio_dev;
-	struct lio_iq_stats *stats;
-	struct lio_data_pkt ndata;
-	int i, processed = 0;
-	struct rte_mbuf *m;
-	uint32_t tag = 0;
-	int status = 0;
-	int iq_no;
-
-	lio_dev = txq->lio_dev;
-	iq_no = txq->txpciq.s.q_no;
-	stats = &lio_dev->instr_queue[iq_no]->stats;
-
-	if (!lio_dev->intf_open || !lio_dev->linfo.link.s.link_up) {
-		PMD_TX_LOG(lio_dev, ERR, "Transmit failed link_status : %d\n",
-			   lio_dev->linfo.link.s.link_up);
-		goto xmit_failed;
-	}
-
-	lio_dev_cleanup_iq(lio_dev, iq_no);
-
-	for (i = 0; i < nb_pkts; i++) {
-		uint32_t pkt_len = 0;
-
-		m = pkts[i];
-
-		/* Prepare the attributes for the data to be passed to BASE. */
-		memset(&ndata, 0, sizeof(struct lio_data_pkt));
-
-		ndata.buf = m;
-
-		ndata.q_no = iq_no;
-		if (lio_iq_is_full(lio_dev, ndata.q_no)) {
-			stats->tx_iq_busy++;
-			if (lio_dev_cleanup_iq(lio_dev, iq_no)) {
-				PMD_TX_LOG(lio_dev, ERR,
-					   "Transmit failed iq:%d full\n",
-					   ndata.q_no);
-				break;
-			}
-		}
-
-		cmdsetup.cmd_setup64 = 0;
-		cmdsetup.s.iq_no = iq_no;
-
-		/* check checksum offload flags to form cmd */
-		if (m->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
-			cmdsetup.s.ip_csum = 1;
-
-		if (m->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
-			cmdsetup.s.tnl_csum = 1;
-		else if ((m->ol_flags & RTE_MBUF_F_TX_TCP_CKSUM) ||
-				(m->ol_flags & RTE_MBUF_F_TX_UDP_CKSUM))
-			cmdsetup.s.transport_csum = 1;
-
-		if (m->nb_segs == 1) {
-			pkt_len = rte_pktmbuf_data_len(m);
-			cmdsetup.s.u.datasize = pkt_len;
-			lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
-					    &cmdsetup, tag);
-			ndata.cmd.cmd3.dptr = rte_mbuf_data_iova(m);
-			ndata.reqtype = LIO_REQTYPE_NORESP_NET;
-		} else {
-			struct lio_buf_free_info *finfo;
-			struct lio_gather *g;
-			rte_iova_t phyaddr;
-			int i, frags;
-
-			finfo = (struct lio_buf_free_info *)rte_malloc(NULL,
-							sizeof(*finfo), 0);
-			if (finfo == NULL) {
-				PMD_TX_LOG(lio_dev, ERR,
-					   "free buffer alloc failed\n");
-				goto xmit_failed;
-			}
-
-			rte_spinlock_lock(&lio_dev->glist_lock[iq_no]);
-			g = (struct lio_gather *)list_delete_first_node(
-						&lio_dev->glist_head[iq_no]);
-			rte_spinlock_unlock(&lio_dev->glist_lock[iq_no]);
-			if (g == NULL) {
-				PMD_TX_LOG(lio_dev, ERR,
-					   "Transmit scatter gather: glist null!\n");
-				goto xmit_failed;
-			}
-
-			cmdsetup.s.gather = 1;
-			cmdsetup.s.u.gatherptrs = m->nb_segs;
-			lio_prepare_pci_cmd(lio_dev, &ndata.cmd,
-					    &cmdsetup, tag);
-
-			memset(g->sg, 0, g->sg_size);
-			g->sg[0].ptr[0] = rte_mbuf_data_iova(m);
-			lio_add_sg_size(&g->sg[0], m->data_len, 0);
-			pkt_len = m->data_len;
-			finfo->mbuf = m;
-
-			/* First seg taken care above */
-			frags = m->nb_segs - 1;
-			i = 1;
-			m = m->next;
-			while (frags--) {
-				g->sg[(i >> 2)].ptr[(i & 3)] =
-						rte_mbuf_data_iova(m);
-				lio_add_sg_size(&g->sg[(i >> 2)],
-						m->data_len, (i & 3));
-				pkt_len += m->data_len;
-				i++;
-				m = m->next;
-			}
-
-			phyaddr = rte_mem_virt2iova(g->sg);
-			if (phyaddr == RTE_BAD_IOVA) {
-				PMD_TX_LOG(lio_dev, ERR, "bad phys addr\n");
-				goto xmit_failed;
-			}
-
-			ndata.cmd.cmd3.dptr = phyaddr;
-			ndata.reqtype = LIO_REQTYPE_NORESP_NET_SG;
-
-			finfo->g = g;
-			finfo->lio_dev = lio_dev;
-			finfo->iq_no = (uint64_t)iq_no;
-			ndata.buf = finfo;
-		}
-
-		ndata.datasize = pkt_len;
-
-		status = lio_send_data_pkt(lio_dev, &ndata);
-
-		if (unlikely(status == LIO_IQ_SEND_FAILED)) {
-			PMD_TX_LOG(lio_dev, ERR, "send failed\n");
-			break;
-		}
-
-		if (unlikely(status == LIO_IQ_SEND_STOP)) {
-			PMD_TX_LOG(lio_dev, DEBUG, "iq full\n");
-			/* create space as iq is full */
-			lio_dev_cleanup_iq(lio_dev, iq_no);
-		}
-
-		stats->tx_done++;
-		stats->tx_tot_bytes += pkt_len;
-		processed++;
-	}
-
-xmit_failed:
-	stats->tx_dropped += (nb_pkts - processed);
-
-	return processed;
-}
-
-void
-lio_dev_clear_queues(struct rte_eth_dev *eth_dev)
-{
-	struct lio_instr_queue *txq;
-	struct lio_droq *rxq;
-	uint16_t i;
-
-	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
-		txq = eth_dev->data->tx_queues[i];
-		if (txq != NULL) {
-			lio_dev_tx_queue_release(eth_dev, i);
-			eth_dev->data->tx_queues[i] = NULL;
-		}
-	}
-
-	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
-		rxq = eth_dev->data->rx_queues[i];
-		if (rxq != NULL) {
-			lio_dev_rx_queue_release(eth_dev, i);
-			eth_dev->data->rx_queues[i] = NULL;
-		}
-	}
-}
diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
deleted file mode 100644
index d2a45104f0..0000000000
--- a/drivers/net/liquidio/lio_rxtx.h
+++ /dev/null
@@ -1,740 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_RXTX_H_
-#define _LIO_RXTX_H_
-
-#include <stdio.h>
-#include <stdint.h>
-
-#include <rte_spinlock.h>
-#include <rte_memory.h>
-
-#include "lio_struct.h"
-
-#ifndef ROUNDUP4
-#define ROUNDUP4(val) (((val) + 3) & 0xfffffffc)
-#endif
-
-#define LIO_STQUEUE_FIRST_ENTRY(ptr, type, elem)	\
-	(type *)((char *)((ptr)->stqh_first) - offsetof(type, elem))
-
-#define lio_check_timeout(cur_time, chk_time) ((cur_time) > (chk_time))
-
-#define lio_uptime		\
-	(size_t)(rte_get_timer_cycles() / rte_get_timer_hz())
-
-/** Descriptor format.
- *  The descriptor ring is made of descriptors which have 2 64-bit values:
- *  -# Physical (bus) address of the data buffer.
- *  -# Physical (bus) address of a lio_droq_info structure.
- *  The device DMA's incoming packets and its information at the address
- *  given by these descriptor fields.
- */
-struct lio_droq_desc {
-	/** The buffer pointer */
-	uint64_t buffer_ptr;
-
-	/** The Info pointer */
-	uint64_t info_ptr;
-};
-
-#define LIO_DROQ_DESC_SIZE	(sizeof(struct lio_droq_desc))
-
-/** Information about packet DMA'ed by Octeon.
- *  The format of the information available at Info Pointer after Octeon
- *  has posted a packet. Not all descriptors have valid information. Only
- *  the Info field of the first descriptor for a packet has information
- *  about the packet.
- */
-struct lio_droq_info {
-	/** The Output Receive Header. */
-	union octeon_rh rh;
-
-	/** The Length of the packet. */
-	uint64_t length;
-};
-
-#define LIO_DROQ_INFO_SIZE	(sizeof(struct lio_droq_info))
-
-/** Pointer to data buffer.
- *  Driver keeps a pointer to the data buffer that it made available to
- *  the Octeon device. Since the descriptor ring keeps physical (bus)
- *  addresses, this field is required for the driver to keep track of
- *  the virtual address pointers.
- */
-struct lio_recv_buffer {
-	/** Packet buffer, including meta data. */
-	void *buffer;
-
-	/** Data in the packet buffer. */
-	uint8_t *data;
-
-};
-
-#define LIO_DROQ_RECVBUF_SIZE	(sizeof(struct lio_recv_buffer))
-
-#define LIO_DROQ_SIZE		(sizeof(struct lio_droq))
-
-#define LIO_IQ_SEND_OK		0
-#define LIO_IQ_SEND_STOP	1
-#define LIO_IQ_SEND_FAILED	-1
-
-/* conditions */
-#define LIO_REQTYPE_NONE		0
-#define LIO_REQTYPE_NORESP_NET		1
-#define LIO_REQTYPE_NORESP_NET_SG	2
-#define LIO_REQTYPE_SOFT_COMMAND	3
-
-struct lio_request_list {
-	uint32_t reqtype;
-	void *buf;
-};
-
-/*----------------------  INSTRUCTION FORMAT ----------------------------*/
-
-struct lio_instr3_64B {
-	/** Pointer where the input data is available. */
-	uint64_t dptr;
-
-	/** Instruction Header. */
-	uint64_t ih3;
-
-	/** Instruction Header. */
-	uint64_t pki_ih3;
-
-	/** Input Request Header. */
-	uint64_t irh;
-
-	/** opcode/subcode specific parameters */
-	uint64_t ossp[2];
-
-	/** Return Data Parameters */
-	uint64_t rdp;
-
-	/** Pointer where the response for a RAW mode packet will be written
-	 *  by Octeon.
-	 */
-	uint64_t rptr;
-
-};
-
-union lio_instr_64B {
-	struct lio_instr3_64B cmd3;
-};
-
-/** The size of each buffer in soft command buffer pool */
-#define LIO_SOFT_COMMAND_BUFFER_SIZE	1536
-
-/** Maximum number of buffers to allocate into soft command buffer pool */
-#define LIO_MAX_SOFT_COMMAND_BUFFERS	255
-
-struct lio_soft_command {
-	/** Soft command buffer info. */
-	struct lio_stailq_node node;
-	uint64_t dma_addr;
-	uint32_t size;
-
-	/** Command and return status */
-	union lio_instr_64B cmd;
-
-#define LIO_COMPLETION_WORD_INIT	0xffffffffffffffffULL
-	uint64_t *status_word;
-
-	/** Data buffer info */
-	void *virtdptr;
-	uint64_t dmadptr;
-	uint32_t datasize;
-
-	/** Return buffer info */
-	void *virtrptr;
-	uint64_t dmarptr;
-	uint32_t rdatasize;
-
-	/** Context buffer info */
-	void *ctxptr;
-	uint32_t ctxsize;
-
-	/** Time out and callback */
-	size_t wait_time;
-	size_t timeout;
-	uint32_t iq_no;
-	void (*callback)(uint32_t, void *);
-	void *callback_arg;
-	struct rte_mbuf *mbuf;
-};
-
-struct lio_iq_post_status {
-	int status;
-	int index;
-};
-
-/*   wqe
- *  ---------------  0
- * |  wqe  word0-3 |
- *  ---------------  32
- * |    PCI IH     |
- *  ---------------  40
- * |     RPTR      |
- *  ---------------  48
- * |    PCI IRH    |
- *  ---------------  56
- * |    OCTEON_CMD |
- *  ---------------  64
- * | Addtl 8-BData |
- * |               |
- *  ---------------
- */
-
-union octeon_cmd {
-	uint64_t cmd64;
-
-	struct	{
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		uint64_t cmd : 5;
-
-		uint64_t more : 6; /* How many udd words follow the command */
-
-		uint64_t reserved : 29;
-
-		uint64_t param1 : 16;
-
-		uint64_t param2 : 8;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
-		uint64_t param2 : 8;
-
-		uint64_t param1 : 16;
-
-		uint64_t reserved : 29;
-
-		uint64_t more : 6;
-
-		uint64_t cmd : 5;
-
-#endif
-	} s;
-};
-
-#define OCTEON_CMD_SIZE (sizeof(union octeon_cmd))
-
-/* Maximum number of 8-byte words can be
- * sent in a NIC control message.
- */
-#define LIO_MAX_NCTRL_UDD	32
-
-/* Structure of control information passed by driver to the BASE
- * layer when sending control commands to Octeon device software.
- */
-struct lio_ctrl_pkt {
-	/** Command to be passed to the Octeon device software. */
-	union octeon_cmd ncmd;
-
-	/** Send buffer */
-	void *data;
-	uint64_t dmadata;
-
-	/** Response buffer */
-	void *rdata;
-	uint64_t dmardata;
-
-	/** Additional data that may be needed by some commands. */
-	uint64_t udd[LIO_MAX_NCTRL_UDD];
-
-	/** Input queue to use to send this command. */
-	uint64_t iq_no;
-
-	/** Time to wait for Octeon software to respond to this control command.
-	 *  If wait_time is 0, BASE assumes no response is expected.
-	 */
-	size_t wait_time;
-
-	struct lio_dev_ctrl_cmd *ctrl_cmd;
-};
-
-/** Structure of data information passed by driver to the BASE
- *  layer when forwarding data to Octeon device software.
- */
-struct lio_data_pkt {
-	/** Pointer to information maintained by NIC module for this packet. The
-	 *  BASE layer passes this as-is to the driver.
-	 */
-	void *buf;
-
-	/** Type of buffer passed in "buf" above. */
-	uint32_t reqtype;
-
-	/** Total data bytes to be transferred in this command. */
-	uint32_t datasize;
-
-	/** Command to be passed to the Octeon device software. */
-	union lio_instr_64B cmd;
-
-	/** Input queue to use to send this command. */
-	uint32_t q_no;
-};
-
-/** Structure passed by driver to BASE layer to prepare a command to send
- *  network data to Octeon.
- */
-union lio_cmd_setup {
-	struct {
-		uint32_t iq_no : 8;
-		uint32_t gather : 1;
-		uint32_t timestamp : 1;
-		uint32_t ip_csum : 1;
-		uint32_t transport_csum : 1;
-		uint32_t tnl_csum : 1;
-		uint32_t rsvd : 19;
-
-		union {
-			uint32_t datasize;
-			uint32_t gatherptrs;
-		} u;
-	} s;
-
-	uint64_t cmd_setup64;
-};
-
-/* Instruction Header */
-struct octeon_instr_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
-	/** Reserved3 */
-	uint64_t reserved3 : 1;
-
-	/** Gather indicator 1=gather*/
-	uint64_t gather : 1;
-
-	/** Data length OR no. of entries in gather list */
-	uint64_t dlengsz : 14;
-
-	/** Front Data size */
-	uint64_t fsz : 6;
-
-	/** Reserved2 */
-	uint64_t reserved2 : 4;
-
-	/** PKI port kind - PKIND */
-	uint64_t pkind : 6;
-
-	/** Reserved1 */
-	uint64_t reserved1 : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-	/** Reserved1 */
-	uint64_t reserved1 : 32;
-
-	/** PKI port kind - PKIND */
-	uint64_t pkind : 6;
-
-	/** Reserved2 */
-	uint64_t reserved2 : 4;
-
-	/** Front Data size */
-	uint64_t fsz : 6;
-
-	/** Data length OR no. of entries in gather list */
-	uint64_t dlengsz : 14;
-
-	/** Gather indicator 1=gather*/
-	uint64_t gather : 1;
-
-	/** Reserved3 */
-	uint64_t reserved3 : 1;
-
-#endif
-};
-
-/* PKI Instruction Header(PKI IH) */
-struct octeon_instr_pki_ih3 {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
-	/** Wider bit */
-	uint64_t w : 1;
-
-	/** Raw mode indicator 1 = RAW */
-	uint64_t raw : 1;
-
-	/** Use Tag */
-	uint64_t utag : 1;
-
-	/** Use QPG */
-	uint64_t uqpg : 1;
-
-	/** Reserved2 */
-	uint64_t reserved2 : 1;
-
-	/** Parse Mode */
-	uint64_t pm : 3;
-
-	/** Skip Length */
-	uint64_t sl : 8;
-
-	/** Use Tag Type */
-	uint64_t utt : 1;
-
-	/** Tag type */
-	uint64_t tagtype : 2;
-
-	/** Reserved1 */
-	uint64_t reserved1 : 2;
-
-	/** QPG Value */
-	uint64_t qpg : 11;
-
-	/** Tag Value */
-	uint64_t tag : 32;
-
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-
-	/** Tag Value */
-	uint64_t tag : 32;
-
-	/** QPG Value */
-	uint64_t qpg : 11;
-
-	/** Reserved1 */
-	uint64_t reserved1 : 2;
-
-	/** Tag type */
-	uint64_t tagtype : 2;
-
-	/** Use Tag Type */
-	uint64_t utt : 1;
-
-	/** Skip Length */
-	uint64_t sl : 8;
-
-	/** Parse Mode */
-	uint64_t pm : 3;
-
-	/** Reserved2 */
-	uint64_t reserved2 : 1;
-
-	/** Use QPG */
-	uint64_t uqpg : 1;
-
-	/** Use Tag */
-	uint64_t utag : 1;
-
-	/** Raw mode indicator 1 = RAW */
-	uint64_t raw : 1;
-
-	/** Wider bit */
-	uint64_t w : 1;
-#endif
-};
-
-/** Input Request Header */
-struct octeon_instr_irh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-	uint64_t opcode : 4;
-	uint64_t rflag : 1;
-	uint64_t subcode : 7;
-	uint64_t vlan : 12;
-	uint64_t priority : 3;
-	uint64_t reserved : 5;
-	uint64_t ossp : 32; /* opcode/subcode specific parameters */
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-	uint64_t ossp : 32; /* opcode/subcode specific parameters */
-	uint64_t reserved : 5;
-	uint64_t priority : 3;
-	uint64_t vlan : 12;
-	uint64_t subcode : 7;
-	uint64_t rflag : 1;
-	uint64_t opcode : 4;
-#endif
-};
-
-/* pkiih3 + irh + ossp[0] + ossp[1] + rdp + rptr = 40 bytes */
-#define OCTEON_SOFT_CMD_RESP_IH3	(40 + 8)
-/* pki_h3 + irh + ossp[0] + ossp[1] = 32 bytes */
-#define OCTEON_PCI_CMD_O3		(24 + 8)
-
-/** Return Data Parameters */
-struct octeon_instr_rdp {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-	uint64_t reserved : 49;
-	uint64_t pcie_port : 3;
-	uint64_t rlen : 12;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-	uint64_t rlen : 12;
-	uint64_t pcie_port : 3;
-	uint64_t reserved : 49;
-#endif
-};
-
-union octeon_packet_params {
-	uint32_t pkt_params32;
-	struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		uint32_t reserved : 24;
-		uint32_t ip_csum : 1; /* Perform IP header checksum(s) */
-		/* Perform Outer transport header checksum */
-		uint32_t transport_csum : 1;
-		/* Find tunnel, and perform transport csum. */
-		uint32_t tnl_csum : 1;
-		uint32_t tsflag : 1;   /* Timestamp this packet */
-		uint32_t ipsec_ops : 4; /* IPsec operation */
-#else
-		uint32_t ipsec_ops : 4;
-		uint32_t tsflag : 1;
-		uint32_t tnl_csum : 1;
-		uint32_t transport_csum : 1;
-		uint32_t ip_csum : 1;
-		uint32_t reserved : 7;
-#endif
-	} s;
-};
-
-/** Utility function to prepare a 64B NIC instruction based on a setup command
- * @param cmd - pointer to instruction to be filled in.
- * @param setup - pointer to the setup structure
- * @param q_no - which queue for back pressure
- *
- * Assumes the cmd instruction is pre-allocated, but no fields are filled in.
- */
-static inline void
-lio_prepare_pci_cmd(struct lio_device *lio_dev,
-		    union lio_instr_64B *cmd,
-		    union lio_cmd_setup *setup,
-		    uint32_t tag)
-{
-	union octeon_packet_params packet_params;
-	struct octeon_instr_pki_ih3 *pki_ih3;
-	struct octeon_instr_irh *irh;
-	struct octeon_instr_ih3 *ih3;
-	int port;
-
-	memset(cmd, 0, sizeof(union lio_instr_64B));
-
-	ih3 = (struct octeon_instr_ih3 *)&cmd->cmd3.ih3;
-	pki_ih3 = (struct octeon_instr_pki_ih3 *)&cmd->cmd3.pki_ih3;
-
-	/* assume that rflag is cleared so therefore front data will only have
-	 * irh and ossp[1] and ossp[2] for a total of 24 bytes
-	 */
-	ih3->pkind = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.pkind;
-	/* PKI IH */
-	ih3->fsz = OCTEON_PCI_CMD_O3;
-
-	if (!setup->s.gather) {
-		ih3->dlengsz = setup->s.u.datasize;
-	} else {
-		ih3->gather = 1;
-		ih3->dlengsz = setup->s.u.gatherptrs;
-	}
-
-	pki_ih3->w = 1;
-	pki_ih3->raw = 0;
-	pki_ih3->utag = 0;
-	pki_ih3->utt = 1;
-	pki_ih3->uqpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.use_qpg;
-
-	port = (int)lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.port;
-
-	if (tag)
-		pki_ih3->tag = tag;
-	else
-		pki_ih3->tag = LIO_DATA(port);
-
-	pki_ih3->tagtype = OCTEON_ORDERED_TAG;
-	pki_ih3->qpg = lio_dev->instr_queue[setup->s.iq_no]->txpciq.s.qpg;
-	pki_ih3->pm = 0x0; /* parse from L2 */
-	pki_ih3->sl = 32;  /* sl will be sizeof(pki_ih3) + irh + ossp0 + ossp1*/
-
-	irh = (struct octeon_instr_irh *)&cmd->cmd3.irh;
-
-	irh->opcode = LIO_OPCODE;
-	irh->subcode = LIO_OPCODE_NW_DATA;
-
-	packet_params.pkt_params32 = 0;
-	packet_params.s.ip_csum = setup->s.ip_csum;
-	packet_params.s.transport_csum = setup->s.transport_csum;
-	packet_params.s.tnl_csum = setup->s.tnl_csum;
-	packet_params.s.tsflag = setup->s.timestamp;
-
-	irh->ossp = packet_params.pkt_params32;
-}
-
-int lio_setup_sc_buffer_pool(struct lio_device *lio_dev);
-void lio_free_sc_buffer_pool(struct lio_device *lio_dev);
-
-struct lio_soft_command *
-lio_alloc_soft_command(struct lio_device *lio_dev,
-		       uint32_t datasize, uint32_t rdatasize,
-		       uint32_t ctxsize);
-void lio_prepare_soft_command(struct lio_device *lio_dev,
-			      struct lio_soft_command *sc,
-			      uint8_t opcode, uint8_t subcode,
-			      uint32_t irh_ossp, uint64_t ossp0,
-			      uint64_t ossp1);
-int lio_send_soft_command(struct lio_device *lio_dev,
-			  struct lio_soft_command *sc);
-void lio_free_soft_command(struct lio_soft_command *sc);
-
-/** Send control packet to the device
- *  @param lio_dev - lio device pointer
- *  @param nctrl   - control structure with command, timeout, and callback info
- *
- *  @returns IQ_FAILED if it failed to add to the input queue. IQ_STOP if it the
- *  queue should be stopped, and LIO_IQ_SEND_OK if it sent okay.
- */
-int lio_send_ctrl_pkt(struct lio_device *lio_dev,
-		      struct lio_ctrl_pkt *ctrl_pkt);
-
-/** Maximum ordered requests to process in every invocation of
- *  lio_process_ordered_list(). The function will continue to process requests
- *  as long as it can find one that has finished processing. If it keeps
- *  finding requests that have completed, the function can run for ever. The
- *  value defined here sets an upper limit on the number of requests it can
- *  process before it returns control to the poll thread.
- */
-#define LIO_MAX_ORD_REQS_TO_PROCESS	4096
-
-/** Error codes used in Octeon Host-Core communication.
- *
- *   31		16 15		0
- *   ----------------------------
- * |		|		|
- *   ----------------------------
- *   Error codes are 32-bit wide. The upper 16-bits, called Major Error Number,
- *   are reserved to identify the group to which the error code belongs. The
- *   lower 16-bits, called Minor Error Number, carry the actual code.
- *
- *   So error codes are (MAJOR NUMBER << 16)| MINOR_NUMBER.
- */
-/** Status for a request.
- *  If the request is successfully queued, the driver will return
- *  a LIO_REQUEST_PENDING status. LIO_REQUEST_TIMEOUT is only returned by
- *  the driver if the response for request failed to arrive before a
- *  time-out period or if the request processing * got interrupted due to
- *  a signal respectively.
- */
-enum {
-	/** A value of 0x00000000 indicates no error i.e. success */
-	LIO_REQUEST_DONE	= 0x00000000,
-	/** (Major number: 0x0000; Minor Number: 0x0001) */
-	LIO_REQUEST_PENDING	= 0x00000001,
-	LIO_REQUEST_TIMEOUT	= 0x00000003,
-
-};
-
-/*------ Error codes used by firmware (bits 15..0 set by firmware */
-#define LIO_FIRMWARE_MAJOR_ERROR_CODE	 0x0001
-#define LIO_FIRMWARE_STATUS_CODE(status) \
-	((LIO_FIRMWARE_MAJOR_ERROR_CODE << 16) | (status))
-
-/** Initialize the response lists. The number of response lists to create is
- *  given by count.
- *  @param lio_dev - the lio device structure.
- */
-void lio_setup_response_list(struct lio_device *lio_dev);
-
-/** Check the status of first entry in the ordered list. If the instruction at
- *  that entry finished processing or has timed-out, the entry is cleaned.
- *  @param lio_dev - the lio device structure.
- *  @return 1 if the ordered list is empty, 0 otherwise.
- */
-int lio_process_ordered_list(struct lio_device *lio_dev);
-
-#define LIO_INCR_INSTRQUEUE_PKT_COUNT(lio_dev, iq_no, field, count)	\
-	(((lio_dev)->instr_queue[iq_no]->stats.field) += count)
-
-static inline void
-lio_swap_8B_data(uint64_t *data, uint32_t blocks)
-{
-	while (blocks) {
-		*data = rte_cpu_to_be_64(*data);
-		blocks--;
-		data++;
-	}
-}
-
-static inline uint64_t
-lio_map_ring(void *buf)
-{
-	rte_iova_t dma_addr;
-
-	dma_addr = rte_mbuf_data_iova_default(((struct rte_mbuf *)buf));
-
-	return (uint64_t)dma_addr;
-}
-
-static inline uint64_t
-lio_map_ring_info(struct lio_droq *droq, uint32_t i)
-{
-	rte_iova_t dma_addr;
-
-	dma_addr = droq->info_list_dma + (i * LIO_DROQ_INFO_SIZE);
-
-	return (uint64_t)dma_addr;
-}
-
-static inline int
-lio_opcode_slow_path(union octeon_rh *rh)
-{
-	uint16_t subcode1, subcode2;
-
-	subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
-	subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
-
-	return subcode2 != subcode1;
-}
-
-static inline void
-lio_add_sg_size(struct lio_sg_entry *sg_entry,
-		uint16_t size, uint32_t pos)
-{
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-	sg_entry->u.size[pos] = size;
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-	sg_entry->u.size[3 - pos] = size;
-#endif
-}
-
-/* Macro to increment index.
- * Index is incremented by count; if the sum exceeds
- * max, index is wrapped-around to the start.
- */
-static inline uint32_t
-lio_incr_index(uint32_t index, uint32_t count, uint32_t max)
-{
-	if ((index + count) >= max)
-		index = index + count - max;
-	else
-		index += count;
-
-	return index;
-}
-
-int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
-		   int desc_size, struct rte_mempool *mpool,
-		   unsigned int socket_id);
-uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			   uint16_t budget);
-void lio_delete_droq_queue(struct lio_device *lio_dev, int oq_no);
-
-void lio_delete_sglist(struct lio_instr_queue *txq);
-int lio_setup_sglists(struct lio_device *lio_dev, int iq_no,
-		      int fw_mapped_iq, int num_descs, unsigned int socket_id);
-uint16_t lio_dev_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts,
-			   uint16_t nb_pkts);
-int lio_wait_for_instr_fetch(struct lio_device *lio_dev);
-int lio_setup_iq(struct lio_device *lio_dev, int q_index,
-		 union octeon_txpciq iq_no, uint32_t num_descs, void *app_ctx,
-		 unsigned int socket_id);
-int lio_flush_iq(struct lio_device *lio_dev, struct lio_instr_queue *iq);
-void lio_delete_instruction_queue(struct lio_device *lio_dev, int iq_no);
-/** Setup instruction queue zero for the device
- *  @param lio_dev which lio device to setup
- *
- *  @return 0 if success. -1 if fails
- */
-int lio_setup_instr_queue0(struct lio_device *lio_dev);
-void lio_free_instr_queue0(struct lio_device *lio_dev);
-void lio_dev_clear_queues(struct rte_eth_dev *eth_dev);
-#endif	/* _LIO_RXTX_H_ */
diff --git a/drivers/net/liquidio/lio_struct.h b/drivers/net/liquidio/lio_struct.h
deleted file mode 100644
index 10270c560e..0000000000
--- a/drivers/net/liquidio/lio_struct.h
+++ /dev/null
@@ -1,661 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Cavium, Inc
- */
-
-#ifndef _LIO_STRUCT_H_
-#define _LIO_STRUCT_H_
-
-#include <stdio.h>
-#include <stdint.h>
-#include <sys/queue.h>
-
-#include <rte_spinlock.h>
-#include <rte_atomic.h>
-
-#include "lio_hw_defs.h"
-
-struct lio_stailq_node {
-	STAILQ_ENTRY(lio_stailq_node) entries;
-};
-
-STAILQ_HEAD(lio_stailq_head, lio_stailq_node);
-
-struct lio_version {
-	uint16_t major;
-	uint16_t minor;
-	uint16_t micro;
-	uint16_t reserved;
-};
-
-/** Input Queue statistics. Each input queue has four stats fields. */
-struct lio_iq_stats {
-	uint64_t instr_posted; /**< Instructions posted to this queue. */
-	uint64_t instr_processed; /**< Instructions processed in this queue. */
-	uint64_t instr_dropped; /**< Instructions that could not be processed */
-	uint64_t bytes_sent; /**< Bytes sent through this queue. */
-	uint64_t tx_done; /**< Num of packets sent to network. */
-	uint64_t tx_iq_busy; /**< Num of times this iq was found to be full. */
-	uint64_t tx_dropped; /**< Num of pkts dropped due to xmitpath errors. */
-	uint64_t tx_tot_bytes; /**< Total count of bytes sent to network. */
-};
-
-/** Output Queue statistics. Each output queue has four stats fields. */
-struct lio_droq_stats {
-	/** Number of packets received in this queue. */
-	uint64_t pkts_received;
-
-	/** Bytes received by this queue. */
-	uint64_t bytes_received;
-
-	/** Packets dropped due to no memory available. */
-	uint64_t dropped_nomem;
-
-	/** Packets dropped due to large number of pkts to process. */
-	uint64_t dropped_toomany;
-
-	/** Number of packets  sent to stack from this queue. */
-	uint64_t rx_pkts_received;
-
-	/** Number of Bytes sent to stack from this queue. */
-	uint64_t rx_bytes_received;
-
-	/** Num of Packets dropped due to receive path failures. */
-	uint64_t rx_dropped;
-
-	/** Num of vxlan packets received; */
-	uint64_t rx_vxlan;
-
-	/** Num of failures of rte_pktmbuf_alloc() */
-	uint64_t rx_alloc_failure;
-
-};
-
-/** The Descriptor Ring Output Queue structure.
- *  This structure has all the information required to implement a
- *  DROQ.
- */
-struct lio_droq {
-	/** A spinlock to protect access to this ring. */
-	rte_spinlock_t lock;
-
-	uint32_t q_no;
-
-	uint32_t pkt_count;
-
-	struct lio_device *lio_dev;
-
-	/** The 8B aligned descriptor ring starts at this address. */
-	struct lio_droq_desc *desc_ring;
-
-	/** Index in the ring where the driver should read the next packet */
-	uint32_t read_idx;
-
-	/** Index in the ring where Octeon will write the next packet */
-	uint32_t write_idx;
-
-	/** Index in the ring where the driver will refill the descriptor's
-	 * buffer
-	 */
-	uint32_t refill_idx;
-
-	/** Packets pending to be processed */
-	rte_atomic64_t pkts_pending;
-
-	/** Number of  descriptors in this ring. */
-	uint32_t nb_desc;
-
-	/** The number of descriptors pending refill. */
-	uint32_t refill_count;
-
-	uint32_t refill_threshold;
-
-	/** The 8B aligned info ptrs begin from this address. */
-	struct lio_droq_info *info_list;
-
-	/** The receive buffer list. This list has the virtual addresses of the
-	 *  buffers.
-	 */
-	struct lio_recv_buffer *recv_buf_list;
-
-	/** The size of each buffer pointed by the buffer pointer. */
-	uint32_t buffer_size;
-
-	/** Pointer to the mapped packet credit register.
-	 *  Host writes number of info/buffer ptrs available to this register
-	 */
-	void *pkts_credit_reg;
-
-	/** Pointer to the mapped packet sent register.
-	 *  Octeon writes the number of packets DMA'ed to host memory
-	 *  in this register.
-	 */
-	void *pkts_sent_reg;
-
-	/** Statistics for this DROQ. */
-	struct lio_droq_stats stats;
-
-	/** DMA mapped address of the DROQ descriptor ring. */
-	size_t desc_ring_dma;
-
-	/** Info ptr list are allocated at this virtual address. */
-	size_t info_base_addr;
-
-	/** DMA mapped address of the info list */
-	size_t info_list_dma;
-
-	/** Allocated size of info list. */
-	uint32_t info_alloc_size;
-
-	/** Memory zone **/
-	const struct rte_memzone *desc_ring_mz;
-	const struct rte_memzone *info_mz;
-	struct rte_mempool *mpool;
-};
-
-/** Receive Header */
-union octeon_rh {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-	uint64_t rh64;
-	struct	{
-		uint64_t opcode : 4;
-		uint64_t subcode : 8;
-		uint64_t len : 3; /** additional 64-bit words */
-		uint64_t reserved : 17;
-		uint64_t ossp : 32; /** opcode/subcode specific parameters */
-	} r;
-	struct	{
-		uint64_t opcode : 4;
-		uint64_t subcode : 8;
-		uint64_t len : 3; /** additional 64-bit words */
-		uint64_t extra : 28;
-		uint64_t vlan : 12;
-		uint64_t priority : 3;
-		uint64_t csum_verified : 3; /** checksum verified. */
-		uint64_t has_hwtstamp : 1; /** Has hardware timestamp.1 = yes.*/
-		uint64_t encap_on : 1;
-		uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
-	} r_dh;
-	struct {
-		uint64_t opcode : 4;
-		uint64_t subcode : 8;
-		uint64_t len : 3; /** additional 64-bit words */
-		uint64_t reserved : 8;
-		uint64_t extra : 25;
-		uint64_t gmxport : 16;
-	} r_nic_info;
-#else
-	uint64_t rh64;
-	struct {
-		uint64_t ossp : 32; /** opcode/subcode specific parameters */
-		uint64_t reserved : 17;
-		uint64_t len : 3; /** additional 64-bit words */
-		uint64_t subcode : 8;
-		uint64_t opcode : 4;
-	} r;
-	struct {
-		uint64_t has_hash : 1; /** Has hash (rth or rss). 1 = yes. */
-		uint64_t encap_on : 1;
-		uint64_t has_hwtstamp : 1;  /** 1 = has hwtstamp */
-		uint64_t csum_verified : 3; /** checksum verified. */
-		uint64_t priority : 3;
-		uint64_t vlan : 12;
-		uint64_t extra : 28;
-		uint64_t len : 3; /** additional 64-bit words */
-		uint64_t subcode : 8;
-		uint64_t opcode : 4;
-	} r_dh;
-	struct {
-		uint64_t gmxport : 16;
-		uint64_t extra : 25;
-		uint64_t reserved : 8;
-		uint64_t len : 3; /** additional 64-bit words */
-		uint64_t subcode : 8;
-		uint64_t opcode : 4;
-	} r_nic_info;
-#endif
-};
-
-#define OCTEON_RH_SIZE (sizeof(union octeon_rh))
-
-/** The txpciq info passed to host from the firmware */
-union octeon_txpciq {
-	uint64_t txpciq64;
-
-	struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		uint64_t q_no : 8;
-		uint64_t port : 8;
-		uint64_t pkind : 6;
-		uint64_t use_qpg : 1;
-		uint64_t qpg : 11;
-		uint64_t aura_num : 10;
-		uint64_t reserved : 20;
-#else
-		uint64_t reserved : 20;
-		uint64_t aura_num : 10;
-		uint64_t qpg : 11;
-		uint64_t use_qpg : 1;
-		uint64_t pkind : 6;
-		uint64_t port : 8;
-		uint64_t q_no : 8;
-#endif
-	} s;
-};
-
-/** The instruction (input) queue.
- *  The input queue is used to post raw (instruction) mode data or packet
- *  data to Octeon device from the host. Each input queue for
- *  a LIO device has one such structure to represent it.
- */
-struct lio_instr_queue {
-	/** A spinlock to protect access to the input ring.  */
-	rte_spinlock_t lock;
-
-	rte_spinlock_t post_lock;
-
-	struct lio_device *lio_dev;
-
-	uint32_t pkt_in_done;
-
-	rte_atomic64_t iq_flush_running;
-
-	/** Flag that indicates if the queue uses 64 byte commands. */
-	uint32_t iqcmd_64B:1;
-
-	/** Queue info. */
-	union octeon_txpciq txpciq;
-
-	uint32_t rsvd:17;
-
-	uint32_t status:8;
-
-	/** Number of  descriptors in this ring. */
-	uint32_t nb_desc;
-
-	/** Index in input ring where the driver should write the next packet */
-	uint32_t host_write_index;
-
-	/** Index in input ring where Octeon is expected to read the next
-	 *  packet.
-	 */
-	uint32_t lio_read_index;
-
-	/** This index aids in finding the window in the queue where Octeon
-	 *  has read the commands.
-	 */
-	uint32_t flush_index;
-
-	/** This field keeps track of the instructions pending in this queue. */
-	rte_atomic64_t instr_pending;
-
-	/** Pointer to the Virtual Base addr of the input ring. */
-	uint8_t *base_addr;
-
-	struct lio_request_list *request_list;
-
-	/** Octeon doorbell register for the ring. */
-	void *doorbell_reg;
-
-	/** Octeon instruction count register for this ring. */
-	void *inst_cnt_reg;
-
-	/** Number of instructions pending to be posted to Octeon. */
-	uint32_t fill_cnt;
-
-	/** Statistics for this input queue. */
-	struct lio_iq_stats stats;
-
-	/** DMA mapped base address of the input descriptor ring. */
-	uint64_t base_addr_dma;
-
-	/** Application context */
-	void *app_ctx;
-
-	/* network stack queue index */
-	int q_index;
-
-	/* Memory zone */
-	const struct rte_memzone *iq_mz;
-};
-
-/** This structure is used by driver to store information required
- *  to free the mbuff when the packet has been fetched by Octeon.
- *  Bytes offset below assume worst-case of a 64-bit system.
- */
-struct lio_buf_free_info {
-	/** Bytes 1-8. Pointer to network device private structure. */
-	struct lio_device *lio_dev;
-
-	/** Bytes 9-16. Pointer to mbuff. */
-	struct rte_mbuf *mbuf;
-
-	/** Bytes 17-24. Pointer to gather list. */
-	struct lio_gather *g;
-
-	/** Bytes 25-32. Physical address of mbuf->data or gather list. */
-	uint64_t dptr;
-
-	/** Bytes 33-47. Piggybacked soft command, if any */
-	struct lio_soft_command *sc;
-
-	/** Bytes 48-63. iq no */
-	uint64_t iq_no;
-};
-
-/* The Scatter-Gather List Entry. The scatter or gather component used with
- * input instruction has this format.
- */
-struct lio_sg_entry {
-	/** The first 64 bit gives the size of data in each dptr. */
-	union {
-		uint16_t size[4];
-		uint64_t size64;
-	} u;
-
-	/** The 4 dptr pointers for this entry. */
-	uint64_t ptr[4];
-};
-
-#define LIO_SG_ENTRY_SIZE	(sizeof(struct lio_sg_entry))
-
-/** Structure of a node in list of gather components maintained by
- *  driver for each network device.
- */
-struct lio_gather {
-	/** List manipulation. Next and prev pointers. */
-	struct lio_stailq_node list;
-
-	/** Size of the gather component at sg in bytes. */
-	int sg_size;
-
-	/** Number of bytes that sg was adjusted to make it 8B-aligned. */
-	int adjust;
-
-	/** Gather component that can accommodate max sized fragment list
-	 *  received from the IP layer.
-	 */
-	struct lio_sg_entry *sg;
-};
-
-struct lio_rss_ctx {
-	uint16_t hash_key_size;
-	uint8_t  hash_key[LIO_RSS_MAX_KEY_SZ];
-	/* Ideally a factor of number of queues */
-	uint8_t  itable[LIO_RSS_MAX_TABLE_SZ];
-	uint8_t  itable_size;
-	uint8_t  ip;
-	uint8_t  tcp_hash;
-	uint8_t  ipv6;
-	uint8_t  ipv6_tcp_hash;
-	uint8_t  ipv6_ex;
-	uint8_t  ipv6_tcp_ex_hash;
-	uint8_t  hash_disable;
-};
-
-struct lio_io_enable {
-	uint64_t iq;
-	uint64_t oq;
-	uint64_t iq64B;
-};
-
-struct lio_fn_list {
-	void (*setup_iq_regs)(struct lio_device *, uint32_t);
-	void (*setup_oq_regs)(struct lio_device *, uint32_t);
-
-	int (*setup_mbox)(struct lio_device *);
-	void (*free_mbox)(struct lio_device *);
-
-	int (*setup_device_regs)(struct lio_device *);
-	int (*enable_io_queues)(struct lio_device *);
-	void (*disable_io_queues)(struct lio_device *);
-};
-
-struct lio_pf_vf_hs_word {
-#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
-	/** PKIND value assigned for the DPI interface */
-	uint64_t pkind : 8;
-
-	/** OCTEON core clock multiplier */
-	uint64_t core_tics_per_us : 16;
-
-	/** OCTEON coprocessor clock multiplier */
-	uint64_t coproc_tics_per_us : 16;
-
-	/** app that currently running on OCTEON */
-	uint64_t app_mode : 8;
-
-	/** RESERVED */
-	uint64_t reserved : 16;
-
-#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-
-	/** RESERVED */
-	uint64_t reserved : 16;
-
-	/** app that currently running on OCTEON */
-	uint64_t app_mode : 8;
-
-	/** OCTEON coprocessor clock multiplier */
-	uint64_t coproc_tics_per_us : 16;
-
-	/** OCTEON core clock multiplier */
-	uint64_t core_tics_per_us : 16;
-
-	/** PKIND value assigned for the DPI interface */
-	uint64_t pkind : 8;
-#endif
-};
-
-struct lio_sriov_info {
-	/** Number of rings assigned to VF */
-	uint32_t rings_per_vf;
-
-	/** Number of VF devices enabled */
-	uint32_t num_vfs;
-};
-
-/* Head of a response list */
-struct lio_response_list {
-	/** List structure to add delete pending entries to */
-	struct lio_stailq_head head;
-
-	/** A lock for this response list */
-	rte_spinlock_t lock;
-
-	rte_atomic64_t pending_req_count;
-};
-
-/* Structure to define the configuration attributes for each Input queue. */
-struct lio_iq_config {
-	/* Max number of IQs available */
-	uint8_t max_iqs;
-
-	/** Pending list size (usually set to the sum of the size of all Input
-	 *  queues)
-	 */
-	uint32_t pending_list_size;
-
-	/** Command size - 32 or 64 bytes */
-	uint32_t instr_type;
-};
-
-/* Structure to define the configuration attributes for each Output queue. */
-struct lio_oq_config {
-	/* Max number of OQs available */
-	uint8_t max_oqs;
-
-	/** If set, the Output queue uses info-pointer mode. (Default: 1 ) */
-	uint32_t info_ptr;
-
-	/** The number of buffers that were consumed during packet processing by
-	 *  the driver on this Output queue before the driver attempts to
-	 *  replenish the descriptor ring with new buffers.
-	 */
-	uint32_t refill_threshold;
-};
-
-/* Structure to define the configuration. */
-struct lio_config {
-	uint16_t card_type;
-	const char *card_name;
-
-	/** Input Queue attributes. */
-	struct lio_iq_config iq;
-
-	/** Output Queue attributes. */
-	struct lio_oq_config oq;
-
-	int num_nic_ports;
-
-	int num_def_tx_descs;
-
-	/* Num of desc for rx rings */
-	int num_def_rx_descs;
-
-	int def_rx_buf_size;
-};
-
-/** Status of a RGMII Link on Octeon as seen by core driver. */
-union octeon_link_status {
-	uint64_t link_status64;
-
-	struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		uint64_t duplex : 8;
-		uint64_t mtu : 16;
-		uint64_t speed : 16;
-		uint64_t link_up : 1;
-		uint64_t autoneg : 1;
-		uint64_t if_mode : 5;
-		uint64_t pause : 1;
-		uint64_t flashing : 1;
-		uint64_t reserved : 15;
-#else
-		uint64_t reserved : 15;
-		uint64_t flashing : 1;
-		uint64_t pause : 1;
-		uint64_t if_mode : 5;
-		uint64_t autoneg : 1;
-		uint64_t link_up : 1;
-		uint64_t speed : 16;
-		uint64_t mtu : 16;
-		uint64_t duplex : 8;
-#endif
-	} s;
-};
-
-/** The rxpciq info passed to host from the firmware */
-union octeon_rxpciq {
-	uint64_t rxpciq64;
-
-	struct {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-		uint64_t q_no : 8;
-		uint64_t reserved : 56;
-#else
-		uint64_t reserved : 56;
-		uint64_t q_no : 8;
-#endif
-	} s;
-};
-
-/** Information for a OCTEON ethernet interface shared between core & host. */
-struct octeon_link_info {
-	union octeon_link_status link;
-	uint64_t hw_addr;
-
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
-	uint64_t gmxport : 16;
-	uint64_t macaddr_is_admin_assigned : 1;
-	uint64_t vlan_is_admin_assigned : 1;
-	uint64_t rsvd : 30;
-	uint64_t num_txpciq : 8;
-	uint64_t num_rxpciq : 8;
-#else
-	uint64_t num_rxpciq : 8;
-	uint64_t num_txpciq : 8;
-	uint64_t rsvd : 30;
-	uint64_t vlan_is_admin_assigned : 1;
-	uint64_t macaddr_is_admin_assigned : 1;
-	uint64_t gmxport : 16;
-#endif
-
-	union octeon_txpciq txpciq[LIO_MAX_IOQS_PER_IF];
-	union octeon_rxpciq rxpciq[LIO_MAX_IOQS_PER_IF];
-};
-
-/* -----------------------  THE LIO DEVICE  --------------------------- */
-/** The lio device.
- *  Each lio device has this structure to represent all its
- *  components.
- */
-struct lio_device {
-	/** PCI device pointer */
-	struct rte_pci_device *pci_dev;
-
-	/** Octeon Chip type */
-	uint16_t chip_id;
-	uint16_t pf_num;
-	uint16_t vf_num;
-
-	/** This device's PCIe port used for traffic. */
-	uint16_t pcie_port;
-
-	/** The state of this device */
-	rte_atomic64_t status;
-
-	uint8_t intf_open;
-
-	struct octeon_link_info linfo;
-
-	uint8_t *hw_addr;
-
-	struct lio_fn_list fn_list;
-
-	uint32_t num_iqs;
-
-	/** Guards each glist */
-	rte_spinlock_t *glist_lock;
-	/** Array of gather component linked lists */
-	struct lio_stailq_head *glist_head;
-
-	/* The pool containing pre allocated buffers used for soft commands */
-	struct rte_mempool *sc_buf_pool;
-
-	/** The input instruction queues */
-	struct lio_instr_queue *instr_queue[LIO_MAX_POSSIBLE_INSTR_QUEUES];
-
-	/** The singly-linked tail queues of instruction response */
-	struct lio_response_list response_list;
-
-	uint32_t num_oqs;
-
-	/** The DROQ output queues  */
-	struct lio_droq *droq[LIO_MAX_POSSIBLE_OUTPUT_QUEUES];
-
-	struct lio_io_enable io_qmask;
-
-	struct lio_sriov_info sriov_info;
-
-	struct lio_pf_vf_hs_word pfvf_hsword;
-
-	/** Mail Box details of each lio queue. */
-	struct lio_mbox **mbox;
-
-	char dev_string[LIO_DEVICE_NAME_LEN]; /* Device print string */
-
-	const struct lio_config *default_config;
-
-	struct rte_eth_dev      *eth_dev;
-
-	uint64_t ifflags;
-	uint8_t max_rx_queues;
-	uint8_t max_tx_queues;
-	uint8_t nb_rx_queues;
-	uint8_t nb_tx_queues;
-	uint8_t port_configured;
-	struct lio_rss_ctx rss_state;
-	uint16_t port_id;
-	char firmware_version[LIO_FW_VERSION_LENGTH];
-};
-#endif /* _LIO_STRUCT_H_ */
diff --git a/drivers/net/liquidio/meson.build b/drivers/net/liquidio/meson.build
deleted file mode 100644
index ebadbf3dea..0000000000
--- a/drivers/net/liquidio/meson.build
+++ /dev/null
@@ -1,16 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-if is_windows
-    build = false
-    reason = 'not supported on Windows'
-    subdir_done()
-endif
-
-sources = files(
-        'base/lio_23xx_vf.c',
-        'base/lio_mbox.c',
-        'lio_ethdev.c',
-        'lio_rxtx.c',
-)
-includes += include_directories('base')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index b1df17ce8c..f68bbc27a7 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -36,7 +36,6 @@ drivers = [
         'ipn3ke',
         'ixgbe',
         'kni',
-        'liquidio',
         'mana',
         'memif',
         'mlx4',
-- 
2.40.1


^ permalink raw reply	[relevance 1%]

* Minutes of Technical Board Meeting, 2023-01-11
       [not found]       ` <DS0PR11MB73090EC350B82E0730D0D9A197CE9@DS0PR11MB7309.namprd11.prod.outlook.com>
@ 2023-05-05 15:05  3%     ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-05-05 15:05 UTC (permalink / raw)
  To: dev; +Cc: Thomas Monjalon


NOTE: The technical board meetings are on every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC. Meetings are public, and DPDK
community members are welcome to attend.

NOTE: Next meeting will be on Wednesday 2023-01-25 @ 3pm UTC, and will
be chaired by Bruce.

Agenda Items
============

1) C99 standard
---------------

Future support for C11 atomics raised the question of whether C99 should
be required for DPDK. Several places already use C99, but it is not
project wide. Some C11 features are already in use, marked as extensions
where they appear.

The open issues are:
  - DPDK should not force applications to build with C99, but should
    allow applications that use C99. This impacts inline functions in
    headers (see the example below).
  - Needs an announcement. Should not cause API/ABI breakage.
  - The testing and infrastructure are impacted as well.
  - Inline functions need to be kept for performance reasons.

Bruce is adding build support for testing and compatibility, and is
investigating the fallout from project-wide enablement.
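
For illustration only (the helper below is hypothetical, not part of any
DPDK header): a static inline function in a public header that relies on
a C99-only construct forces every application that includes the header to
build with at least C99, which is why inline code in headers is the
sensitive part.

/* Hypothetical example, not part of DPDK. */
static inline int
rte_example_sum(const int *vals, int n)
{
	int sum = 0;

	for (int i = 0; i < n; i++) /* C99: declaration inside for() */
		sum += vals[i];
	return sum;
}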

2) Technical Writer
------------------

Possible candidate did not work out. Two candidates under
review.

3) MIT License
--------------

Original Governing Board wording for MIT license exception
became overly complicated. Linux Foundation legal expert
revised it. Governing Board is reviewing.

4) Governing Board
------------------

DPDK Technical Board member to Governing Board:
  - past Stephen; current Thomas; next Aaron

Recent vote on modification to charter to codify treasurer role.

Discussion on marketing. The existing Linux Foundation model has the
DPDK project paying for things that are not necessary while not getting
the expected support.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver
  @ 2023-05-02 14:18  5% ` Ferruh Yigit
  2023-05-08 13:44  1% ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
  1 sibling, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-05-02 14:18 UTC (permalink / raw)
  To: jerinj, dev, Thomas Monjalon, Shijith Thotton,
	Srisivasubramanian Srinivasan, Anatoly Burakov, David Marchand

On 4/28/2023 11:31 AM, jerinj@marvell.com wrote:
> From: Jerin Jacob <jerinj@marvell.com>
> 
> The LiquidIO product line has been substituted with CN9K/CN10K
> OCTEON product line smart NICs located at drivers/net/octeon_ep/.
> 
> DPDK 20.08 has categorized the LiquidIO driver as UNMAINTAINED
> because of the absence of updates in the driver.
> 
> Due to the above reasons, the driver is removed from DPDK 23.07.
> 
> Also removed deprecation notice entry for the removal in
> doc/guides/rel_notes/deprecation.rst.
> 
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> ---
>  MAINTAINERS                              |    8 -
>  doc/guides/nics/features/liquidio.ini    |   29 -
>  doc/guides/nics/index.rst                |    1 -
>  doc/guides/nics/liquidio.rst             |  169 --
>  doc/guides/rel_notes/deprecation.rst     |    7 -
>  doc/guides/rel_notes/release_23_07.rst   |    9 +-
>  drivers/net/liquidio/base/lio_23xx_reg.h |  165 --
>  drivers/net/liquidio/base/lio_23xx_vf.c  |  513 ------
>  drivers/net/liquidio/base/lio_23xx_vf.h  |   63 -
>  drivers/net/liquidio/base/lio_hw_defs.h  |  239 ---
>  drivers/net/liquidio/base/lio_mbox.c     |  246 ---
>  drivers/net/liquidio/base/lio_mbox.h     |  102 -
>  drivers/net/liquidio/lio_ethdev.c        | 2147 ----------------------
>  drivers/net/liquidio/lio_ethdev.h        |  179 --
>  drivers/net/liquidio/lio_logs.h          |   58 -
>  drivers/net/liquidio/lio_rxtx.c          | 1804 ------------------
>  drivers/net/liquidio/lio_rxtx.h          |  740 --------
>  drivers/net/liquidio/lio_struct.h        |  661 -------
>  drivers/net/liquidio/meson.build         |   16 -
>  drivers/net/meson.build                  |    1 -
>  20 files changed, 1 insertion(+), 7156 deletions(-)
>  delete mode 100644 doc/guides/nics/features/liquidio.ini
>  delete mode 100644 doc/guides/nics/liquidio.rst
>  delete mode 100644 drivers/net/liquidio/base/lio_23xx_reg.h
>  delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.c
>  delete mode 100644 drivers/net/liquidio/base/lio_23xx_vf.h
>  delete mode 100644 drivers/net/liquidio/base/lio_hw_defs.h
>  delete mode 100644 drivers/net/liquidio/base/lio_mbox.c
>  delete mode 100644 drivers/net/liquidio/base/lio_mbox.h
>  delete mode 100644 drivers/net/liquidio/lio_ethdev.c
>  delete mode 100644 drivers/net/liquidio/lio_ethdev.h
>  delete mode 100644 drivers/net/liquidio/lio_logs.h
>  delete mode 100644 drivers/net/liquidio/lio_rxtx.c
>  delete mode 100644 drivers/net/liquidio/lio_rxtx.h
>  delete mode 100644 drivers/net/liquidio/lio_struct.h
>  delete mode 100644 drivers/net/liquidio/meson.build
> 

This causes a warning in the ABI check script [1], not because there is an
ABI breakage, but because of how the script works; that needs to be fixed as well.

[1]
Checking ABI compatibility of build-gcc-shared
.../dpdk-next-net/devtools/../devtools/check-abi.sh
/tmp/dpdk-abiref/v22.11.1/build-gcc-shared
.../dpdk-next-net/build-gcc-shared/install
Error: cannot find librte_net_liquidio.so.23.0 in
.../dpdk-next-net/build-gcc-shared/install

<...>

> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -59,14 +59,7 @@ New Features
>  Removed Items
>  -------------
>  
> -.. This section should contain removed items in this release. Sample format:
> -
> -   * Add a short 1-2 sentence description of the removed item
> -     in the past tense.
> -
> -   This section is a comment. Do not overwrite or remove it.
> -   Also, make sure to start the actual text at the margin.
> -   =======================================================
> +* Removed LiquidIO ethdev driver located at ``drivers/net/liquidio/``
>  
>  

No need to remove the section comment.

Rest looks good to me.


^ permalink raw reply	[relevance 5%]

* [PATCH v8 10/14] eal: expand most macros to empty when using MSVC
  @ 2023-05-02  3:15  5%   ` Tyler Retzlaff
  2023-05-02  3:15  3%   ` [PATCH v8 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-05-02  3:15 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

For now, expand a lot of common rte macros to empty. The catch here is
that we need to test that most of the macros do what they should, but at
the same time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for MSVC, and where that is not possible, provide some
alternate macros to achieve the same outcome.
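
As an illustration of that direction only, a sketch of what MSVC-aware
expansions could later look like for a few of these macros (this is an
assumption about possible future work, not part of this patch):

/* Sketch of possible future MSVC-aware expansions (illustrative only). */
#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a)    __attribute__((__aligned__(a)))
#define __rte_noreturn      __attribute__((noreturn))
#define __rte_always_inline inline __attribute__((always_inline))
#else
#define __rte_aligned(a)    __declspec(align(a))
#define __rte_noreturn      __declspec(noreturn)
#define __rte_always_inline __forceinline
#endif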

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 +++++
 lib/eal/include/rte_common.h            | 54 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 ++++++++++++
 3 files changed, 82 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..1eff9f6 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(!!(x))
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(!!(x))
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..0c55a23 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -41,6 +41,10 @@
 #define RTE_STD_C11
 #endif
 
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
 /*
  * RTE_TOOLCHAIN_GCC is defined if the target is built with GCC,
  * while a host application (like pmdinfogen) may have another compiler.
@@ -65,7 +69,11 @@
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
 /**
  * Force a structure to be packed
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
 
 /**
  * Macro to mark a type that is not subject to type-based aliasing rules
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
 /**
  * Force symbol to be generated even if it appears to be unused.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
 
 /*********** Macros to eliminate unused variable warnings ********/
 
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +178,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,12 +861,17 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  struct wrapper *w = container_of(x, struct wrapper, c);
  */
 #ifndef container_of
+#ifndef RTE_TOOLCHAIN_MSVC
 #define container_of(ptr, type, member)	__extension__ ({		\
 			const typeof(((type *)0)->member) *_ptr = (ptr); \
 			__rte_unused type *_target_ptr =	\
 				(type *)(ptr);				\
 			(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
 		})
+#else
+#define container_of(ptr, type, member) \
+			((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#endif
 #endif
 
 /** Swap two variables. */
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 5%]

* [PATCH v8 12/14] telemetry: avoid expanding versioned symbol macros on MSVC
    2023-05-02  3:15  5%   ` [PATCH v8 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-05-02  3:15  3%   ` Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-05-02  3:15 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.

Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
  2023-04-18  8:33  4%     ` Jerin Jacob
@ 2023-04-24 22:41  3%       ` Thomas Monjalon
  2023-05-19  8:07  4%         ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-04-24 22:41 UTC (permalink / raw)
  To: Stephen Hemminger, Jerin Jacob
  Cc: Nithin Dabilpuram, Akhil Goyal, jerinj, dev, Morten Brørup,
	techboard

18/04/2023 10:33, Jerin Jacob:
> On Tue, Apr 11, 2023 at 11:36 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > On Tue, 11 Apr 2023 15:34:07 +0530
> > Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:
> >
> > > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > > index 4bacf9fcd9..866cd4e8ee 100644
> > > --- a/lib/security/rte_security.h
> > > +++ b/lib/security/rte_security.h
> > > @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
> > >        */
> > >       uint32_t ip_reassembly_en : 1;
> > >
> > > +     /** Enable out of place processing on inline inbound packets.
> > > +      *
> > > +      * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> > > +      *      inbound SA if supported by driver. PMD need to register mbuf
> > > +      *      dynamic field using rte_security_oop_dynfield_register()
> > > +      *      and security session creation would fail if dynfield is not
> > > +      *      registered successfully.
> > > +      * * 0: Disable OOP processing for this session (default).
> > > +      */
> > > +     uint32_t ingress_oop : 1;
> > > +
> > >       /** Reserved bit fields for future extension
> > >        *
> > >        * User should ensure reserved_opts is cleared as it may change in
> > > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> > >        *
> > >        * Note: Reduce number of bits in reserved_opts for every new option.
> > >        */
> > > -     uint32_t reserved_opts : 17;
> > > +     uint32_t reserved_opts : 16;
> > >  };
> >
> > NAK
> > Let me repeat the reserved bit rant. YAGNI
> >
> > Reserved space is not usable without ABI breakage unless the existing
> > code enforces that reserved space has to be zero.
> >
> > Just saying "User should ensure reserved_opts is cleared" is not enough.
> 
> Yes. I think we need to enforce having _init functions for the
> structures which use reserved fields.
> 
> On the same note on YAGNI, I am wondering why NOT introduce an
> RTE_NEXT_ABI macro kind of scheme to compile out ABI-breaking changes.
> By keeping RTE_NEXT_ABI disabled by default and enabling it explicitly if the
> user wants it, we avoid waiting one year for any ABI-breaking change.
> There are a lot of "fixed appliance" customers (not OS-distribution-driven
> customers) who are willing to recompile DPDK for a new feature.
> What are we losing with this scheme?

RTE_NEXT_ABI is described in the ABI policy.
We are not doing it currently, but I think we could
when it does not complicate the code too much.

The only problems I see are:
- more #ifdef clutter
- 2 binary versions to test
- CI and checks must handle RTE_NEXT_ABI version
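
To make the "#ifdef clutter" point concrete, here is a sketch of how the
bitfield discussed in this thread would look if gated behind RTE_NEXT_ABI
(the struct name is suffixed to mark it as illustrative only):

#include <stdint.h>

struct rte_security_ipsec_sa_options_sketch {
	uint32_t ip_reassembly_en : 1;
#ifdef RTE_NEXT_ABI
	/* Only users who opt in to the next ABI see the new option. */
	uint32_t ingress_oop : 1;
	uint32_t reserved_opts : 16;
#else
	uint32_t reserved_opts : 17;
#endif
};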




^ permalink raw reply	[relevance 3%]

* Re: [RFC] lib: set/get max memzone segments
  @ 2023-04-21  8:34  4%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-04-21  8:34 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: Ophir Munk, dev, Bruce Richardson, Devendra Singh Rawat,
	Alok Prasad, Matan Azrad, Lior Margalit

20/04/2023 20:20, Tyler Retzlaff:
> On Thu, Apr 20, 2023 at 09:43:28AM +0200, Thomas Monjalon wrote:
> > 19/04/2023 16:51, Tyler Retzlaff:
> > > On Wed, Apr 19, 2023 at 11:36:34AM +0300, Ophir Munk wrote:
> > > > In current DPDK the RTE_MAX_MEMZONE definition is unconditionally hard
> > > > coded as 2560.  For applications requiring different values of this
> > > > parameter – it is more convenient to set the max value via an rte API -
> > > > rather than changing the dpdk source code per application.  In many
> > > > organizations, the possibility to compile a private DPDK library for a
> > > > particular application does not exist at all.  With this option there is
> > > > no need to recompile DPDK and it allows using an in-box packaged DPDK.
> > > > An example usage for updating the RTE_MAX_MEMZONE would be of an
> > > > application that uses the DPDK mempool library which is based on DPDK
> > > > memzone library.  The application may need to create a number of
> > > > steering tables, each of which will require its own mempool allocation.
> > > > This commit is not about how to optimize the application usage of
> > > > mempool nor about how to improve the mempool implementation based on
> > > > memzone.  It is about how to make the max memzone definition - run-time
> > > > customized.
> > > > This commit adds an API which must be called before rte_eal_init():
> > > > rte_memzone_max_set(int max).  If not called, the default memzone
> > > > (RTE_MAX_MEMZONE) is used.  There is also an API to query the effective
> > > > max memzone: rte_memzone_max_get().
> > > > 
> > > > Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
> > > > ---
> > > 
> > > the use case of each application may want a different non-hard coded
> > > value makes sense.
> > > 
> > > it's less clear to me that requiring it be called before eal init makes
> > > sense over just providing it as configuration to eal init so that it is
> > > composed.
> > 
> > Why do you think it would be better as EAL init option?
> > From an API perspective, I think it is simpler to call a dedicated function.
> > And I don't think a user wants to deal with it when starting the application.
> 
> because a dedicated function that can be called detached from the eal
> state enables an opportunity for accidental and confusing use outside
> the correct context.
> 
> i know the above prescribes not to do this but.
> 
> now you can call set after eal init, but we protect against calling it
> after init by failing. what do we do sensibly with the failure?

It would be a developer mistake which could be fixed during the development
stage very easily. I don't see a problem here.

> > > can you elaborate further on why you need get if you have a one-shot
> > > set? why would the application not know the value if you can only ever
> > > call it once before init?
> > 
> > The "get" function is used in this patch by test and qede driver.
> > The application could use it as well, especially to query the default value.
> 
> this seems incoherent to me, why does the application not know if it has
> called set or not? if it called set it knows what the value is, if it didn't
> call set it knows what the default is.

No, the application doesn't know the default; it is an internal value.

> anyway, the use case is valid and i would like to see the ability to
> change it dynamically. i'd prefer not to see an api like this be introduced
> as prescribed, but that's for you folks to decide.
> 
> anyway, i own a lot of apis that operate just like the proposed one and
> they're a great source of support overhead. i prefer not to rely on
> documenting a contract when i can enforce the contract and implicit state
> machine mechanically with the api instead.
> 
> fwiw a nicer pattern for doing this kind of framework-influencing config
> might look something like this.
> 
> struct eal_config config;
> 
> eal_config_init(&config); // defaults are set entire state made valid
> eal_config_set_max_memzone(&config, 1024); // default is overridden
> 
> rte_eal_init(&config);

In general, we discovered that functions doing too much are bad
for usability and for ABI stability.
In the function eal_config_init() that you propose,
any change in the struct eal_config will be an ABI breakage.
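
For reference, a minimal usage sketch of the API as proposed in this RFC
(signatures taken from the commit message above; they may still change):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_memzone.h>

int
main(int argc, char **argv)
{
	/* Must be called before rte_eal_init(); otherwise the default applies. */
	rte_memzone_max_set(4096);

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Applications and drivers (e.g. qede) can query the effective value. */
	printf("max memzones: %d\n", (int)rte_memzone_max_get());
	return 0;
}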



^ permalink raw reply	[relevance 4%]

* Re: [PATCH] eventdev: fix alignment padding
  2023-04-18 11:06  4% ` Morten Brørup
@ 2023-04-18 12:40  3%   ` Mattias Rönnblom
  0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2023-04-18 12:40 UTC (permalink / raw)
  To: Morten Brørup, Sivaprasad Tummala, jerinj; +Cc: dev

On 2023-04-18 13:06, Morten Brørup wrote:
>> From: Sivaprasad Tummala [mailto:sivaprasad.tummala@amd.com]
>> Sent: Tuesday, 18 April 2023 12.46
>>
>> fixed the padding required to align to cacheline size.
>>
>> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
>> Cc: mattias.ronnblom@ericsson.com
>>
>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
>> ---
>>   lib/eventdev/rte_eventdev_core.h | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/lib/eventdev/rte_eventdev_core.h
>> b/lib/eventdev/rte_eventdev_core.h
>> index c328bdbc82..c27a52ccc0 100644
>> --- a/lib/eventdev/rte_eventdev_core.h
>> +++ b/lib/eventdev/rte_eventdev_core.h
>> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
>>   	/**< PMD Tx adapter enqueue same destination function. */
>>   	event_crypto_adapter_enqueue_t ca_enqueue;
>>   	/**< PMD Crypto adapter enqueue function. */
>> -	uintptr_t reserved[6];
>> +	uintptr_t reserved[5];
>>   } __rte_cache_aligned;
> 
> This fix changes the size (reduces it by one cache line) of the elements in the public rte_event_fp_ops array, and thus breaks the ABI.
> 
> BTW, the patch it fixes, which was dated November 2021, also broke the ABI.

21.11 has a new ABI version, so that's not an issue.

> 
>>
>>   extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
>> --
>> 2.34.1


^ permalink raw reply	[relevance 3%]

* RE: [PATCH] eventdev: fix alignment padding
  @ 2023-04-18 11:06  4% ` Morten Brørup
  2023-04-18 12:40  3%   ` Mattias Rönnblom
    1 sibling, 1 reply; 200+ results
From: Morten Brørup @ 2023-04-18 11:06 UTC (permalink / raw)
  To: Sivaprasad Tummala, jerinj; +Cc: dev, mattias.ronnblom

> From: Sivaprasad Tummala [mailto:sivaprasad.tummala@amd.com]
> Sent: Tuesday, 18 April 2023 12.46
> 
> fixed the padding required to align to cacheline size.
> 
> Fixes: 54f17843a887 ("eventdev: add port maintenance API")
> Cc: mattias.ronnblom@ericsson.com
> 
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
>  lib/eventdev/rte_eventdev_core.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/eventdev/rte_eventdev_core.h
> b/lib/eventdev/rte_eventdev_core.h
> index c328bdbc82..c27a52ccc0 100644
> --- a/lib/eventdev/rte_eventdev_core.h
> +++ b/lib/eventdev/rte_eventdev_core.h
> @@ -65,7 +65,7 @@ struct rte_event_fp_ops {
>  	/**< PMD Tx adapter enqueue same destination function. */
>  	event_crypto_adapter_enqueue_t ca_enqueue;
>  	/**< PMD Crypto adapter enqueue function. */
> -	uintptr_t reserved[6];
> +	uintptr_t reserved[5];
>  } __rte_cache_aligned;

This fix changes the size (reduces it by one cache line) of the elements in the public rte_event_fp_ops array, and thus breaks the ABI.

BTW, the patch it fixes, which was dated November 2021, also broke the ABI.
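
One way to catch this class of mistake at build time would be a size check next
to the struct definition; a rough sketch (not part of the patch, and the "two
cache lines" expected size is an assumption for 64-bit pointers and 64-byte
cache lines):

#include <assert.h>
#include <rte_eventdev.h>

/* Fail the build, instead of silently changing the ABI, if the reserved[]
 * padding no longer produces the intended struct size.
 */
static_assert(sizeof(struct rte_event_fp_ops) == 2 * RTE_CACHE_LINE_SIZE,
	"unexpected rte_event_fp_ops size - check the reserved[] padding");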

> 
>  extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
> --
> 2.34.1


^ permalink raw reply	[relevance 4%]

* RE: [RFC 0/4] Support VFIO sparse mmap in PCI bus
  2023-04-18  7:46  3% ` David Marchand
  2023-04-18  9:27  0%   ` Xia, Chenbo
@ 2023-04-18  9:33  0%   ` Xia, Chenbo
  1 sibling, 0 replies; 200+ results
From: Xia, Chenbo @ 2023-04-18  9:33 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, skori, Cao, Yahui, Li, Miao

David,

Sorry that I missed one comment...

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, April 18, 2023 3:47 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; skori@marvell.com
> Subject: Re: [RFC 0/4] Support VFIO sparse mmap in PCI bus
> 
> Hello Chenbo,
> 
> On Tue, Apr 18, 2023 at 7:49 AM Chenbo Xia <chenbo.xia@intel.com> wrote:
> >
> > This series introduces a VFIO standard capability, called sparse
> > mmap to PCI bus. In linux kernel, it's defined as
> > VFIO_REGION_INFO_CAP_SPARSE_MMAP. Sparse mmap means instead of
> > mmap whole BAR region into DPDK process, only mmap part of the
> > BAR region after getting sparse mmap information from kernel.
> > For the rest of BAR region that is not mmap-ed, DPDK process
> > can use pread/pwrite system calls to access. Sparse mmap is
> > useful when kernel does not want userspace to mmap whole BAR
> > region, or kernel wants to control over access to specific BAR
> > region. Vendors can choose to enable this feature or not for
> > their devices in their specific kernel modules.
> 
> Sorry, I did not take the time to look into the details.
> Could you summarize what would be the benefit of this series?
> 
> 
> >
> > In this patchset:
> >
> > Patch 1-3 is mainly for introducing BAR access APIs so that
> > driver could use them to access specific BAR using pread/pwrite
> > system calls when part of the BAR is not mmap-able.
> >
> > Patch 4 adds the VFIO sparse mmap support finally. A question
> > is for all sparse mmap regions, should they be mapped to a
> > continuous virtual address region that follows device-specific
> > BAR layout or not. In theory, there could be three options to
> > support this feature.
> >
> > Option 1: Map sparse mmap regions independently
> > ======================================================
> > In this approach, we mmap each sparse mmap region one by one
> > and each region could be located anywhere in process address
> > space. But accessing the mmaped BAR will not be as easy as
> > 'bar_base_address + bar_offset', driver needs to check the
> > sparse mmap information to access specific BAR register.
> >
> > Patch 4 in this patchset adopts this option. Driver API change
> > is introduced in bus_pci_driver.h. Corresponding changes in
> > all drivers are also done and currently I am assuming drivers
> > do not support this feature so they will not check the
> > 'is_sparse' flag but assumes it to be false. Note that it will
> > not break any driver and each vendor can add related logic when
> > they start to support this feature. This is only because I don't
> > want to introduce complexity to drivers that do not want to
> > support this feature.
> >
> > Option 2: Map sparse mmap regions based on device-specific BAR layout
> > ======================================================================
> > In this approach, the sparse mmap regions are mapped to continuous
> > virtual address region that follows device-specific BAR layout.
> > For example, the BAR size is 0x4000 and only 0-0x1000 (sparse mmap
> > region #1) and 0x3000-0x4000 (sparse mmap region #2) could be
> > mmaped. Region #1 will be mapped at 'base_addr' and region #2
> > will be mapped at 'base_addr + 0x3000'. The good thing is if
> > we implement like this, driver can still access all BAR registers
> > using 'bar_base_address + bar_offset' way and we don't need
> > to introduce any driver API change. But the address space
> > range 'base_addr + 0x1000' to 'base_addr + 0x3000' may need to
> > be reserved so it could result in waste of address space or memory
> > (when we use MAP_ANONYMOUS and MAP_PRIVATE flag to reserve this
> > range). Meanwhile, driver needs to know which part of BAR is
> > mmaped (this is possible since the range is defined by vendor's
> > specific kernel module).
> >
> > Option 3: Support both option 1 & 2
> > ===================================
> > We could define a driver flag to let driver choose which way it
> > prefers since either option has its own Pros & Cons.
> >
> > Please share your comments, Thanks!
> >
> >
> > Chenbo Xia (4):
> >   bus/pci: introduce an internal representation of PCI device
> 
> I think this first patch main motivation was to avoid ABI issues.
> Since v22.11, the rte_pci_device object is opaque to applications.
> 
> So, do we still need this patch?

I think it could be good to reduce unnecessary driver APIs.
Hiding this region information could be friendlier to driver developers?

Thanks,
Chenbo

> 
> 
> >   bus/pci: avoid depending on private value in kernel source
> >   bus/pci: introduce helper for MMIO read and write
> >   bus/pci: add VFIO sparse mmap support
> >
> >  drivers/baseband/acc/rte_acc100_pmd.c         |   6 +-
> >  drivers/baseband/acc/rte_vrb_pmd.c            |   6 +-
> >  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |   6 +-
> >  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |   6 +-
> >  drivers/bus/pci/bsd/pci.c                     |  43 +-
> >  drivers/bus/pci/bus_pci_driver.h              |  24 +-
> >  drivers/bus/pci/linux/pci.c                   |  91 +++-
> >  drivers/bus/pci/linux/pci_init.h              |  14 +-
> >  drivers/bus/pci/linux/pci_uio.c               |  34 +-
> >  drivers/bus/pci/linux/pci_vfio.c              | 445 ++++++++++++++----
> >  drivers/bus/pci/pci_common.c                  |  57 ++-
> >  drivers/bus/pci/pci_common_uio.c              |  12 +-
> >  drivers/bus/pci/private.h                     |  25 +-
> >  drivers/bus/pci/rte_bus_pci.h                 |  48 ++
> >  drivers/bus/pci/version.map                   |   3 +
> >  drivers/common/cnxk/roc_dev.c                 |   4 +-
> >  drivers/common/cnxk/roc_dpi.c                 |   2 +-
> >  drivers/common/cnxk/roc_ml.c                  |  22 +-
> >  drivers/common/qat/dev/qat_dev_gen1.c         |   2 +-
> >  drivers/common/qat/dev/qat_dev_gen4.c         |   4 +-
> >  drivers/common/sfc_efx/sfc_efx.c              |   2 +-
> >  drivers/compress/octeontx/otx_zip.c           |   4 +-
> >  drivers/crypto/ccp/ccp_dev.c                  |   4 +-
> >  drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   2 +-
> >  drivers/crypto/nitrox/nitrox_device.c         |   4 +-
> >  drivers/crypto/octeontx/otx_cryptodev_ops.c   |   6 +-
> >  drivers/crypto/virtio/virtio_pci.c            |   6 +-
> >  drivers/dma/cnxk/cnxk_dmadev.c                |   2 +-
> >  drivers/dma/hisilicon/hisi_dmadev.c           |   6 +-
> >  drivers/dma/idxd/idxd_pci.c                   |   4 +-
> >  drivers/dma/ioat/ioat_dmadev.c                |   2 +-
> >  drivers/event/dlb2/pf/dlb2_main.c             |  16 +-
> >  drivers/event/octeontx/ssovf_probe.c          |  38 +-
> >  drivers/event/octeontx/timvf_probe.c          |  18 +-
> >  drivers/event/skeleton/skeleton_eventdev.c    |   2 +-
> >  drivers/mempool/octeontx/octeontx_fpavf.c     |   6 +-
> >  drivers/net/ark/ark_ethdev.c                  |   4 +-
> >  drivers/net/atlantic/atl_ethdev.c             |   2 +-
> >  drivers/net/avp/avp_ethdev.c                  |  20 +-
> >  drivers/net/axgbe/axgbe_ethdev.c              |   4 +-
> >  drivers/net/bnx2x/bnx2x_ethdev.c              |   6 +-
> >  drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
> >  drivers/net/cpfl/cpfl_ethdev.c                |   4 +-
> >  drivers/net/cxgbe/cxgbe_ethdev.c              |   2 +-
> >  drivers/net/cxgbe/cxgbe_main.c                |   2 +-
> >  drivers/net/cxgbe/cxgbevf_ethdev.c            |   2 +-
> >  drivers/net/cxgbe/cxgbevf_main.c              |   2 +-
> >  drivers/net/e1000/em_ethdev.c                 |   4 +-
> >  drivers/net/e1000/igb_ethdev.c                |   4 +-
> >  drivers/net/ena/ena_ethdev.c                  |   4 +-
> >  drivers/net/enetc/enetc_ethdev.c              |   2 +-
> >  drivers/net/enic/enic_main.c                  |   4 +-
> >  drivers/net/fm10k/fm10k_ethdev.c              |   2 +-
> >  drivers/net/gve/gve_ethdev.c                  |   4 +-
> >  drivers/net/hinic/base/hinic_pmd_hwif.c       |  14 +-
> >  drivers/net/hns3/hns3_ethdev.c                |   2 +-
> >  drivers/net/hns3/hns3_ethdev_vf.c             |   2 +-
> >  drivers/net/hns3/hns3_rxtx.c                  |   4 +-
> >  drivers/net/i40e/i40e_ethdev.c                |   2 +-
> >  drivers/net/iavf/iavf_ethdev.c                |   2 +-
> >  drivers/net/ice/ice_dcf.c                     |   2 +-
> >  drivers/net/ice/ice_ethdev.c                  |   2 +-
> >  drivers/net/idpf/idpf_ethdev.c                |   4 +-
> >  drivers/net/igc/igc_ethdev.c                  |   2 +-
> >  drivers/net/ionic/ionic_dev_pci.c             |   2 +-
> >  drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
> >  drivers/net/liquidio/lio_ethdev.c             |   4 +-
> >  drivers/net/nfp/nfp_ethdev.c                  |   2 +-
> >  drivers/net/nfp/nfp_ethdev_vf.c               |   6 +-
> >  drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c    |   4 +-
> >  drivers/net/ngbe/ngbe_ethdev.c                |   2 +-
> >  drivers/net/octeon_ep/otx_ep_ethdev.c         |   2 +-
> >  drivers/net/octeontx/base/octeontx_pkivf.c    |   6 +-
> >  drivers/net/octeontx/base/octeontx_pkovf.c    |  12 +-
> >  drivers/net/qede/qede_main.c                  |   6 +-
> >  drivers/net/sfc/sfc.c                         |   2 +-
> >  drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
> >  drivers/net/txgbe/txgbe_ethdev.c              |   2 +-
> >  drivers/net/txgbe/txgbe_ethdev_vf.c           |   2 +-
> >  drivers/net/virtio/virtio_pci.c               |   6 +-
> >  drivers/net/vmxnet3/vmxnet3_ethdev.c          |   4 +-
> >  drivers/raw/cnxk_bphy/cnxk_bphy.c             |  10 +-
> >  drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c         |   6 +-
> >  drivers/raw/ifpga/afu_pmd_n3000.c             |   4 +-
> >  drivers/raw/ifpga/ifpga_rawdev.c              |   6 +-
> >  drivers/raw/ntb/ntb_hw_intel.c                |   8 +-
> >  drivers/vdpa/ifc/ifcvf_vdpa.c                 |   6 +-
> >  drivers/vdpa/sfc/sfc_vdpa_hw.c                |   2 +-
> >  drivers/vdpa/sfc/sfc_vdpa_ops.c               |   2 +-
> >  lib/eal/include/rte_vfio.h                    |   1 -
> >  90 files changed, 853 insertions(+), 352 deletions(-)
> 
> 
> --
> David Marchand


^ permalink raw reply	[relevance 0%]

* RE: [RFC 0/4] Support VFIO sparse mmap in PCI bus
  2023-04-18  7:46  3% ` David Marchand
@ 2023-04-18  9:27  0%   ` Xia, Chenbo
  2023-04-18  9:33  0%   ` Xia, Chenbo
  1 sibling, 0 replies; 200+ results
From: Xia, Chenbo @ 2023-04-18  9:27 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, skori, Cao, Yahui, Li, Miao

Hi David,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, April 18, 2023 3:47 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; skori@marvell.com
> Subject: Re: [RFC 0/4] Support VFIO sparse mmap in PCI bus
> 
> Hello Chenbo,
> 
> On Tue, Apr 18, 2023 at 7:49 AM Chenbo Xia <chenbo.xia@intel.com> wrote:
> >
> > This series introduces a VFIO standard capability, called sparse
> > mmap to PCI bus. In linux kernel, it's defined as
> > VFIO_REGION_INFO_CAP_SPARSE_MMAP. Sparse mmap means instead of
> > mmap whole BAR region into DPDK process, only mmap part of the
> > BAR region after getting sparse mmap information from kernel.
> > For the rest of BAR region that is not mmap-ed, DPDK process
> > can use pread/pwrite system calls to access. Sparse mmap is
> > useful when kernel does not want userspace to mmap whole BAR
> > region, or kernel wants to control over access to specific BAR
> > region. Vendors can choose to enable this feature or not for
> > their devices in their specific kernel modules.
> 
> Sorry, I did not take the time to look into the details.
> Could you summarize what would be the benefit of this series?

The benefit could be different for different vendors. There was one discussion:
http://inbox.dpdk.org/dev/CO6PR18MB386016A2634AF375F5B4BA8CB4899@CO6PR18MB3860.namprd18.prod.outlook.com/

The problem above is that some devices have a very large BAR, and we don't want
DPDK to map the whole BAR.

For Intel devices, one benefit is that we want our kernel module to control
access to a specific BAR region, so we will make the DPDK process unable to mmap
that region. (Because after mmap, the kernel will not know if userspace is
accessing the device BAR.)

So that's why I summarize it as 'Sparse mmap is useful when kernel does not want
userspace to mmap whole BAR region, or kernel wants to control over access to
specific BAR region'. There could be more usages for other vendors that I have not realized.
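
To illustrate the access model from the cover letter (mmap for the sparse
regions, pread/pwrite for the rest), a very rough driver-side sketch; the
helper and field names here are made up and are not the API proposed in the
patches:

#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical per-BAR state a driver might keep. */
struct bar_access_sketch {
	void *mmap_base;     /* non-NULL only where the region is mmap-ed */
	int vfio_dev_fd;     /* VFIO device fd for the pread()/pwrite() fallback */
	uint64_t region_off; /* offset of this BAR region within the device fd */
};

static uint32_t
bar_read32_sketch(const struct bar_access_sketch *b, uint64_t reg)
{
	uint32_t val = 0;

	if (b->mmap_base != NULL) /* register falls inside a mmap-ed area */
		return *(volatile uint32_t *)((uintptr_t)b->mmap_base + reg);

	/* Not mmap-able: go through the kernel instead. */
	if (pread(b->vfio_dev_fd, &val, sizeof(val),
		  (off_t)(b->region_off + reg)) != (ssize_t)sizeof(val))
		val = 0; /* error handling elided in this sketch */
	return val;
}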

Thanks,
Chenbo

> 
> 
> >
> > In this patchset:
> >
> > Patch 1-3 is mainly for introducing BAR access APIs so that
> > driver could use them to access specific BAR using pread/pwrite
> > system calls when part of the BAR is not mmap-able.
> >
> > Patch 4 adds the VFIO sparse mmap support finally. A question
> > is for all sparse mmap regions, should they be mapped to a
> > continuous virtual address region that follows device-specific
> > BAR layout or not. In theory, there could be three options to
> > support this feature.
> >
> > Option 1: Map sparse mmap regions independently
> > ======================================================
> > In this approach, we mmap each sparse mmap region one by one
> > and each region could be located anywhere in process address
> > space. But accessing the mmaped BAR will not be as easy as
> > 'bar_base_address + bar_offset', driver needs to check the
> > sparse mmap information to access specific BAR register.
> >
> > Patch 4 in this patchset adopts this option. Driver API change
> > is introduced in bus_pci_driver.h. Corresponding changes in
> > all drivers are also done and currently I am assuming drivers
> > do not support this feature so they will not check the
> > 'is_sparse' flag but assumes it to be false. Note that it will
> > not break any driver and each vendor can add related logic when
> > they start to support this feature. This is only because I don't
> > want to introduce complexity to drivers that do not want to
> > support this feature.
> >
> > Option 2: Map sparse mmap regions based on device-specific BAR layout
> > ======================================================================
> > In this approach, the sparse mmap regions are mapped to continuous
> > virtual address region that follows device-specific BAR layout.
> > For example, the BAR size is 0x4000 and only 0-0x1000 (sparse mmap
> > region #1) and 0x3000-0x4000 (sparse mmap region #2) could be
> > mmaped. Region #1 will be mapped at 'base_addr' and region #2
> > will be mapped at 'base_addr + 0x3000'. The good thing is if
> > we implement like this, driver can still access all BAR registers
> > using 'bar_base_address + bar_offset' way and we don't need
> > to introduce any driver API change. But the address space
> > range 'base_addr + 0x1000' to 'base_addr + 0x3000' may need to
> > be reserved so it could result in waste of address space or memory
> > (when we use MAP_ANONYMOUS and MAP_PRIVATE flag to reserve this
> > range). Meanwhile, driver needs to know which part of BAR is
> > mmaped (this is possible since the range is defined by vendor's
> > specific kernel module).
> >
> > Option 3: Support both option 1 & 2
> > ===================================
> > We could define a driver flag to let driver choose which way it
> > perfers since either option has its own Pros & Cons.
> >
> > Please share your comments, Thanks!
> >
> >
> > Chenbo Xia (4):
> >   bus/pci: introduce an internal representation of PCI device
> 
> I think this first patch main motivation was to avoid ABI issues.
> Since v22.11, the rte_pci_device object is opaque to applications.
> 
> So, do we still need this patch?
> 
> 
> >   bus/pci: avoid depending on private value in kernel source
> >   bus/pci: introduce helper for MMIO read and write
> >   bus/pci: add VFIO sparse mmap support
> >
> >  drivers/baseband/acc/rte_acc100_pmd.c         |   6 +-
> >  drivers/baseband/acc/rte_vrb_pmd.c            |   6 +-
> >  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |   6 +-
> >  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |   6 +-
> >  drivers/bus/pci/bsd/pci.c                     |  43 +-
> >  drivers/bus/pci/bus_pci_driver.h              |  24 +-
> >  drivers/bus/pci/linux/pci.c                   |  91 +++-
> >  drivers/bus/pci/linux/pci_init.h              |  14 +-
> >  drivers/bus/pci/linux/pci_uio.c               |  34 +-
> >  drivers/bus/pci/linux/pci_vfio.c              | 445 ++++++++++++++----
> >  drivers/bus/pci/pci_common.c                  |  57 ++-
> >  drivers/bus/pci/pci_common_uio.c              |  12 +-
> >  drivers/bus/pci/private.h                     |  25 +-
> >  drivers/bus/pci/rte_bus_pci.h                 |  48 ++
> >  drivers/bus/pci/version.map                   |   3 +
> >  drivers/common/cnxk/roc_dev.c                 |   4 +-
> >  drivers/common/cnxk/roc_dpi.c                 |   2 +-
> >  drivers/common/cnxk/roc_ml.c                  |  22 +-
> >  drivers/common/qat/dev/qat_dev_gen1.c         |   2 +-
> >  drivers/common/qat/dev/qat_dev_gen4.c         |   4 +-
> >  drivers/common/sfc_efx/sfc_efx.c              |   2 +-
> >  drivers/compress/octeontx/otx_zip.c           |   4 +-
> >  drivers/crypto/ccp/ccp_dev.c                  |   4 +-
> >  drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   2 +-
> >  drivers/crypto/nitrox/nitrox_device.c         |   4 +-
> >  drivers/crypto/octeontx/otx_cryptodev_ops.c   |   6 +-
> >  drivers/crypto/virtio/virtio_pci.c            |   6 +-
> >  drivers/dma/cnxk/cnxk_dmadev.c                |   2 +-
> >  drivers/dma/hisilicon/hisi_dmadev.c           |   6 +-
> >  drivers/dma/idxd/idxd_pci.c                   |   4 +-
> >  drivers/dma/ioat/ioat_dmadev.c                |   2 +-
> >  drivers/event/dlb2/pf/dlb2_main.c             |  16 +-
> >  drivers/event/octeontx/ssovf_probe.c          |  38 +-
> >  drivers/event/octeontx/timvf_probe.c          |  18 +-
> >  drivers/event/skeleton/skeleton_eventdev.c    |   2 +-
> >  drivers/mempool/octeontx/octeontx_fpavf.c     |   6 +-
> >  drivers/net/ark/ark_ethdev.c                  |   4 +-
> >  drivers/net/atlantic/atl_ethdev.c             |   2 +-
> >  drivers/net/avp/avp_ethdev.c                  |  20 +-
> >  drivers/net/axgbe/axgbe_ethdev.c              |   4 +-
> >  drivers/net/bnx2x/bnx2x_ethdev.c              |   6 +-
> >  drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
> >  drivers/net/cpfl/cpfl_ethdev.c                |   4 +-
> >  drivers/net/cxgbe/cxgbe_ethdev.c              |   2 +-
> >  drivers/net/cxgbe/cxgbe_main.c                |   2 +-
> >  drivers/net/cxgbe/cxgbevf_ethdev.c            |   2 +-
> >  drivers/net/cxgbe/cxgbevf_main.c              |   2 +-
> >  drivers/net/e1000/em_ethdev.c                 |   4 +-
> >  drivers/net/e1000/igb_ethdev.c                |   4 +-
> >  drivers/net/ena/ena_ethdev.c                  |   4 +-
> >  drivers/net/enetc/enetc_ethdev.c              |   2 +-
> >  drivers/net/enic/enic_main.c                  |   4 +-
> >  drivers/net/fm10k/fm10k_ethdev.c              |   2 +-
> >  drivers/net/gve/gve_ethdev.c                  |   4 +-
> >  drivers/net/hinic/base/hinic_pmd_hwif.c       |  14 +-
> >  drivers/net/hns3/hns3_ethdev.c                |   2 +-
> >  drivers/net/hns3/hns3_ethdev_vf.c             |   2 +-
> >  drivers/net/hns3/hns3_rxtx.c                  |   4 +-
> >  drivers/net/i40e/i40e_ethdev.c                |   2 +-
> >  drivers/net/iavf/iavf_ethdev.c                |   2 +-
> >  drivers/net/ice/ice_dcf.c                     |   2 +-
> >  drivers/net/ice/ice_ethdev.c                  |   2 +-
> >  drivers/net/idpf/idpf_ethdev.c                |   4 +-
> >  drivers/net/igc/igc_ethdev.c                  |   2 +-
> >  drivers/net/ionic/ionic_dev_pci.c             |   2 +-
> >  drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
> >  drivers/net/liquidio/lio_ethdev.c             |   4 +-
> >  drivers/net/nfp/nfp_ethdev.c                  |   2 +-
> >  drivers/net/nfp/nfp_ethdev_vf.c               |   6 +-
> >  drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c    |   4 +-
> >  drivers/net/ngbe/ngbe_ethdev.c                |   2 +-
> >  drivers/net/octeon_ep/otx_ep_ethdev.c         |   2 +-
> >  drivers/net/octeontx/base/octeontx_pkivf.c    |   6 +-
> >  drivers/net/octeontx/base/octeontx_pkovf.c    |  12 +-
> >  drivers/net/qede/qede_main.c                  |   6 +-
> >  drivers/net/sfc/sfc.c                         |   2 +-
> >  drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
> >  drivers/net/txgbe/txgbe_ethdev.c              |   2 +-
> >  drivers/net/txgbe/txgbe_ethdev_vf.c           |   2 +-
> >  drivers/net/virtio/virtio_pci.c               |   6 +-
> >  drivers/net/vmxnet3/vmxnet3_ethdev.c          |   4 +-
> >  drivers/raw/cnxk_bphy/cnxk_bphy.c             |  10 +-
> >  drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c         |   6 +-
> >  drivers/raw/ifpga/afu_pmd_n3000.c             |   4 +-
> >  drivers/raw/ifpga/ifpga_rawdev.c              |   6 +-
> >  drivers/raw/ntb/ntb_hw_intel.c                |   8 +-
> >  drivers/vdpa/ifc/ifcvf_vdpa.c                 |   6 +-
> >  drivers/vdpa/sfc/sfc_vdpa_hw.c                |   2 +-
> >  drivers/vdpa/sfc/sfc_vdpa_ops.c               |   2 +-
> >  lib/eal/include/rte_vfio.h                    |   1 -
> >  90 files changed, 853 insertions(+), 352 deletions(-)
> 
> 
> --
> David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
  2023-04-18  8:52  3%         ` Ferruh Yigit
@ 2023-04-18  9:22  3%           ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-04-18  9:22 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Sivaprasad Tummala, david.hunt, dev, david.marchand, Thomas Monjalon

On Tue, Apr 18, 2023 at 09:52:49AM +0100, Ferruh Yigit wrote:
> On 4/18/2023 9:25 AM, Sivaprasad Tummala wrote:
> > A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
> > DPDK 23.07 release to support monitorx instruction on EPYC processors.
> > This results in ABI breakage for legacy apps.
> > 
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
> >  doc/guides/rel_notes/deprecation.rst | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index dcc1ca1696..831713983f 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -163,3 +163,6 @@ Deprecation Notices
> >    The new port library API (functions rte_swx_port_*)
> >    will gradually transition from experimental to stable status
> >    starting with DPDK 23.07 release.
> > +
> > +* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
> > +  ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
> 
> 
> OK to add new CPU flag,
> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
> 
> 
> But @David, @Bruce, is it OK to break the ABI whenever a new CPU flag is
> added? Should we hide CPU flags better?
> 
> Or another option could be to drop 'RTE_CPUFLAG_NUMFLAGS' and allow
> appending new flags to the end, although this may lead the enum to become
> messier over time.

+1 to drop the NUMFLAGS value. We should not break the ABI each time we need a
new flag.

^ permalink raw reply	[relevance 3%]

* Re: [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
  2023-04-18  8:25  3%       ` [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
@ 2023-04-18  8:52  3%         ` Ferruh Yigit
  2023-04-18  9:22  3%           ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-04-18  8:52 UTC (permalink / raw)
  To: Sivaprasad Tummala, david.hunt
  Cc: dev, david.marchand, Bruce Richardson, Thomas Monjalon

On 4/18/2023 9:25 AM, Sivaprasad Tummala wrote:
> A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
> DPDK 23.07 release to support monitorx instruction on EPYC processors.
> This results in ABI breakage for legacy apps.
> 
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index dcc1ca1696..831713983f 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -163,3 +163,6 @@ Deprecation Notices
>    The new port library API (functions rte_swx_port_*)
>    will gradually transition from experimental to stable status
>    starting with DPDK 23.07 release.
> +
> +* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
> +  ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.


OK to add new CPU flag,
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>


But @David, @Bruce, is it OK to break the ABI whenever a new CPU flag is
added? Should we hide CPU flags better?

Or another option could be to drop 'RTE_CPUFLAG_NUMFLAGS' and allow
appending new flags to the end, although this may lead the enum to become
messier over time.
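
For illustration, dropping the sentinel would mean something like this
(purely a sketch, not a proposed patch):

enum rte_cpu_flag_t {
	/* ... existing flags, order and values unchanged ... */
	RTE_CPUFLAG_MONITORX,	/* new flags are only ever appended */
	/* no RTE_CPUFLAG_NUMFLAGS terminator; the flag count, if needed,
	 * stays internal to the EAL implementation */
};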

^ permalink raw reply	[relevance 3%]

* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
  2023-04-11 18:05  3%   ` Stephen Hemminger
@ 2023-04-18  8:33  4%     ` Jerin Jacob
  2023-04-24 22:41  3%       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-04-18  8:33 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Nithin Dabilpuram, Thomas Monjalon, Akhil Goyal, jerinj, dev,
	Morten Brørup, techboard

On Tue, Apr 11, 2023 at 11:36 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Tue, 11 Apr 2023 15:34:07 +0530
> Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:
>
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index 4bacf9fcd9..866cd4e8ee 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
> >        */
> >       uint32_t ip_reassembly_en : 1;
> >
> > +     /** Enable out of place processing on inline inbound packets.
> > +      *
> > +      * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> > +      *      inbound SA if supported by driver. PMD need to register mbuf
> > +      *      dynamic field using rte_security_oop_dynfield_register()
> > +      *      and security session creation would fail if dynfield is not
> > +      *      registered successfully.
> > +      * * 0: Disable OOP processing for this session (default).
> > +      */
> > +     uint32_t ingress_oop : 1;
> > +
> >       /** Reserved bit fields for future extension
> >        *
> >        * User should ensure reserved_opts is cleared as it may change in
> > @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
> >        *
> >        * Note: Reduce number of bits in reserved_opts for every new option.
> >        */
> > -     uint32_t reserved_opts : 17;
> > +     uint32_t reserved_opts : 16;
> >  };
>
> NAK
> Let me repeat the reserved bit rant. YAGNI
>
> Reserved space is not usable without ABI breakage unless the existing
> code enforces that reserved space has to be zero.
>
> Just saying "User should ensure reserved_opts is cleared" is not enough.

Yes. I think we need to enforce having _init functions for the
structures which use reserved fields.
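
A rough sketch of the kind of _init helper meant here (the function name is
hypothetical, not an existing DPDK API):

#include <string.h>
#include <rte_security.h>

static inline void
rte_security_ipsec_sa_options_init(struct rte_security_ipsec_sa_options *opts)
{
	/* Start from all-zero so reserved_opts (and any future bits) are
	 * guaranteed to be cleared before the caller sets known options. */
	memset(opts, 0, sizeof(*opts));
}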

On the same note on YAGNI, I am wondering why NOT introduce an
RTE_NEXT_ABI macro kind of scheme to compile out ABI-breaking changes.
By keeping RTE_NEXT_ABI disabled by default and enabling it explicitly if the
user wants it, we avoid waiting one year for any ABI-breaking change.
There are a lot of "fixed appliance" customers (not OS-distribution-driven
customers) who are willing to recompile DPDK for a new feature.
What are we losing with this scheme?




>
>

^ permalink raw reply	[relevance 4%]

* [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
  @ 2023-04-18  8:25  3%       ` Sivaprasad Tummala
  2023-04-18  8:52  3%         ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-04-18  8:25 UTC (permalink / raw)
  To: david.hunt; +Cc: dev, david.marchand, ferruh.yigit

A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
DPDK 23.07 release to support monitorx instruction on EPYC processors.
This results in ABI breakage for legacy apps.

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..831713983f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
   The new port library API (functions rte_swx_port_*)
   will gradually transition from experimental to stable status
   starting with DPDK 23.07 release.
+
+* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
+  ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
-- 
2.34.1


^ permalink raw reply	[relevance 3%]

* Re: [RFC 0/4] Support VFIO sparse mmap in PCI bus
  @ 2023-04-18  7:46  3% ` David Marchand
  2023-04-18  9:27  0%   ` Xia, Chenbo
  2023-04-18  9:33  0%   ` Xia, Chenbo
  0 siblings, 2 replies; 200+ results
From: David Marchand @ 2023-04-18  7:46 UTC (permalink / raw)
  To: Chenbo Xia; +Cc: dev, skori

Hello Chenbo,

On Tue, Apr 18, 2023 at 7:49 AM Chenbo Xia <chenbo.xia@intel.com> wrote:
>
> This series introduces a VFIO standard capability, called sparse
> mmap to PCI bus. In linux kernel, it's defined as
> VFIO_REGION_INFO_CAP_SPARSE_MMAP. Sparse mmap means instead of
> mmap whole BAR region into DPDK process, only mmap part of the
> BAR region after getting sparse mmap information from kernel.
> For the rest of BAR region that is not mmap-ed, DPDK process
> can use pread/pwrite system calls to access. Sparse mmap is
> useful when kernel does not want userspace to mmap whole BAR
> region, or kernel wants to control over access to specific BAR
> region. Vendors can choose to enable this feature or not for
> their devices in their specific kernel modules.

Sorry, I did not take the time to look into the details.
Could you summarize what would be the benefit of this series?


>
> In this patchset:
>
> Patch 1-3 is mainly for introducing BAR access APIs so that
> driver could use them to access specific BAR using pread/pwrite
> system calls when part of the BAR is not mmap-able.
>
> Patch 4 adds the VFIO sparse mmap support finally. A question
> is for all sparse mmap regions, should they be mapped to a
> continuous virtual address region that follows device-specific
> BAR layout or not. In theory, there could be three options to
> support this feature.
>
> Option 1: Map sparse mmap regions independently
> ======================================================
> In this approach, we mmap each sparse mmap region one by one
> and each region could be located anywhere in process address
> space. But accessing the mmaped BAR will not be as easy as
> 'bar_base_address + bar_offset', driver needs to check the
> sparse mmap information to access specific BAR register.
>
> Patch 4 in this patchset adopts this option. Driver API change
> is introduced in bus_pci_driver.h. Corresponding changes in
> all drivers are also done and currently I am assuming drivers
> do not support this feature so they will not check the
> 'is_sparse' flag but assumes it to be false. Note that it will
> not break any driver and each vendor can add related logic when
> they start to support this feature. This is only because I don't
> want to introduce complexity to drivers that do not want to
> support this feature.
>
> Option 2: Map sparse mmap regions based on device-specific BAR layout
> ======================================================================
> In this approach, the sparse mmap regions are mapped to continuous
> virtual address region that follows device-specific BAR layout.
> For example, the BAR size is 0x4000 and only 0-0x1000 (sparse mmap
> region #1) and 0x3000-0x4000 (sparse mmap region #2) could be
> mmaped. Region #1 will be mapped at 'base_addr' and region #2
> will be mapped at 'base_addr + 0x3000'. The good thing is if
> we implement like this, driver can still access all BAR registers
> using 'bar_base_address + bar_offset' way and we don't need
> to introduce any driver API change. But the address space
> range 'base_addr + 0x1000' to 'base_addr + 0x3000' may need to
> be reserved so it could result in waste of address space or memory
> (when we use MAP_ANONYMOUS and MAP_PRIVATE flag to reserve this
> range). Meanwhile, driver needs to know which part of BAR is
> mmaped (this is possible since the range is defined by vendor's
> specific kernel module).
>
> Option 3: Support both option 1 & 2
> ===================================
> We could define a driver flag to let driver choose which way it
> perfers since either option has its own Pros & Cons.
>
> Please share your comments, Thanks!
>
>
> Chenbo Xia (4):
>   bus/pci: introduce an internal representation of PCI device

I think this first patch's main motivation was to avoid ABI issues.
Since v22.11, the rte_pci_device object is opaque to applications.

So, do we still need this patch?


>   bus/pci: avoid depending on private value in kernel source
>   bus/pci: introduce helper for MMIO read and write
>   bus/pci: add VFIO sparse mmap support
>
>  drivers/baseband/acc/rte_acc100_pmd.c         |   6 +-
>  drivers/baseband/acc/rte_vrb_pmd.c            |   6 +-
>  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |   6 +-
>  drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |   6 +-
>  drivers/bus/pci/bsd/pci.c                     |  43 +-
>  drivers/bus/pci/bus_pci_driver.h              |  24 +-
>  drivers/bus/pci/linux/pci.c                   |  91 +++-
>  drivers/bus/pci/linux/pci_init.h              |  14 +-
>  drivers/bus/pci/linux/pci_uio.c               |  34 +-
>  drivers/bus/pci/linux/pci_vfio.c              | 445 ++++++++++++++----
>  drivers/bus/pci/pci_common.c                  |  57 ++-
>  drivers/bus/pci/pci_common_uio.c              |  12 +-
>  drivers/bus/pci/private.h                     |  25 +-
>  drivers/bus/pci/rte_bus_pci.h                 |  48 ++
>  drivers/bus/pci/version.map                   |   3 +
>  drivers/common/cnxk/roc_dev.c                 |   4 +-
>  drivers/common/cnxk/roc_dpi.c                 |   2 +-
>  drivers/common/cnxk/roc_ml.c                  |  22 +-
>  drivers/common/qat/dev/qat_dev_gen1.c         |   2 +-
>  drivers/common/qat/dev/qat_dev_gen4.c         |   4 +-
>  drivers/common/sfc_efx/sfc_efx.c              |   2 +-
>  drivers/compress/octeontx/otx_zip.c           |   4 +-
>  drivers/crypto/ccp/ccp_dev.c                  |   4 +-
>  drivers/crypto/cnxk/cnxk_cryptodev_ops.c      |   2 +-
>  drivers/crypto/nitrox/nitrox_device.c         |   4 +-
>  drivers/crypto/octeontx/otx_cryptodev_ops.c   |   6 +-
>  drivers/crypto/virtio/virtio_pci.c            |   6 +-
>  drivers/dma/cnxk/cnxk_dmadev.c                |   2 +-
>  drivers/dma/hisilicon/hisi_dmadev.c           |   6 +-
>  drivers/dma/idxd/idxd_pci.c                   |   4 +-
>  drivers/dma/ioat/ioat_dmadev.c                |   2 +-
>  drivers/event/dlb2/pf/dlb2_main.c             |  16 +-
>  drivers/event/octeontx/ssovf_probe.c          |  38 +-
>  drivers/event/octeontx/timvf_probe.c          |  18 +-
>  drivers/event/skeleton/skeleton_eventdev.c    |   2 +-
>  drivers/mempool/octeontx/octeontx_fpavf.c     |   6 +-
>  drivers/net/ark/ark_ethdev.c                  |   4 +-
>  drivers/net/atlantic/atl_ethdev.c             |   2 +-
>  drivers/net/avp/avp_ethdev.c                  |  20 +-
>  drivers/net/axgbe/axgbe_ethdev.c              |   4 +-
>  drivers/net/bnx2x/bnx2x_ethdev.c              |   6 +-
>  drivers/net/bnxt/bnxt_ethdev.c                |   8 +-
>  drivers/net/cpfl/cpfl_ethdev.c                |   4 +-
>  drivers/net/cxgbe/cxgbe_ethdev.c              |   2 +-
>  drivers/net/cxgbe/cxgbe_main.c                |   2 +-
>  drivers/net/cxgbe/cxgbevf_ethdev.c            |   2 +-
>  drivers/net/cxgbe/cxgbevf_main.c              |   2 +-
>  drivers/net/e1000/em_ethdev.c                 |   4 +-
>  drivers/net/e1000/igb_ethdev.c                |   4 +-
>  drivers/net/ena/ena_ethdev.c                  |   4 +-
>  drivers/net/enetc/enetc_ethdev.c              |   2 +-
>  drivers/net/enic/enic_main.c                  |   4 +-
>  drivers/net/fm10k/fm10k_ethdev.c              |   2 +-
>  drivers/net/gve/gve_ethdev.c                  |   4 +-
>  drivers/net/hinic/base/hinic_pmd_hwif.c       |  14 +-
>  drivers/net/hns3/hns3_ethdev.c                |   2 +-
>  drivers/net/hns3/hns3_ethdev_vf.c             |   2 +-
>  drivers/net/hns3/hns3_rxtx.c                  |   4 +-
>  drivers/net/i40e/i40e_ethdev.c                |   2 +-
>  drivers/net/iavf/iavf_ethdev.c                |   2 +-
>  drivers/net/ice/ice_dcf.c                     |   2 +-
>  drivers/net/ice/ice_ethdev.c                  |   2 +-
>  drivers/net/idpf/idpf_ethdev.c                |   4 +-
>  drivers/net/igc/igc_ethdev.c                  |   2 +-
>  drivers/net/ionic/ionic_dev_pci.c             |   2 +-
>  drivers/net/ixgbe/ixgbe_ethdev.c              |   4 +-
>  drivers/net/liquidio/lio_ethdev.c             |   4 +-
>  drivers/net/nfp/nfp_ethdev.c                  |   2 +-
>  drivers/net/nfp/nfp_ethdev_vf.c               |   6 +-
>  drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c    |   4 +-
>  drivers/net/ngbe/ngbe_ethdev.c                |   2 +-
>  drivers/net/octeon_ep/otx_ep_ethdev.c         |   2 +-
>  drivers/net/octeontx/base/octeontx_pkivf.c    |   6 +-
>  drivers/net/octeontx/base/octeontx_pkovf.c    |  12 +-
>  drivers/net/qede/qede_main.c                  |   6 +-
>  drivers/net/sfc/sfc.c                         |   2 +-
>  drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
>  drivers/net/txgbe/txgbe_ethdev.c              |   2 +-
>  drivers/net/txgbe/txgbe_ethdev_vf.c           |   2 +-
>  drivers/net/virtio/virtio_pci.c               |   6 +-
>  drivers/net/vmxnet3/vmxnet3_ethdev.c          |   4 +-
>  drivers/raw/cnxk_bphy/cnxk_bphy.c             |  10 +-
>  drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c         |   6 +-
>  drivers/raw/ifpga/afu_pmd_n3000.c             |   4 +-
>  drivers/raw/ifpga/ifpga_rawdev.c              |   6 +-
>  drivers/raw/ntb/ntb_hw_intel.c                |   8 +-
>  drivers/vdpa/ifc/ifcvf_vdpa.c                 |   6 +-
>  drivers/vdpa/sfc/sfc_vdpa_hw.c                |   2 +-
>  drivers/vdpa/sfc/sfc_vdpa_ops.c               |   2 +-
>  lib/eal/include/rte_vfio.h                    |   1 -
>  90 files changed, 853 insertions(+), 352 deletions(-)


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* [PATCH v7 10/14] eal: expand most macros to empty when using MSVC
  @ 2023-04-17 16:10  5%   ` Tyler Retzlaff
  2023-04-17 16:10  3%   ` [PATCH v7 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-17 16:10 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

For now, expand a lot of common rte macros to empty. The catch here is that we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for MSVC, and where not possible provide some alternate macros
to achieve the same outcome.
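
As an example of what such an alternate expansion might eventually look like
(purely illustrative, not part of this patch; MSVC's __declspec(align) has
different placement rules than the GCC attribute, so the real fix needs more
than this):

#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a) __attribute__((__aligned__(a)))
#else
#define __rte_aligned(a) __declspec(align(a))
#endif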

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 +++++
 lib/eal/include/rte_common.h            | 54 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 ++++++++++++
 3 files changed, 82 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..1eff9f6 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(!!(x))
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(!!(x))
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..0c55a23 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -41,6 +41,10 @@
 #define RTE_STD_C11
 #endif
 
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
 /*
  * RTE_TOOLCHAIN_GCC is defined if the target is built with GCC,
  * while a host application (like pmdinfogen) may have another compiler.
@@ -65,7 +69,11 @@
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
 /**
  * Force a structure to be packed
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
 
 /**
  * Macro to mark a type that is not subject to type-based aliasing rules
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
 /**
  * Force symbol to be generated even if it appears to be unused.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
 
 /*********** Macros to eliminate unused variable warnings ********/
 
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +178,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,12 +861,17 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  struct wrapper *w = container_of(x, struct wrapper, c);
  */
 #ifndef container_of
+#ifndef RTE_TOOLCHAIN_MSVC
 #define container_of(ptr, type, member)	__extension__ ({		\
 			const typeof(((type *)0)->member) *_ptr = (ptr); \
 			__rte_unused type *_target_ptr =	\
 				(type *)(ptr);				\
 			(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
 		})
+#else
+#define container_of(ptr, type, member) \
+			((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#endif
 #endif
 
 /** Swap two variables. */
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 5%]

* [PATCH v7 12/14] telemetry: avoid expanding versioned symbol macros on MSVC
    2023-04-17 16:10  5%   ` [PATCH v7 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-17 16:10  3%   ` Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-17 16:10 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.

Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [PATCH v3 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
  2023-04-13 11:53  3% ` [PATCH v2 2/3] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
@ 2023-04-17  4:31  3%   ` Sivaprasad Tummala
    0 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-04-17  4:31 UTC (permalink / raw)
  To: david.hunt; +Cc: dev

A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
DPDK 23.07 release to support monitorx instruction on EPYC processors.
This results in ABI breakage for legacy apps.

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..831713983f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
   The new port library API (functions rte_swx_port_*)
   will gradually transition from experimental to stable status
   starting with DPDK 23.07 release.
+
+* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
+  ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
-- 
2.34.1


^ permalink raw reply	[relevance 3%]

* RE: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
  2023-04-15 20:52  4%           ` Tyler Retzlaff
@ 2023-04-15 22:41  4%             ` Morten Brørup
  0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2023-04-15 22:41 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: bruce.richardson, david.marchand, thomas, konstantin.ananyev, dev

> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Saturday, 15 April 2023 22.52
> 
> On Sat, Apr 15, 2023 at 09:16:21AM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Friday, 14 April 2023 19.02
> > >
> > > On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > > Sent: Thursday, 13 April 2023 23.26
> > > > >
> > > > > For now expand a lot of common rte macros empty. The catch here
> is
> > > we
> > > > > need to test that most of the macros do what they should but at
> the
> > > same
> > > > > time they are blocking work needed to bootstrap of the unit
> tests.
> > > > >
> > > > > Later we will return and provide (where possible) expansions
> that
> > > work
> > > > > correctly for msvc and where not possible provide some alternate
> > > macros
> > > > > to achieve the same outcome.
> > > > >
> > > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> >
> > [...]
> >
> > > > >  /**
> > > > >   * Force alignment
> > > > >   */
> > > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > > >  #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > > > > +#else
> > > > > +#define __rte_aligned(a)
> > > > > +#endif
> > > >
> > > > It should be reviewed that __rte_aligned() is only used for
> > > optimization purposes, and is not required for DPDK to function
> > > properly.
> > >
> > > so to expand on what i have in mind (and explain why i leave it
> expanded
> > > empty for now)
> > >
> > > while msvc has a __declspec for align there is a mismatch between
> > > where gcc and msvc want it placed to control alignment of objects.
> > >
> > > msvc support won't be functional in 23.07 because of atomics. so
> once
> > > we reach the 23.11 cycle (where we can merge c11 changes) it means
> we
> > > can also use standard _Alignas which can accomplish the same thing
> > > but portably.
> >
> > That (C11 standard _Alignas) should be the roadmap for solving the
> alignment requirements.
> >
> > This should be a general principle for DPDK... if the C standard
> offers something, don't reinvent our own. And as a consequence of the
> upgrade to C11, we should deprecate all our own now-obsolete substitutes
> for these.
> >
> > >
> > > full disclosure the catch is i still have to properly locate the
> <thing>
> > > that does the alignment and some small questions about the expansion
> and
> > > use of the existing macro.
> > >
> > > on the subject of DPDK requiring proper alignment, you're right it
> > > is generally for performance but also for pre-c11 atomics.
> > >
> > > one question i have been asking myself is would the community see
> value
> > > in more compile time assertions / testing of the size and alignment
> of
> > > structures and offset of structure fields? we have a few key
> > > RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
> > > comprehensive protection.
> >
> > Absolutely. Catching bugs at build time is much better than any
> alternative!
> 
> that's handy feedback. i am now encouraged to include more compile time
> checks in advance of or along with changes related to structure abi.

Sounds good.

Disclaimer: "Absolutely" was my personal response. But I seriously doubt that anyone in the DPDK community would object to more build time checks. Stability and code quality carries a lot of weight in DPDK community discussions.

With that said, please expect that maintainers might want you to split your patches, so the additional checks are separated from the MSVC changes.

> follow on question, once we do get to use c11 would something like
> _Static_assert be preferable over RTE_BUILD_BUG_ON? structures sensitive
> to layout could be co-located with the asserts right at the point of
> definition. or is there something extra RTE_BUILD_BUG_ON gives us?

People may have different opinions on RTE_BUILD_BUG_ON vs. _Static_assert or static_assert.

Personally, I prefer static_assert/_Static_assert. It also has the advantage that it can be used in the global scope, directly following the structure definitions (like you mention), whereas RTE_BUILD_BUG_ON must be inside a code block (which can probably be worked around by making a dummy static inline function only containing the RTE_BUILD_BUG_ON).

And in the spirit of my proposal of not using home-grown macros as alternatives to what the C standard provides, I think we should deprecate and get rid of RTE_BUILD_BUG_ON in favor of static_assert/_Static_assert introduced by the C11 standard. (My personal opinion, no such principle decision has been made!)

If we want to keep RTE_BUILD_BUG_ON for some reason, we could change its implementation to use static_assert/_Static_assert instead of creating an invalid pointer to make the compilation fail.
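
For illustration, a rough sketch of both ideas (made-up struct, purely hypothetical, not a proposal for the tree):

#include <stdint.h>

struct example_hdr {
	uint32_t id;
	uint32_t len;
};

/* C11: legal at file scope, right next to the definition */
_Static_assert(sizeof(struct example_hdr) == 8,
	"example_hdr layout changed");

/* and RTE_BUILD_BUG_ON could become a thin wrapper, e.g.
 * (callers using it in expression context would need a look): */
#define RTE_BUILD_BUG_ON(condition) \
	_Static_assert(!(condition), "condition is true: " #condition)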

> 
> >
> > > > >  /**
> > > > >   * Force a structure to be packed
> > > > >   */
> > > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > > >  #define __rte_packed __attribute__((__packed__))
> > > > > +#else
> > > > > +#define __rte_packed
> > > > > +#endif
> > > >
> > > > Similar comment as for __rte_aligned(); however, I consider it
> more
> > > likely that structure packing is a functional requirement, and not
> just
> > > used for optimization. Based on my experience, it may be used for
> > > packing network structures; perhaps not in DPDK itself but maybe in
> DPDK
> > > applications.
> > >
> > > so interestingly i've discovered this is kind of a mess and as you
> note
> > > some places we can't just "fix" it for abi compatibility reasons.
> > >
> > > in some instances the packing is being applied to structures where
> it is
> > > essentially a noop. i.e. natural alignment gets you the same thing
> so it
> > > is superfluous.
> > >
> > > in some instances the packing is being applied to structures that
> are
> > > private and it appears to be completely unnecessary e.g. some
> structure
> > > that isn't nested into something else and sizeof() or offsetof()
> fields
> > > don't matter in the context of their use.
> > >
> > > in some instances it is completely necessary usually when type
> punning
> > > buffers containing network framing etc...
> > >
> > > unfortunately the standard doesn't offer me an out here as there is
> an
> > > issue of placement of the pragma/attributes that do the packing.
> > >
> > > for places it isn't needed it, whatever i just expand empty. for
> places
> > > it is superfluous again because msvc has no stable abi (we're not
> > > established yet) again i just expand empty. finally for the places
> where
> > > it is needed i'll probably need to expand conditionally but i think
> the
> > > instances are far fewer than current use.
> >
> > Optimally, we will have a common macro (or other solution) to support
> both GCC/CLANG and MSVC to replace or supplement __rte_packed. However,
> the cost of this may be an API break if we replace __rte_packed.
> >
> > >
> > > >
> > > > The same risk applies to __rte_aligned(), but with lower
> probability.
> > >
> > > so that's the long winded story of why they are both expanded empty
> for
> > > now for msvc. but when the time comes i want to submit patch series
> that
> > > focus on each specifically to generate robust discussion.
> >
> > Sounds like the right path to take.
> >
> > Now, I'm thinking ahead here...
> >
> > We should be prepared to accept a major ABI/API break at one point in
> time, to replace our home-grown macros with C11 standard solutions and
> to fully support MSVC. This is not happening anytime soon, but the
> Techboard should acknowledge that this is going to happen (with an
> unspecified release), so it can be formally announced. The sooner it is
> announced, the more time developers will have to prepare for it.
> 
> so, just to avoid any confusion i want to make it clear that i am not
> planning to submit changes that would change abi as a part of supporting
> msvc (aside from changing to standard atomics which we agreed on).

Thank you for clarifying.

> 
> in general there are some cleanups we could make in the area of code
> maintainability and portability and we may want to discuss the
> advantages or disadvantages of making those changes. but i think those
> changes are a topic unrelated to windows or msvc specifically.

This was the point I was trying to make, when I proposed accepting a major ABI/API break. Sorry about my unclear wording.

If we collect a wish list of breaking changes, I would personally prefer a "big bang" major ABI/API break, rather than a series of incremental API/ABI breaks over multiple DPDK release. In this regard, we could mix both changes driven by the migration to pure C11 (e.g. getting rid of now-obsolete macros, such as RTE_BUILD_BUG_ON, and compiler intrinsics, such as __rte_aligned) and MSVC portability changes (e.g. an improved macro to support structure packing).

> 
> >
> > All the details do not need to be known at the time of the
> announcement; they can be added along the way, based on the discussions
> from your future patches.
> 
> >
> > >
> > > ty

^ permalink raw reply	[relevance 4%]

* Re: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
  2023-04-15  7:16  3%         ` Morten Brørup
@ 2023-04-15 20:52  4%           ` Tyler Retzlaff
  2023-04-15 22:41  4%             ` Morten Brørup
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-15 20:52 UTC (permalink / raw)
  To: Morten Brørup
  Cc: bruce.richardson, david.marchand, thomas, konstantin.ananyev, dev

On Sat, Apr 15, 2023 at 09:16:21AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Friday, 14 April 2023 19.02
> > 
> > On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > Sent: Thursday, 13 April 2023 23.26
> > > >
> > > > For now expand a lot of common rte macros empty. The catch here is
> > we
> > > > need to test that most of the macros do what they should but at the
> > same
> > > > time they are blocking work needed to bootstrap of the unit tests.
> > > >
> > > > Later we will return and provide (where possible) expansions that
> > work
> > > > correctly for msvc and where not possible provide some alternate
> > macros
> > > > to achieve the same outcome.
> > > >
> > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> 
> [...]
> 
> > > >  /**
> > > >   * Force alignment
> > > >   */
> > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > >  #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > > > +#else
> > > > +#define __rte_aligned(a)
> > > > +#endif
> > >
> > > It should be reviewed that __rte_aligned() is only used for
> > optimization purposes, and is not required for DPDK to function
> > properly.
> > 
> > so to expand on what i have in mind (and explain why i leave it expanded
> > empty for now)
> > 
> > while msvc has a __declspec for align there is a mismatch between
> > where gcc and msvc want it placed to control alignment of objects.
> > 
> > msvc support won't be functional in 23.07 because of atomics. so once
> > we reach the 23.11 cycle (where we can merge c11 changes) it means we
> > can also use standard _Alignas which can accomplish the same thing
> > but portably.
> 
> That (C11 standard _Alignas) should be the roadmap for solving the alignment requirements.
> 
> This should be a general principle for DPDK... if the C standard offers something, don't reinvent our own. And as a consequence of the upgrade to C11, we should deprecate all our own now-obsolete substitutes for these.
> 
> > 
> > full disclosure the catch is i still have to properly locate the <thing>
> > that does the alignment and some small questions about the expansion and
> > use of the existing macro.
> > 
> > on the subject of DPDK requiring proper alignment, you're right it
> > is generally for performance but also for pre-c11 atomics.
> > 
> > one question i have been asking myself is would the community see value
> > in more compile time assertions / testing of the size and alignment of
> > structures and offset of structure fields? we have a few key
> > RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
> > comprehensive protection.
> 
> Absolutely. Catching bugs at build time is much better than any alternative!

that's handy feedback. i am now encouraged to include more compile time
checks in advance of or along with changes related to structure abi.
follow on question, once we do get to use c11 would something like
_Static_assert be preferable over RTE_BUILD_BUG_ON? structures sensitive
to layout could be co-located with the asserts right at the point of
definition. or is there something extra RTE_BUILD_BUG_ON gives us?

> 
> > > >  /**
> > > >   * Force a structure to be packed
> > > >   */
> > > > +#ifndef RTE_TOOLCHAIN_MSVC
> > > >  #define __rte_packed __attribute__((__packed__))
> > > > +#else
> > > > +#define __rte_packed
> > > > +#endif
> > >
> > > Similar comment as for __rte_aligned(); however, I consider it more
> > likely that structure packing is a functional requirement, and not just
> > used for optimization. Based on my experience, it may be used for
> > packing network structures; perhaps not in DPDK itself but maybe in DPDK
> > applications.
> > 
> > so interestingly i've discovered this is kind of a mess and as you note
> > some places we can't just "fix" it for abi compatibility reasons.
> > 
> > in some instances the packing is being applied to structures where it is
> > essentially a noop. i.e. natural alignment gets you the same thing so it
> > is superfluous.
> > 
> > in some instances the packing is being applied to structures that are
> > private and it appears to be completely unnecessary e.g. some structure
> > that isn't nested into something else and sizeof() or offsetof() fields
> > don't matter in the context of their use.
> > 
> > in some instances it is completely necessary usually when type punning
> > buffers containing network framing etc...
> > 
> > unfortunately the standard doesn't offer me an out here as there is an
> > issue of placement of the pragma/attributes that do the packing.
> > 
> > for places it isn't needed it, whatever i just expand empty. for places
> > it is superfluous again because msvc has no stable abi (we're not
> > established yet) again i just expand empty. finally for the places where
> > it is needed i'll probably need to expand conditionally but i think the
> > instances are far fewer than current use.
> 
> Optimally, we will have a common macro (or other solution) to support both GCC/CLANG and MSVC to replace or supplement __rte_packed. However, the cost of this may be an API break if we replace __rte_packed.
> 
> > 
> > >
> > > The same risk applies to __rte_aligned(), but with lower probability.
> > 
> > so that's the long winded story of why they are both expanded empty for
> > now for msvc. but when the time comes i want to submit patch series that
> > focus on each specifically to generate robust discussion.
> 
> Sounds like the right path to take.
> 
> Now, I'm thinking ahead here...
> 
> We should be prepared to accept a major ABI/API break at one point in time, to replace our home-grown macros with C11 standard solutions and to fully support MSVC. This is not happening anytime soon, but the Techboard should acknowledge that this is going to happen (with an unspecified release), so it can be formally announced. The sooner it is announced, the more time developers will have to prepare for it.

so, just to avoid any confusion i want to make it clear that i am not
planning to submit changes that would change abi as a part of supporting
msvc (aside from changing to standard atomics which we agreed on).

in general there are some cleanups we could make in the area of code
maintainability and portability and we may want to discuss the
advantages or disadvantages of making those changes. but i think those
changes are a topic unrelated to windows or msvc specifically.

> 
> All the details do not need to be known at the time of the announcement; they can be added along the way, based on the discussions from your future patches.

> 
> > 
> > ty

^ permalink raw reply	[relevance 4%]

* RE: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
  2023-04-14 17:02  4%       ` Tyler Retzlaff
@ 2023-04-15  7:16  3%         ` Morten Brørup
  2023-04-15 20:52  4%           ` Tyler Retzlaff
  0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-04-15  7:16 UTC (permalink / raw)
  To: Tyler Retzlaff, bruce.richardson, david.marchand, thomas,
	konstantin.ananyev
  Cc: dev

> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 14 April 2023 19.02
> 
> On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Thursday, 13 April 2023 23.26
> > >
> > > For now expand a lot of common rte macros empty. The catch here is
> we
> > > need to test that most of the macros do what they should but at the
> same
> > > time they are blocking work needed to bootstrap of the unit tests.
> > >
> > > Later we will return and provide (where possible) expansions that
> work
> > > correctly for msvc and where not possible provide some alternate
> macros
> > > to achieve the same outcome.
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>

[...]

> > >  /**
> > >   * Force alignment
> > >   */
> > > +#ifndef RTE_TOOLCHAIN_MSVC
> > >  #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > > +#else
> > > +#define __rte_aligned(a)
> > > +#endif
> >
> > It should be reviewed that __rte_aligned() is only used for
> optimization purposes, and is not required for DPDK to function
> properly.
> 
> so to expand on what i have in mind (and explain why i leave it expanded
> empty for now)
> 
> while msvc has a __declspec for align there is a mismatch between
> where gcc and msvc want it placed to control alignment of objects.
> 
> msvc support won't be functional in 23.07 because of atomics. so once
> we reach the 23.11 cycle (where we can merge c11 changes) it means we
> can also use standard _Alignas which can accomplish the same thing
> but portably.

That (C11 standard _Alignas) should be the roadmap for solving the alignment requirements.

This should be a general principle for DPDK... if the C standard offers something, don't reinvent our own. And as a consequence of the upgrade to C11, we should deprecate all our own now-obsolete substitutes for these.

> 
> full disclosure the catch is i still have to properly locate the <thing>
> that does the alignment and some small questions about the expansion and
> use of the existing macro.
> 
> on the subject of DPDK requiring proper alignment, you're right it
> is generally for performance but also for pre-c11 atomics.
> 
> one question i have been asking myself is would the community see value
> in more compile time assertions / testing of the size and alignment of
> structures and offset of structure fields? we have a few key
> RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
> comprehensive protection.

Absolutely. Catching bugs at build time is much better than any alternative!

> > >  /**
> > >   * Force a structure to be packed
> > >   */
> > > +#ifndef RTE_TOOLCHAIN_MSVC
> > >  #define __rte_packed __attribute__((__packed__))
> > > +#else
> > > +#define __rte_packed
> > > +#endif
> >
> > Similar comment as for __rte_aligned(); however, I consider it more
> likely that structure packing is a functional requirement, and not just
> used for optimization. Based on my experience, it may be used for
> packing network structures; perhaps not in DPDK itself but maybe in DPDK
> applications.
> 
> so interestingly i've discovered this is kind of a mess and as you note
> some places we can't just "fix" it for abi compatibility reasons.
> 
> in some instances the packing is being applied to structures where it is
> essentially a noop. i.e. natural alignment gets you the same thing so it
> is superfluous.
> 
> in some instances the packing is being applied to structures that are
> private and it appears to be completely unnecessary e.g. some structure
> that isn't nested into something else and sizeof() or offsetof() fields
> don't matter in the context of their use.
> 
> in some instances it is completely necessary usually when type punning
> buffers containing network framing etc...
> 
> unfortunately the standard doesn't offer me an out here as there is an
> issue of placement of the pragma/attributes that do the packing.
> 
> for places it isn't needed it, whatever i just expand empty. for places
> it is superfluous again because msvc has no stable abi (we're not
> established yet) again i just expand empty. finally for the places where
> it is needed i'll probably need to expand conditionally but i think the
> instances are far fewer than current use.

Optimally, we will have a common macro (or other solution) to support both GCC/CLANG and MSVC to replace or supplement __rte_packed. However, the cost of this may be an API break if we replace __rte_packed.

> 
> >
> > The same risk applies to __rte_aligned(), but with lower probability.
> 
> so that's the long winded story of why they are both expanded empty for
> now for msvc. but when the time comes i want to submit patch series that
> focus on each specifically to generate robust discussion.

Sounds like the right path to take.

Now, I'm thinking ahead here...

We should be prepared to accept a major ABI/API break at one point in time, to replace our home-grown macros with C11 standard solutions and to fully support MSVC. This is not happening anytime soon, but the Techboard should acknowledge that this is going to happen (with an unspecified release), so it can be formally announced. The sooner it is announced, the more time developers will have to prepare for it.

All the details do not need to be known at the time of the announcement; they can be added along the way, based on the discussions from your future patches.

> 
> ty

^ permalink raw reply	[relevance 3%]

* [PATCH v6 11/15] eal: expand most macros to empty when using MSVC
  @ 2023-04-15  1:15  5%   ` Tyler Retzlaff
  2023-04-15  1:15  3%   ` [PATCH v6 13/15] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-15  1:15 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

For now expand a lot of common rte macros to empty. The catch here is
we need to test that most of the macros do what they should, but at the
same time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for msvc and where not possible provide some alternate macros
to achieve the same outcome.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 +++++
 lib/eal/include/rte_common.h            | 54 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 ++++++++++++
 3 files changed, 82 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..1eff9f6 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(!!(x))
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(!!(x))
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..5417f68 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -62,10 +62,18 @@
 		__GNUC_PATCHLEVEL__)
 #endif
 
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
 /**
  * Force a structure to be packed
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
 
 /**
  * Macro to mark a type that is not subject to type-based aliasing rules
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
 /**
  * Force symbol to be generated even if it appears to be unused.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
 
 /*********** Macros to eliminate unused variable warnings ********/
 
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +178,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,12 +861,17 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  struct wrapper *w = container_of(x, struct wrapper, c);
  */
 #ifndef container_of
+#ifndef RTE_TOOLCHAIN_MSVC
 #define container_of(ptr, type, member)	__extension__ ({		\
 			const typeof(((type *)0)->member) *_ptr = (ptr); \
 			__rte_unused type *_target_ptr =	\
 				(type *)(ptr);				\
 			(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
 		})
+#else
+#define container_of(ptr, type, member) \
+			((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#endif
 #endif
 
 /** Swap two variables. */
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 5%]

* [PATCH v6 13/15] telemetry: avoid expanding versioned symbol macros on MSVC
    2023-04-15  1:15  5%   ` [PATCH v6 11/15] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-15  1:15  3%   ` Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-15  1:15 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.

Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
  @ 2023-04-14 17:02  4%       ` Tyler Retzlaff
  2023-04-15  7:16  3%         ` Morten Brørup
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-14 17:02 UTC (permalink / raw)
  To: Morten Brørup
  Cc: dev, bruce.richardson, david.marchand, thomas, konstantin.ananyev

On Fri, Apr 14, 2023 at 08:45:17AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Thursday, 13 April 2023 23.26
> > 
> > For now expand a lot of common rte macros empty. The catch here is we
> > need to test that most of the macros do what they should but at the same
> > time they are blocking work needed to bootstrap of the unit tests.
> > 
> > Later we will return and provide (where possible) expansions that work
> > correctly for msvc and where not possible provide some alternate macros
> > to achieve the same outcome.
> > 
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
> >  lib/eal/include/rte_branch_prediction.h |  8 ++++++
> >  lib/eal/include/rte_common.h            | 45
> > +++++++++++++++++++++++++++++++++
> >  lib/eal/include/rte_compat.h            | 20 +++++++++++++++
> >  3 files changed, 73 insertions(+)
> > 
> > diff --git a/lib/eal/include/rte_branch_prediction.h
> > b/lib/eal/include/rte_branch_prediction.h
> > index 0256a9d..d9a0224 100644
> > --- a/lib/eal/include/rte_branch_prediction.h
> > +++ b/lib/eal/include/rte_branch_prediction.h
> > @@ -25,7 +25,11 @@
> >   *
> >   */
> >  #ifndef likely
> > +#ifndef RTE_TOOLCHAIN_MSVC
> >  #define likely(x)	__builtin_expect(!!(x), 1)
> > +#else
> > +#define likely(x)	(x)
> 
> This must be (!!(x)), because x may be non-Boolean, e.g. likely(n & 0x10), and likely() must return Boolean (0 or 1).

yes, you're right. will fix.
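
(for the record, a tiny made-up example of what the !! buys us:)

#include <rte_branch_prediction.h>

/* with the gcc expansion __builtin_expect(!!(x), 1) this returns 0 or 1;
 * a plain (x) fallback would make it return 0x10 for n = 0x30 */
static inline int
bit4_set(unsigned int n)
{
	return likely(n & 0x10);
}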

> 
> > +#endif
> >  #endif /* likely */
> > 
> >  /**
> > @@ -39,7 +43,11 @@
> >   *
> >   */
> >  #ifndef unlikely
> > +#ifndef RTE_TOOLCHAIN_MSVC
> >  #define unlikely(x)	__builtin_expect(!!(x), 0)
> > +#else
> > +#define unlikely(x)	(x)
> 
> This must also be (!!(x)), for the same reason as above.

ack

> 
> > +#endif
> >  #endif /* unlikely */
> > 
> >  #ifdef __cplusplus
> > diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
> > index 2f464e3..1bdaa2d 100644
> > --- a/lib/eal/include/rte_common.h
> > +++ b/lib/eal/include/rte_common.h
> > @@ -65,7 +65,11 @@
> >  /**
> >   * Force alignment
> >   */
> > +#ifndef RTE_TOOLCHAIN_MSVC
> >  #define __rte_aligned(a) __attribute__((__aligned__(a)))
> > +#else
> > +#define __rte_aligned(a)
> > +#endif
> 
> It should be reviewed that __rte_aligned() is only used for optimization purposes, and is not required for DPDK to function properly.

so to expand on what i have in mind (and explain why i leave it expanded
empty for now)

while msvc has a __declspec for align there is a mismatch between
where gcc and msvc want it placed to control alignment of objects.

msvc support won't be functional in 23.07 because of atomics. so once
we reach the 23.11 cycle (where we can merge c11 changes) it means we
can also use standard _Alignas which can accomplish the same thing
but portably.
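
(a rough sketch of the mismatch i mean - illustrative only, not code
from the tree:)

#include <stdint.h>

#if defined(__GNUC__)
/* gcc/clang take the attribute after the struct body, which is where
 * __rte_aligned() is placed today */
struct aligned64 {
	uint64_t x;
} __attribute__((__aligned__(64)));
#elif defined(_MSC_VER)
/* msvc wants its __declspec before the struct tag instead */
__declspec(align(64)) struct aligned64 {
	uint64_t x;
};
#else
struct aligned64 { uint64_t x; };
#endif

/* C11 _Alignas covers objects/members portably
 * (newer msvc needs /std:c11) */
static _Alignas(64) struct aligned64 obj;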

full disclosure the catch is i still have to properly locate the <thing>
that does the alignment and some small questions about the expansion and
use of the existing macro.

on the subject of DPDK requiring proper alignment, you're right it
is generally for performance but also for pre-c11 atomics.

one question i have been asking myself is would the community see value
in more compile time assertions / testing of the size and alignment of
structures and offset of structure fields? we have a few key
RTE_BUILD_BUG_ON() assertions but i've discovered they don't offer
comprehensive protection.

> 
> > 
> >  #ifdef RTE_ARCH_STRICT_ALIGN
> >  typedef uint64_t unaligned_uint64_t __rte_aligned(1);
> > @@ -80,16 +84,29 @@
> >  /**
> >   * Force a structure to be packed
> >   */
> > +#ifndef RTE_TOOLCHAIN_MSVC
> >  #define __rte_packed __attribute__((__packed__))
> > +#else
> > +#define __rte_packed
> > +#endif
> 
> Similar comment as for __rte_aligned(); however, I consider it more likely that structure packing is a functional requirement, and not just used for optimization. Based on my experience, it may be used for packing network structures; perhaps not in DPDK itself but maybe in DPDK applications.

so interestingly i've discovered this is kind of a mess and as you note
some places we can't just "fix" it for abi compatibility reasons.

in some instances the packing is being applied to structures where it is
essentially a noop. i.e. natural alignment gets you the same thing so it
is superfluous.

in some instances the packing is being applied to structures that are
private and it appears to be completely unnecessary e.g. some structure
that isn't nested into something else and sizeof() or offsetof() fields
don't matter in the context of their use.

in some instances it is completely necessary usually when type punning
buffers containing network framing etc...

unfortunately the standard doesn't offer me an out here as there is an
issue of placement of the pragma/attributes that do the packing.

for places it isn't needed, whatever, i just expand empty. for places
it is superfluous again because msvc has no stable abi (we're not
established yet) again i just expand empty. finally for the places where
it is needed i'll probably need to expand conditionally but i think the
instances are far fewer than current use.
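
(concrete made-up example of the "needed" case, nothing from the tree:)

#include <stdint.h>
#include <rte_common.h>	/* __rte_packed */

/* fictional 3-byte on-wire header: without __rte_packed the uint16_t
 * gets a pad byte in front of it and sizeof() becomes 4, so punning
 * it over a packet buffer reads the wrong offsets */
struct onwire_hdr {
	uint8_t type;
	uint16_t len;
} __rte_packed;

static inline uint16_t
onwire_len(const void *buf)
{
	const struct onwire_hdr *h = buf;
	return h->len;
}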

> 
> The same risk applies to __rte_aligned(), but with lower probability.

so that's the long winded story of why they are both expanded empty for
now for msvc. but when the time comes i want to submit patch series that
focus on each specifically to generate robust discussion.

ty

^ permalink raw reply	[relevance 4%]

* Re: [PATCH] reorder: improve buffer structure layout
  2023-04-14 14:54  3%   ` Bruce Richardson
@ 2023-04-14 15:30  0%     ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-04-14 15:30 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: Volodymyr Fialko, dev, Reshma Pattan, jerinj, anoobj

On Fri, 14 Apr 2023 15:54:13 +0100
Bruce Richardson <bruce.richardson@intel.com> wrote:

> On Fri, Apr 14, 2023 at 07:52:30AM -0700, Stephen Hemminger wrote:
> > On Fri, 14 Apr 2023 10:43:43 +0200
> > Volodymyr Fialko <vfialko@marvell.com> wrote:
> >   
> > > diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
> > > index f55f383700..7418202b04 100644
> > > --- a/lib/reorder/rte_reorder.c
> > > +++ b/lib/reorder/rte_reorder.c
> > > @@ -46,9 +46,10 @@ struct rte_reorder_buffer {
> > >  	char name[RTE_REORDER_NAMESIZE];
> > >  	uint32_t min_seqn;  /**< Lowest seq. number that can be in the buffer */
> > >  	unsigned int memsize; /**< memory area size of reorder buffer */
> > > +	int is_initialized; /**< flag indicates that buffer was initialized */
> > > +
> > >  	struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
> > >  	struct cir_buffer order_buf; /**< buffer used to reorder entries */
> > > -	int is_initialized;
> > >  } __rte_cache_aligned;
> > >  
> > >  static void  
> > 
> > Since this is ABI change it will have to wait for 23.11 release  
> 
> It shouldn't be an ABI change. This struct is defined in a C file, rather
> than a header, so is not exposed to end applications.
> 
> /Bruce

Sorry, Bruce is right. 
You might want to use uint8_t or bool for a simple flag.
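
E.g., something along these lines (untested sketch):

struct rte_reorder_buffer {
	char name[RTE_REORDER_NAMESIZE];
	uint32_t min_seqn;  /**< Lowest seq. number that can be in the buffer */
	unsigned int memsize; /**< memory area size of reorder buffer */
	bool is_initialized; /**< buffer was initialized (needs <stdbool.h>) */

	struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
	struct cir_buffer order_buf; /**< buffer used to reorder entries */
} __rte_cache_aligned;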

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] reorder: improve buffer structure layout
  2023-04-14 14:52  3% ` Stephen Hemminger
@ 2023-04-14 14:54  3%   ` Bruce Richardson
  2023-04-14 15:30  0%     ` Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-04-14 14:54 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Volodymyr Fialko, dev, Reshma Pattan, jerinj, anoobj

On Fri, Apr 14, 2023 at 07:52:30AM -0700, Stephen Hemminger wrote:
> On Fri, 14 Apr 2023 10:43:43 +0200
> Volodymyr Fialko <vfialko@marvell.com> wrote:
> 
> > diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
> > index f55f383700..7418202b04 100644
> > --- a/lib/reorder/rte_reorder.c
> > +++ b/lib/reorder/rte_reorder.c
> > @@ -46,9 +46,10 @@ struct rte_reorder_buffer {
> >  	char name[RTE_REORDER_NAMESIZE];
> >  	uint32_t min_seqn;  /**< Lowest seq. number that can be in the buffer */
> >  	unsigned int memsize; /**< memory area size of reorder buffer */
> > +	int is_initialized; /**< flag indicates that buffer was initialized */
> > +
> >  	struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
> >  	struct cir_buffer order_buf; /**< buffer used to reorder entries */
> > -	int is_initialized;
> >  } __rte_cache_aligned;
> >  
> >  static void
> 
> Since this is ABI change it will have to wait for 23.11 release

It shouldn't be an ABI change. This struct is defined in a C file, rather
than a header, so is not exposed to end applications.

/Bruce

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] reorder: improve buffer structure layout
  @ 2023-04-14 14:52  3% ` Stephen Hemminger
  2023-04-14 14:54  3%   ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-04-14 14:52 UTC (permalink / raw)
  To: Volodymyr Fialko; +Cc: dev, Reshma Pattan, jerinj, anoobj

On Fri, 14 Apr 2023 10:43:43 +0200
Volodymyr Fialko <vfialko@marvell.com> wrote:

> diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
> index f55f383700..7418202b04 100644
> --- a/lib/reorder/rte_reorder.c
> +++ b/lib/reorder/rte_reorder.c
> @@ -46,9 +46,10 @@ struct rte_reorder_buffer {
>  	char name[RTE_REORDER_NAMESIZE];
>  	uint32_t min_seqn;  /**< Lowest seq. number that can be in the buffer */
>  	unsigned int memsize; /**< memory area size of reorder buffer */
> +	int is_initialized; /**< flag indicates that buffer was initialized */
> +
>  	struct cir_buffer ready_buf; /**< temp buffer for dequeued entries */
>  	struct cir_buffer order_buf; /**< buffer used to reorder entries */
> -	int is_initialized;
>  } __rte_cache_aligned;
>  
>  static void

Since this is ABI change it will have to wait for 23.11 release

^ permalink raw reply	[relevance 3%]

* [PATCH v5 13/14] telemetry: avoid expanding versioned symbol macros on MSVC
    2023-04-13 21:26  6%   ` [PATCH v5 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-13 21:26  3%   ` Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-13 21:26 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.

Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [PATCH v5 11/14] eal: expand most macros to empty when using MSVC
  @ 2023-04-13 21:26  6%   ` Tyler Retzlaff
    2023-04-13 21:26  3%   ` [PATCH v5 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
  1 sibling, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-13 21:26 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

For now expand a lot of common rte macros to empty. The catch here is
we need to test that most of the macros do what they should, but at the
same time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for msvc and where not possible provide some alternate macros
to achieve the same outcome.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 ++++++
 lib/eal/include/rte_common.h            | 45 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 +++++++++++++++
 3 files changed, 73 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..d9a0224 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(x)
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(x)
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..1bdaa2d 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +84,29 @@
 /**
  * Force a structure to be packed
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_packed __attribute__((__packed__))
+#else
+#define __rte_packed
+#endif
 
 /**
  * Macro to mark a type that is not subject to type-based aliasing rules
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -110,14 +127,22 @@
 /**
  * Force symbol to be generated even if it appears to be unused.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
 
 /*********** Macros to eliminate unused variable warnings ********/
 
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +166,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +174,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +251,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +280,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +478,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 6%]

* [PATCH v2 2/3] doc: announce new cpu flag added to rte_cpu_flag_t
  @ 2023-04-13 11:53  3% ` Sivaprasad Tummala
  2023-04-17  4:31  3%   ` [PATCH v3 1/4] " Sivaprasad Tummala
  0 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-04-13 11:53 UTC (permalink / raw)
  To: david.hunt; +Cc: dev

A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in
DPDK 23.07 release to support monitorx instruction on Epyc processors.
This results in ABI breakage for legacy apps.

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index dcc1ca1696..65e849616d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -163,3 +163,6 @@ Deprecation Notices
   The new port library API (functions rte_swx_port_*)
   will gradually transition from experimental to stable status
   starting with DPDK 23.07 release.
+
+* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
+  ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on Epyc processors.
-- 
2.34.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
  2023-04-11 20:34  0%       ` Tyler Retzlaff
@ 2023-04-12  8:50  0%         ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-04-12  8:50 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev

On Tue, Apr 11, 2023 at 01:34:14PM -0700, Tyler Retzlaff wrote:
> On Tue, Apr 11, 2023 at 11:24:07AM +0100, Bruce Richardson wrote:
> > On Wed, Apr 05, 2023 at 05:45:19PM -0700, Tyler Retzlaff wrote:
> > > Windows does not support versioned symbols. Fortunately Windows also
> > > doesn't have an exported stable ABI.
> > > 
> > > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_24
> > > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > > functions.
> > > 
> > > Windows does have a way to achieve similar versioning for symbols but it
> > > is not a simple #define so it will be done as a work package later.
> > > 
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > ---
> > >  lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> > >  1 file changed, 16 insertions(+)
> > > 
> > > diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> > > index 2bac2de..284c16e 100644
> > > --- a/lib/telemetry/telemetry_data.c
> > > +++ b/lib/telemetry/telemetry_data.c
> > > @@ -82,8 +82,16 @@
> > >  /* mark the v23 function as the older version, and v24 as the default version */
> > >  VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> > >  BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> > > +#ifndef RTE_TOOLCHAIN_MSVC
> > >  MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> > >  		int64_t x), rte_tel_data_add_array_int_v24);
> > > +#else
> > > +int
> > > +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> > > +{
> > > +	return rte_tel_data_add_array_int_v24(d, x);
> > > +}
> > > +#endif
> > >  
> > 
> > Can't see any general way to do this from the versioning header file, so
> > agree that we need some changes here. Rather than defining a public
> > funcion, we could keep the diff reduced by just using a macro alias here,
> > right? For example:
> > 
> > #ifdef RTE_TOOLCHAIN_MSVC
> > #define rte_tel_data_add_array_int rte_tel_data_add_array_int_v24
> > #else
> > MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> > 		int64_t x), rte_tel_data_add_array_int_v24);
> > #endif
> > 
> > If this is a temporary measure, I'd tend towards the shortest solution that
> > can work. However, no strong opinions, so, either using functions as you
> > have it, or macros:
> 
> so i have to leave it as it is the reason being the version.map ->
> exports.def generation does not handle this. the .def only contains the
> rte_tel_data_add_array_int symbol. if we expand it away to the _v24 name
> the link will fail.
> 

Ah, thanks for clarifying

> let's consume the change as-is for now and i will work on the
> generalized solution when changes are integrated that actually make the
> windows dso/dll functional.
> 

Sure, good for now. Keep my ack on any future versions.
> > 
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[relevance 0%]

* [PATCH v4 13/14] telemetry: avoid expanding versioned symbol macros on MSVC
    2023-04-11 21:12  6%   ` [PATCH v4 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-04-11 21:12  3%   ` Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-11 21:12 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24 as
plain wrapper functions.

Windows does have a way to achieve similar versioning for symbols, but it
is not a simple #define, so it will be done as a work package later.
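
For context, MAP_STATIC_SYMBOL on the GNU toolchains boils down to an
alias attribute, roughly as sketched below (paraphrased from
rte_function_versioning.h, so treat the exact expansion as an
approximation); MSVC has no equivalent attribute, hence the real
forwarding functions in the diff:

/* GNU toolchains only: alias the bare exported name to one version. */
#define MAP_STATIC_SYMBOL(f, p) f __attribute__((alias(RTE_STR(p))))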

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [PATCH v4 11/14] eal: expand most macros to empty when using MSVC
  @ 2023-04-11 21:12  6%   ` Tyler Retzlaff
  2023-04-11 21:12  3%   ` [PATCH v4 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-11 21:12 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

For now expand a lot of common rte macros to empty. The catch here is we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for msvc and, where not possible, provide some alternate macros
to achieve the same outcome.
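
As a sketch of where that later pass could go (illustrative only, not
part of this patch), MSVC does have counterparts for several of these,
although their placement rules differ from the GNU attributes, which is
one reason the expansions are left empty for now:

/* Illustrative only -- not part of this patch.  MSVC counterparts exist
 * for several of these, but e.g. __declspec must precede the declarator,
 * unlike the GNU attributes used throughout DPDK today.
 */
#ifdef RTE_TOOLCHAIN_MSVC
#define __rte_aligned(a)	__declspec(align(a))
#define __rte_always_inline	__forceinline
#define __rte_noreturn		__declspec(noreturn)
#define __rte_deprecated	__declspec(deprecated)
#endif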

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 +++++++
 lib/eal/include/rte_common.h            | 41 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 ++++++++++++++++
 3 files changed, 69 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..d9a0224 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(x)
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(x)
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..dd41315 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -85,11 +89,20 @@
 /**
  * Macro to mark a type that is not subject to type-based aliasing rules
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_may_alias __attribute__((__may_alias__))
+#else
+#define __rte_may_alias
+#endif
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -110,14 +123,22 @@
 /**
  * Force symbol to be generated even if it appears to be unused.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_used __attribute__((used))
+#else
+#define __rte_used
+#endif
 
 /*********** Macros to eliminate unused variable warnings ********/
 
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +162,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +170,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +247,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +276,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +474,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 6%]

* Re: [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
  2023-04-11 10:24  0%     ` Bruce Richardson
@ 2023-04-11 20:34  0%       ` Tyler Retzlaff
  2023-04-12  8:50  0%         ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-11 20:34 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev

On Tue, Apr 11, 2023 at 11:24:07AM +0100, Bruce Richardson wrote:
> On Wed, Apr 05, 2023 at 05:45:19PM -0700, Tyler Retzlaff wrote:
> > Windows does not support versioned symbols. Fortunately Windows also
> > doesn't have an exported stable ABI.
> > 
> > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_24
> > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > functions.
> > 
> > Windows does have a way to achieve similar versioning for symbols but it
> > is not a simple #define so it will be done as a work package later.
> > 
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
> >  lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> > 
> > diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> > index 2bac2de..284c16e 100644
> > --- a/lib/telemetry/telemetry_data.c
> > +++ b/lib/telemetry/telemetry_data.c
> > @@ -82,8 +82,16 @@
> >  /* mark the v23 function as the older version, and v24 as the default version */
> >  VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> >  BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> > +#ifndef RTE_TOOLCHAIN_MSVC
> >  MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> >  		int64_t x), rte_tel_data_add_array_int_v24);
> > +#else
> > +int
> > +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> > +{
> > +	return rte_tel_data_add_array_int_v24(d, x);
> > +}
> > +#endif
> >  
> 
> Can't see any general way to do this from the versioning header file, so
> agree that we need some changes here. Rather than defining a public
> funcion, we could keep the diff reduced by just using a macro alias here,
> right? For example:
> 
> #ifdef RTE_TOOLCHAIN_MSVC
> #define rte_tel_data_add_array_int rte_tel_data_add_array_int_v24
> #else
> MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> 		int64_t x), rte_tel_data_add_array_int_v24);
> #endif
> 
> If this is a temporary measure, I'd tend towards the shortest solution that
> can work. However, no strong opinions, so, either using functions as you
> have it, or macros:

So I have to leave it as it is; the reason being that the version.map ->
exports.def generation does not handle this. The .def only contains the
rte_tel_data_add_array_int symbol, and if we expand it away to the _v24
name the link will fail.
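
To make that concrete, a rough sketch (the .def contents are illustrative,
not copied from the generated file; prototypes omitted):

/*
 * exports.def (generated from version.map) lists only the bare name:
 *
 *     EXPORTS
 *         rte_tel_data_add_array_int
 *
 * With "#define rte_tel_data_add_array_int rte_tel_data_add_array_int_v24"
 * the only definition left in the object file would be the _v24 symbol,
 * so the exported bare name would have nothing to bind to.  A real
 * wrapper keeps a definition under the exported name, as in the patch:
 */
int
rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
{
	return rte_tel_data_add_array_int_v24(d, x);
}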

Let's consume the change as-is for now and I will work on the
generalized solution when changes are integrated that actually make the
Windows dso/dll functional.

> 
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [PATCH 1/3] security: introduce out of place support for inline ingress
  2023-04-11 10:04  4% ` [PATCH 1/3] " Nithin Dabilpuram
@ 2023-04-11 18:05  3%   ` Stephen Hemminger
  2023-04-18  8:33  4%     ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-04-11 18:05 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: Thomas Monjalon, Akhil Goyal, jerinj, dev

On Tue, 11 Apr 2023 15:34:07 +0530
Nithin Dabilpuram <ndabilpuram@marvell.com> wrote:

> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 4bacf9fcd9..866cd4e8ee 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
>  	 */
>  	uint32_t ip_reassembly_en : 1;
>  
> +	/** Enable out of place processing on inline inbound packets.
> +	 *
> +	 * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
> +	 *      inbound SA if supported by driver. PMD need to register mbuf
> +	 *      dynamic field using rte_security_oop_dynfield_register()
> +	 *      and security session creation would fail if dynfield is not
> +	 *      registered successfully.
> +	 * * 0: Disable OOP processing for this session (default).
> +	 */
> +	uint32_t ingress_oop : 1;
> +
>  	/** Reserved bit fields for future extension
>  	 *
>  	 * User should ensure reserved_opts is cleared as it may change in
> @@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
>  	 *
>  	 * Note: Reduce number of bits in reserved_opts for every new option.
>  	 */
> -	uint32_t reserved_opts : 17;
> +	uint32_t reserved_opts : 16;
>  };

NAK
Let me repeat the reserved bit rant. YAGNI

Reserved space is not usable without ABI breakage unless the existing
code enforces that reserved space has to be zero.

Just saying "User should ensure reserved_opts is cleared" is not enough.



^ permalink raw reply	[relevance 3%]

* Re: [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
  2023-04-06  0:45  3%   ` [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
@ 2023-04-11 10:24  0%     ` Bruce Richardson
  2023-04-11 20:34  0%       ` Tyler Retzlaff
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-04-11 10:24 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev

On Wed, Apr 05, 2023 at 05:45:19PM -0700, Tyler Retzlaff wrote:
> Windows does not support versioned symbols. Fortunately Windows also
> doesn't have an exported stable ABI.
> 
> Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_24
> and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> functions.
> 
> Windows does have a way to achieve similar versioning for symbols but it
> is not a simple #define so it will be done as a work package later.
> 
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
>  lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> index 2bac2de..284c16e 100644
> --- a/lib/telemetry/telemetry_data.c
> +++ b/lib/telemetry/telemetry_data.c
> @@ -82,8 +82,16 @@
>  /* mark the v23 function as the older version, and v24 as the default version */
>  VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
>  BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> +#ifndef RTE_TOOLCHAIN_MSVC
>  MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
>  		int64_t x), rte_tel_data_add_array_int_v24);
> +#else
> +int
> +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> +{
> +	return rte_tel_data_add_array_int_v24(d, x);
> +}
> +#endif
>  

Can't see any general way to do this from the versioning header file, so
agree that we need some changes here. Rather than defining a public
function, we could keep the diff reduced by just using a macro alias here,
right? For example:

#ifdef RTE_TOOLCHAIN_MSVC
#define rte_tel_data_add_array_int rte_tel_data_add_array_int_v24
#else
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
		int64_t x), rte_tel_data_add_array_int_v24);
#endif

If this is a temporary measure, I'd tend towards the shortest solution that
can work. However, no strong opinions, so, either using functions as you
have it, or macros:

Acked-by: Bruce Richardson <bruce.richardson@intel.com>


^ permalink raw reply	[relevance 0%]

* [PATCH 1/3] security: introduce out of place support for inline ingress
  2023-03-09  8:56  4% [RFC 1/2] security: introduce out of place support for inline ingress Nithin Dabilpuram
@ 2023-04-11 10:04  4% ` Nithin Dabilpuram
  2023-04-11 18:05  3%   ` Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Nithin Dabilpuram @ 2023-04-11 10:04 UTC (permalink / raw)
  To: Thomas Monjalon, Akhil Goyal; +Cc: jerinj, dev, Nithin Dabilpuram

Similar to the out-of-place (OOP) processing support that exists for
Lookaside crypto/security sessions, Inline ingress security
sessions may also need out-of-place processing in use cases
where the original encrypted packet needs to be retained for
post-processing. So for NICs which have such HW support,
a new SA option is provided to indicate whether OOP needs to
be enabled on that Inline ingress security session or not.

Since, for inline ingress sessions, the packet is not received by
the CPU until the processing is done, we can only have a per-SA
option and not a per-packet option as with Lookaside sessions.
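
As an illustration of the intended usage, a sketch built on the API added
below (session setup and error handling omitted; the new option and
helper come from this patch, the rest is standard rte_security/mbuf
usage):

#include <rte_mbuf.h>
#include <rte_security.h>

/* Enable OOP on the inline inbound SA via the new option. */
struct rte_security_ipsec_xform ipsec_xform = {
	.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
	.options = { .ingress_oop = 1 },
};

/* At Rx time, fetch the original encrypted mbuf kept by the PMD. */
static void
post_process(struct rte_mbuf *m)
{
	struct rte_mbuf *orig = *rte_security_oop_dynfield(m);

	if (orig != NULL) {
		/* ... inspect/log the encrypted copy, then release it ... */
		rte_pktmbuf_free(orig);
	}
}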

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 devtools/libabigail.abignore       |  4 +++
 lib/security/rte_security.c        | 17 +++++++++++++
 lib/security/rte_security.h        | 39 +++++++++++++++++++++++++++++-
 lib/security/rte_security_driver.h |  8 ++++++
 lib/security/version.map           |  2 ++
 5 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..414baac060 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -40,3 +40,7 @@
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Temporary exceptions till next major ABI version ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; Ignore change to reserved opts for new SA option
+[suppress_type]
+       name = rte_security_ipsec_sa_options
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index e102c55e55..c2199dd8db 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -27,7 +27,10 @@
 } while (0)
 
 #define RTE_SECURITY_DYNFIELD_NAME "rte_security_dynfield_metadata"
+#define RTE_SECURITY_OOP_DYNFIELD_NAME "rte_security_oop_dynfield_metadata"
+
 int rte_security_dynfield_offset = -1;
+int rte_security_oop_dynfield_offset = -1;
 
 int
 rte_security_dynfield_register(void)
@@ -42,6 +45,20 @@ rte_security_dynfield_register(void)
 	return rte_security_dynfield_offset;
 }
 
+int
+rte_security_oop_dynfield_register(void)
+{
+	static const struct rte_mbuf_dynfield dynfield_desc = {
+		.name = RTE_SECURITY_OOP_DYNFIELD_NAME,
+		.size = sizeof(rte_security_oop_dynfield_t),
+		.align = __alignof__(rte_security_oop_dynfield_t),
+	};
+
+	rte_security_oop_dynfield_offset =
+		rte_mbuf_dynfield_register(&dynfield_desc);
+	return rte_security_oop_dynfield_offset;
+}
+
 void *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 4bacf9fcd9..866cd4e8ee 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
 	 */
 	uint32_t ip_reassembly_en : 1;
 
+	/** Enable out of place processing on inline inbound packets.
+	 *
+	 * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
+	 *      inbound SA if supported by driver. PMD need to register mbuf
+	 *      dynamic field using rte_security_oop_dynfield_register()
+	 *      and security session creation would fail if dynfield is not
+	 *      registered successfully.
+	 * * 0: Disable OOP processing for this session (default).
+	 */
+	uint32_t ingress_oop : 1;
+
 	/** Reserved bit fields for future extension
 	 *
 	 * User should ensure reserved_opts is cleared as it may change in
@@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
 	 *
 	 * Note: Reduce number of bits in reserved_opts for every new option.
 	 */
-	uint32_t reserved_opts : 17;
+	uint32_t reserved_opts : 16;
 };
 
 /** IPSec security association direction */
@@ -812,6 +823,13 @@ typedef uint64_t rte_security_dynfield_t;
 /** Dynamic mbuf field for device-specific metadata */
 extern int rte_security_dynfield_offset;
 
+/** Out-of-Place(OOP) processing field type */
+typedef struct rte_mbuf *rte_security_oop_dynfield_t;
+/** Dynamic mbuf field for pointer to original mbuf for
+ * OOP processing session.
+ */
+extern int rte_security_oop_dynfield_offset;
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
@@ -834,6 +852,25 @@ rte_security_dynfield(struct rte_mbuf *mbuf)
 		rte_security_dynfield_t *);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get pointer to mbuf field for original mbuf pointer when
+ * Out-Of-Place(OOP) processing is enabled in security session.
+ *
+ * @param       mbuf    packet to access
+ * @return pointer to mbuf field
+ */
+__rte_experimental
+static inline rte_security_oop_dynfield_t *
+rte_security_oop_dynfield(struct rte_mbuf *mbuf)
+{
+	return RTE_MBUF_DYNFIELD(mbuf,
+			rte_security_oop_dynfield_offset,
+			rte_security_oop_dynfield_t *);
+}
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 421e6f7780..91e7786ab7 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -190,6 +190,14 @@ typedef int (*security_macsec_sa_stats_get_t)(void *device, uint16_t sa_id,
 __rte_internal
 int rte_security_dynfield_register(void);
 
+/**
+ * @internal
+ * Register mbuf dynamic field for Security inline ingress Out-of-Place(OOP)
+ * processing.
+ */
+__rte_internal
+int rte_security_oop_dynfield_register(void);
+
 /**
  * Update the mbuf with provided metadata.
  *
diff --git a/lib/security/version.map b/lib/security/version.map
index 07dcce9ffb..59a95f40bd 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -23,10 +23,12 @@ EXPERIMENTAL {
 	rte_security_macsec_sc_stats_get;
 	rte_security_session_stats_get;
 	rte_security_session_update;
+	rte_security_oop_dynfield_offset;
 };
 
 INTERNAL {
 	global:
 
 	rte_security_dynfield_register;
+	rte_security_oop_dynfield_register;
 };
-- 
2.25.1


^ permalink raw reply	[relevance 4%]

* Re: [PATCH v2] version: 23.07-rc0
  2023-04-03  9:37 10% ` [PATCH v2] " David Marchand
@ 2023-04-06  7:44  0%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-04-06  7:44 UTC (permalink / raw)
  To: dev; +Cc: thomas

On Mon, Apr 3, 2023 at 11:45 AM David Marchand
<david.marchand@redhat.com> wrote:
>
> Start a new release cycle with empty release notes.
> Bump version and ABI minor.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>

Applied!


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [PATCH v3 08/11] eal: expand most macros to empty when using msvc
  @ 2023-04-06  0:45  6%   ` Tyler Retzlaff
  2023-04-06  0:45  3%   ` [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-06  0:45 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

For now expand a lot of common rte macros to empty. The catch here is we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for msvc and, where not possible, provide some alternate macros
to achieve the same outcome.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 ++++++++
 lib/eal/include/rte_common.h            | 33 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 ++++++++++++++++++++
 3 files changed, 61 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..d9a0224 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(x)
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(x)
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..a724e22 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -88,8 +92,13 @@
 #define __rte_may_alias __attribute__((__may_alias__))
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -117,7 +126,11 @@
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +154,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +162,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +239,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +268,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +466,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 6%]

* [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc
    2023-04-06  0:45  6%   ` [PATCH v3 08/11] eal: expand most macros to empty when using msvc Tyler Retzlaff
@ 2023-04-06  0:45  3%   ` Tyler Retzlaff
  2023-04-11 10:24  0%     ` Bruce Richardson
  1 sibling, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-06  0:45 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24 as
plain wrapper functions.

Windows does have a way to achieve similar versioning for symbols, but it
is not a simple #define, so it will be done as a work package later.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [PATCH] MAINTAINERS: sort file entries
@ 2023-04-05 23:12 17% Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-04-05 23:12 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, Thomas Monjalon

The list of file paths (F:) is only partially sorted
in some cases.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 MAINTAINERS | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e50999f..5fa432b00aac 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -83,26 +83,26 @@ Developers and Maintainers Tools
 M: Thomas Monjalon <thomas@monjalon.net>
 F: MAINTAINERS
 F: devtools/build-dict.sh
-F: devtools/check-abi.sh
 F: devtools/check-abi-version.sh
+F: devtools/check-abi.sh
 F: devtools/check-doc-vs-code.sh
 F: devtools/check-dup-includes.sh
-F: devtools/check-maintainers.sh
 F: devtools/check-forbidden-tokens.awk
 F: devtools/check-git-log.sh
+F: devtools/check-maintainers.sh
 F: devtools/check-spdx-tag.sh
 F: devtools/check-symbol-change.sh
 F: devtools/check-symbol-maps.sh
 F: devtools/checkpatches.sh
 F: devtools/get-maintainer.sh
 F: devtools/git-log-fixes.sh
+F: devtools/libabigail.abignore
 F: devtools/load-devel-config
 F: devtools/parse-flow-support.sh
 F: devtools/process-iwyu.py
 F: devtools/update-abi.sh
 F: devtools/update-patches.py
 F: devtools/update_version_map_abi.py
-F: devtools/libabigail.abignore
 F: devtools/words-case.txt
 F: license/
 F: .editorconfig
@@ -114,16 +114,16 @@ F: Makefile
 F: meson.build
 F: meson_options.txt
 F: config/
+F: buildtools/call-sphinx-build.py
 F: buildtools/check-symbols.sh
 F: buildtools/chkincs/
-F: buildtools/call-sphinx-build.py
 F: buildtools/get-cpu-count.py
 F: buildtools/get-numa-count.py
 F: buildtools/list-dir-globs.py
 F: buildtools/map-list-symbol.sh
 F: buildtools/pkg-config/
-F: buildtools/symlink-drivers-solibs.sh
 F: buildtools/symlink-drivers-solibs.py
+F: buildtools/symlink-drivers-solibs.sh
 F: devtools/test-meson-builds.sh
 F: devtools/check-meson.py
 
-- 
2.39.2


^ permalink raw reply	[relevance 17%]

* Re: [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
  2023-04-05 16:02  0%       ` Tyler Retzlaff
@ 2023-04-05 16:17  0%         ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-04-05 16:17 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev

On Wed, Apr 05, 2023 at 09:02:10AM -0700, Tyler Retzlaff wrote:
> On Wed, Apr 05, 2023 at 11:56:05AM +0100, Bruce Richardson wrote:
> > On Tue, Apr 04, 2023 at 01:07:27PM -0700, Tyler Retzlaff wrote:
> > > Windows does not support versioned symbols. Fortunately Windows also
> > > doesn't have an exported stable ABI.
> > > 
> > > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_24
> > > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > > functions.
> > > 
> > > Windows does have a way to achieve similar versioning for symbols but it
> > > is not a simple #define so it will be done as a work package later.
> > > 
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > 
> > Does this require a change in telemetry itself? Can it be done via the
> > header file with the versioning macros in it, so it would apply to any
> > other versioned functions we have in DPDK?
> 
> i didn't spend a lot of time thinking if the existing macros could be
> made to expand in the way needed. there is a way of doing versioning on
> windows but it is foreign to how this symbol versioning scheme works so
> i plan to investigate it separately after i get unit tests running.
> 
> for now i know what i'm doing is ugly but i need to get protection of
> unit tests so i'm doing minimal changes to get to that point. if you're
> not comfortable with this going in on a temporary basis i can remove it
> from this series and we can work on it as a separated patch set.
> 
> my bar is pretty low here, as long as it doesn't break any existing
> linux/gcc/clang etc ok, if msvc is not right i'll take a second pass
> and design each stop-gap properly. it already doesn't work so things
> aren't made worse.
> 
> let me know if i need to carve this out of the series.
> 
It's not that ugly. :-) If no other clear solution is apparent, I can certainly
live with this.

/Bruce

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
  2023-04-05 10:56  0%     ` Bruce Richardson
@ 2023-04-05 16:02  0%       ` Tyler Retzlaff
  2023-04-05 16:17  0%         ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-05 16:02 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev

On Wed, Apr 05, 2023 at 11:56:05AM +0100, Bruce Richardson wrote:
> On Tue, Apr 04, 2023 at 01:07:27PM -0700, Tyler Retzlaff wrote:
> > Windows does not support versioned symbols. Fortunately Windows also
> > doesn't have an exported stable ABI.
> > 
> > Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_24
> > and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> > functions.
> > 
> > Windows does have a way to achieve similar versioning for symbols but it
> > is not a simple #define so it will be done as a work package later.
> > 
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> 
> Does this require a change in telemetry itself? Can it be done via the
> header file with the versioning macros in it, so it would apply to any
> other versioned functions we have in DPDK?

I didn't spend a lot of time thinking about whether the existing macros
could be made to expand in the way needed. There is a way of doing
versioning on Windows, but it is foreign to how this symbol versioning
scheme works, so I plan to investigate it separately after I get unit
tests running.

For now I know what I'm doing is ugly, but I need to get the protection
of unit tests, so I'm doing minimal changes to get to that point. If
you're not comfortable with this going in on a temporary basis, I can
remove it from this series and we can work on it as a separate patch set.

My bar is pretty low here: as long as it doesn't break any existing
linux/gcc/clang setup it's ok; if msvc is not right I'll take a second
pass and design each stop-gap properly. It already doesn't work, so
things aren't made worse.

Let me know if I need to carve this out of the series.

ty

> 
> /Bruce
> 
> > ---
> >  lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
> >  1 file changed, 16 insertions(+)
> > 
> > diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> > index 2bac2de..284c16e 100644
> > --- a/lib/telemetry/telemetry_data.c
> > +++ b/lib/telemetry/telemetry_data.c
> > @@ -82,8 +82,16 @@
> >  /* mark the v23 function as the older version, and v24 as the default version */
> >  VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
> >  BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> > +#ifndef RTE_TOOLCHAIN_MSVC
> >  MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
> >  		int64_t x), rte_tel_data_add_array_int_v24);
> > +#else
> > +int
> > +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> > +{
> > +	return rte_tel_data_add_array_int_v24(d, x);
> > +}
> > +#endif
> >  
> >  int
> >  rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
> > @@ -220,8 +228,16 @@
> >  /* mark the v23 function as the older version, and v24 as the default version */
> >  VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
> >  BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
> > +#ifndef RTE_TOOLCHAIN_MSVC
> >  MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
> >  		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
> > +#else
> > +int
> > +rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
> > +{
> > +	return rte_tel_data_add_dict_int_v24(d, name, val);
> > +}
> > +#endif
> >  
> >  int
> >  rte_tel_data_add_dict_uint(struct rte_tel_data *d,
> > -- 
> > 1.8.3.1
> > 

^ permalink raw reply	[relevance 0%]

* [PATCH v2 0/3] vhost: add device op to offload the interrupt kick
@ 2023-04-05 12:40  3% Eelco Chaudron
    2023-05-08 13:58  0% ` [PATCH v2 0/3] " Eelco Chaudron
  0 siblings, 2 replies; 200+ results
From: Eelco Chaudron @ 2023-04-05 12:40 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia; +Cc: dev

This series adds an operation callback which gets called every time the
library wants to call eventfd_write(). This eventfd_write() call could
result in a system call, which could potentially block the PMD thread.

The callback function can decide whether it's ok to handle the
eventfd_write() now or have the newly introduced function,
rte_vhost_notify_guest(), called at a later time.

This can be used by 3rd party applications, like OVS, to avoid these
system calls being made as part of the PMD threads.
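
As a rough sketch of the intended application-side usage (the callback
name and its exact signature below are assumptions for illustration;
only rte_vhost_notify_guest() is named in this cover letter):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical deferral queue owned by the application. */
extern void app_queue_deferred_kick(int vid, uint16_t queue_id);

static bool
app_guest_notify(int vid, uint16_t queue_id)
{
	/* Avoid a potentially blocking eventfd_write() on the PMD thread:
	 * remember the queue and let a maintenance thread call
	 * rte_vhost_notify_guest(vid, queue_id) later.
	 */
	app_queue_deferred_kick(vid, queue_id);
	return true;	/* the kick will be handled by the application */
}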

v2: - Used vhost_virtqueue->index to find index for operation.
    - Aligned function name to VDUSE RFC patchset.
    - Added error and offload statistics counter.
    - Mark new API as experimental.
    - Change the virtual queue spin lock to read/write spin lock.
    - Made shared counters atomic.
    - Add versioned rte_vhost_driver_callback_register() for
      ABI compliance.

Eelco Chaudron (3):
      vhost: Change vhost_virtqueue access lock to a read/write one.
      vhost: make the guest_notifications statistic counter atomic.
      vhost: add device op to offload the interrupt kick


 lib/eal/include/generic/rte_rwlock.h | 17 +++++
 lib/vhost/meson.build                |  2 +
 lib/vhost/rte_vhost.h                | 23 ++++++-
 lib/vhost/socket.c                   | 72 ++++++++++++++++++++--
 lib/vhost/version.map                |  9 +++
 lib/vhost/vhost.c                    | 92 +++++++++++++++++++++-------
 lib/vhost/vhost.h                    | 70 ++++++++++++++-------
 lib/vhost/vhost_user.c               | 14 ++---
 lib/vhost/virtio_net.c               | 90 +++++++++++++--------------
 9 files changed, 288 insertions(+), 101 deletions(-)


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
  2023-04-04 20:07  3%   ` [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
@ 2023-04-05 10:56  0%     ` Bruce Richardson
  2023-04-05 16:02  0%       ` Tyler Retzlaff
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-04-05 10:56 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: dev, david.marchand, thomas, mb, konstantin.ananyev

On Tue, Apr 04, 2023 at 01:07:27PM -0700, Tyler Retzlaff wrote:
> Windows does not support versioned symbols. Fortunately Windows also
> doesn't have an exported stable ABI.
> 
> Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_24
> and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
> functions.
> 
> Windows does have a way to achieve similar versioning for symbols but it
> is not a simple #define so it will be done as a work package later.
> 
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>

Does this require a change in telemetry itself? Can it be done via the
header file with the versioning macros in it, so it would apply to any
other versioned functions we have in DPDK?

/Bruce

> ---
>  lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
> index 2bac2de..284c16e 100644
> --- a/lib/telemetry/telemetry_data.c
> +++ b/lib/telemetry/telemetry_data.c
> @@ -82,8 +82,16 @@
>  /* mark the v23 function as the older version, and v24 as the default version */
>  VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
>  BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
> +#ifndef RTE_TOOLCHAIN_MSVC
>  MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
>  		int64_t x), rte_tel_data_add_array_int_v24);
> +#else
> +int
> +rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
> +{
> +	return rte_tel_data_add_array_int_v24(d, x);
> +}
> +#endif
>  
>  int
>  rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
> @@ -220,8 +228,16 @@
>  /* mark the v23 function as the older version, and v24 as the default version */
>  VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
>  BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
> +#ifndef RTE_TOOLCHAIN_MSVC
>  MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
>  		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
> +#else
> +int
> +rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
> +{
> +	return rte_tel_data_add_dict_int_v24(d, name, val);
> +}
> +#endif
>  
>  int
>  rte_tel_data_add_dict_uint(struct rte_tel_data *d,
> -- 
> 1.8.3.1
> 

^ permalink raw reply	[relevance 0%]

* [PATCH v2 6/9] eal: expand most macros to empty when using msvc
  @ 2023-04-04 20:07  6%   ` Tyler Retzlaff
  2023-04-04 20:07  3%   ` [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-04 20:07 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

For now expand a lot of common rte macros to empty. The catch here is we
need to test that most of the macros do what they should, but at the same
time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for msvc and, where not possible, provide some alternate macros
to achieve the same outcome.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 ++++++++
 lib/eal/include/rte_common.h            | 33 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 ++++++++++++++++++++
 3 files changed, 61 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..3589c97 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(!!(x) == 1)
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(!!(x) == 0)
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..a724e22 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -88,8 +92,13 @@
 #define __rte_may_alias __attribute__((__may_alias__))
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -117,7 +126,11 @@
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +154,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +162,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +239,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +268,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +466,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 6%]

* [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc
    2023-04-04 20:07  6%   ` [PATCH v2 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
@ 2023-04-04 20:07  3%   ` Tyler Retzlaff
  2023-04-05 10:56  0%     ` Bruce Richardson
  1 sibling, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-04-04 20:07 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, david.marchand, thomas, mb, konstantin.ananyev,
	Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24 as
plain wrapper functions.

Windows does have a way to achieve similar versioning for symbols, but it
is not a simple #define, so it will be done as a work package later.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [PATCH 6/9] eal: expand most macros to empty when using msvc
  @ 2023-04-03 21:52  6% ` Tyler Retzlaff
  2023-04-03 21:52  3% ` [PATCH 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-03 21:52 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, david.marchand, thomas, mb, Tyler Retzlaff

For now, expand a lot of common rte macros to empty. The catch here is
that we need to test that most of the macros do what they should, but at
the same time they are blocking work needed to bootstrap the unit tests.

Later we will return and provide (where possible) expansions that work
correctly for msvc, and where not possible provide some alternate macros
to achieve the same outcome.
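
For example, with simplified stand-ins for the definitions touched here
(only a sketch, not the full rte_common.h/rte_branch_prediction.h
contents), the same source compiles whether the macros expand to an
attribute/builtin or to nothing:

#include <stdio.h>

#ifndef RTE_TOOLCHAIN_MSVC
#define __rte_unused __attribute__((__unused__))
#define likely(x) __builtin_expect(!!(x), 1)
#else
#define __rte_unused
#define likely(x) (!!(x) == 1)
#endif

static int
count_positive(const int *vals, int n, __rte_unused int debug)
{
	int i, cnt = 0;

	for (i = 0; i < n; i++) {
		/* the branch hint is lost on msvc, the semantics are not */
		if (likely(vals[i] > 0))
			cnt++;
	}
	return cnt;
}

int
main(void)
{
	static const int v[] = { 1, -2, 3 };

	printf("%d\n", count_positive(v, 3, 0)); /* prints 2 */
	return 0;
}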

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/eal/include/rte_branch_prediction.h |  8 ++++++++
 lib/eal/include/rte_common.h            | 33 +++++++++++++++++++++++++++++++++
 lib/eal/include/rte_compat.h            | 20 ++++++++++++++++++++
 3 files changed, 61 insertions(+)

diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 0256a9d..3589c97 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -25,7 +25,11 @@
  *
  */
 #ifndef likely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define likely(x)	__builtin_expect(!!(x), 1)
+#else
+#define likely(x)	(!!(x) == 1)
+#endif
 #endif /* likely */
 
 /**
@@ -39,7 +43,11 @@
  *
  */
 #ifndef unlikely
+#ifndef RTE_TOOLCHAIN_MSVC
 #define unlikely(x)	__builtin_expect(!!(x), 0)
+#else
+#define unlikely(x)	(!!(x) == 0)
+#endif
 #endif /* unlikely */
 
 #ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..a724e22 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -65,7 +65,11 @@
 /**
  * Force alignment
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_aligned(a) __attribute__((__aligned__(a)))
+#else
+#define __rte_aligned(a)
+#endif
 
 #ifdef RTE_ARCH_STRICT_ALIGN
 typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -88,8 +92,13 @@
 #define __rte_may_alias __attribute__((__may_alias__))
 
 /******* Macro to mark functions and fields scheduled for removal *****/
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_deprecated	__attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg)	__attribute__((__deprecated__(msg)))
+#else
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#endif
 
 /**
  *  Macro to mark macros and defines scheduled for removal
@@ -117,7 +126,11 @@
 /**
  * short definition to mark a function parameter unused
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_unused __attribute__((__unused__))
+#else
+#define __rte_unused
+#endif
 
 /**
  * Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +154,7 @@
  * even if the underlying stdio implementation is ANSI-compliant,
  * so this must be overridden.
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #if RTE_CC_IS_GNU
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +162,9 @@
 #define __rte_format_printf(format_index, first_arg) \
 	__attribute__((format(printf, format_index, first_arg)))
 #endif
+#else
+#define __rte_format_printf(format_index, first_arg)
+#endif
 
 /**
  * Tells compiler that the function returns a value that points to
@@ -222,7 +239,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 /**
  * Hint never returning function
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_noreturn __attribute__((noreturn))
+#else
+#define __rte_noreturn
+#endif
 
 /**
  * Issue a warning in case the function's return value is ignored.
@@ -247,12 +268,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
  *  }
  * @endcode
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_warn_unused_result __attribute__((warn_unused_result))
+#else
+#define __rte_warn_unused_result
+#endif
 
 /**
  * Force a function to be inlined
  */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_always_inline inline __attribute__((always_inline))
+#else
+#define __rte_always_inline
+#endif
 
 /**
  * Force a function to be noinlined
@@ -437,7 +466,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
 #define RTE_CACHE_LINE_MIN_SIZE 64
 
 /** Force alignment to cache line. */
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#else
+#define __rte_cache_aligned
+#endif
 
 /** Force minimum cache line alignment. */
 #define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..6a4b5ee 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
 
 #ifndef ALLOW_EXPERIMENTAL_API
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((deprecated("Symbol is not yet part of stable ABI"), \
 section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_experimental \
 __attribute__((section(".text.experimental")))
+#else
+#define __rte_experimental
+#endif
 
 #endif
 
@@ -30,23 +38,35 @@
 
 #if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((error("Symbol is not public ABI"), \
 section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 _Pragma("GCC diagnostic push") \
 _Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
 __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
 section(".text.internal"))) \
 _Pragma("GCC diagnostic pop")
+#else
+#define __rte_internal
+#endif
 
 #else
 
+#ifndef RTE_TOOLCHAIN_MSVC
 #define __rte_internal \
 __attribute__((section(".text.internal")))
+#else
+#define __rte_internal
+#endif
 
 #endif
 
-- 
1.8.3.1


^ permalink raw reply	[relevance 6%]

* [PATCH 9/9] telemetry: avoid expanding versioned symbol macros on msvc
    2023-04-03 21:52  6% ` [PATCH 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
@ 2023-04-03 21:52  3% ` Tyler Retzlaff
                     ` (6 subsequent siblings)
  8 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-04-03 21:52 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, david.marchand, thomas, mb, Tyler Retzlaff

Windows does not support versioned symbols. Fortunately Windows also
doesn't have an exported stable ABI.

Export rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.

Windows does have a way to achieve similar versioning for symbols but it
is not a simple #define so it will be done as a work package later.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 2bac2de..284c16e 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -82,8 +82,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
 		int64_t x), rte_tel_data_add_array_int_v24);
+#else
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+	return rte_tel_data_add_array_int_v24(d, x);
+}
+#endif
 
 int
 rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -220,8 +228,16 @@
 /* mark the v23 function as the older version, and v24 as the default version */
 VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
 BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifndef RTE_TOOLCHAIN_MSVC
 MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
 		const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#else
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+	return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#endif
 
 int
 rte_tel_data_add_dict_uint(struct rte_tel_data *d,
-- 
1.8.3.1


^ permalink raw reply	[relevance 3%]

* [PATCH v2] devtools: add script to check for non inclusive naming
  @ 2023-04-03 14:47 14% ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-04-03 14:47 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

Shell script to find use of words that should not be used.
By default it prints matching files. The -q (quiet) option
is used to just print counts. There is also a -l option
which lists matching lines instead of files.

Uses the word lists from Inclusive Naming Initiative
see https://inclusivenaming.org/word-lists/

Examples:
 $ ./devtools/check-naming-policy.sh -q
 Total files: 37 errors, 90 warnings, 2 suggestions

 $ ./devtools/check-naming-policy.sh -q -l lib/eal
 Total lines: 32 errors, 8 warnings, 0 suggestions

Add a MAINTAINERS file entry for the new tool and re-sort
the list of files back into alphabetic order

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2 - fix typo in words
   - add subtree (pathspec) option
   - update maintainers file (and fix alphabetic order)

 MAINTAINERS                     |   8 ++-
 devtools/check-naming-policy.sh | 107 ++++++++++++++++++++++++++++++++
 devtools/naming/tier1.txt       |   8 +++
 devtools/naming/tier2.txt       |   1 +
 devtools/naming/tier3.txt       |   4 ++
 5 files changed, 125 insertions(+), 3 deletions(-)
 create mode 100755 devtools/check-naming-policy.sh
 create mode 100644 devtools/naming/tier1.txt
 create mode 100644 devtools/naming/tier2.txt
 create mode 100644 devtools/naming/tier3.txt

diff --git a/MAINTAINERS b/MAINTAINERS
index 8df23e50999f..b5881113ba85 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -83,26 +83,28 @@ Developers and Maintainers Tools
 M: Thomas Monjalon <thomas@monjalon.net>
 F: MAINTAINERS
 F: devtools/build-dict.sh
-F: devtools/check-abi.sh
 F: devtools/check-abi-version.sh
+F: devtools/check-abi.sh
 F: devtools/check-doc-vs-code.sh
 F: devtools/check-dup-includes.sh
-F: devtools/check-maintainers.sh
 F: devtools/check-forbidden-tokens.awk
 F: devtools/check-git-log.sh
+F: devtools/check-maintainers.sh
+F: devtools/check-naming-policy.sh
 F: devtools/check-spdx-tag.sh
 F: devtools/check-symbol-change.sh
 F: devtools/check-symbol-maps.sh
 F: devtools/checkpatches.sh
 F: devtools/get-maintainer.sh
 F: devtools/git-log-fixes.sh
+F: devtools/libabigail.abignore
 F: devtools/load-devel-config
+F: devtools/naming/
 F: devtools/parse-flow-support.sh
 F: devtools/process-iwyu.py
 F: devtools/update-abi.sh
 F: devtools/update-patches.py
 F: devtools/update_version_map_abi.py
-F: devtools/libabigail.abignore
 F: devtools/words-case.txt
 F: license/
 F: .editorconfig
diff --git a/devtools/check-naming-policy.sh b/devtools/check-naming-policy.sh
new file mode 100755
index 000000000000..90347b415652
--- /dev/null
+++ b/devtools/check-naming-policy.sh
@@ -0,0 +1,107 @@
+#! /bin/bash
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2023 Stephen Hemminger
+#
+# This script scans the source tree and creates a list of files
+# containing words that are recommended to be avoided by the
+# Inclusive Naming Initiative.
+# See: https://inclusivenaming.org/word-lists/
+#
+# The options are:
+#   -q = quiet mode, produces summary count only
+#   -l = show lines instead of files with recommendations
+#   -v = verbose, show a header between each tier
+#
+# Default is to scan all of DPDK source and documentation.
+# Optional pathspec can be used to limit specific tree.
+#
+#  Example:
+#    check-naming-policy.sh -q doc/*
+#
+
+errors=0
+warnings=0
+suggestions=0
+quiet=false
+verbose=false
+lines='-l'
+
+print_usage () {
+    echo "usage: $(basename $0) [-l] [-q] [-v] [<pathspec>]"
+    exit 1
+}
+
+# Locate word list files
+selfdir=$(dirname $(readlink -f $0))
+words=$selfdir/naming
+
+# These give false positives
+skipfiles=( ':^devtools/naming/' \
+	    ':^doc/guides/rel_notes/' \
+	    ':^doc/guides/contributing/coding_style.rst' \
+	    ':^doc/guides/prog_guide/glossary.rst' \
+)
+# These are obsolete
+skipfiles+=( \
+	    ':^drivers/net/liquidio/' \
+	    ':^drivers/net/bnx2x/' \
+	    ':^lib/table/' \
+	    ':^lib/port/' \
+	    ':^lib/pipeline/' \
+	    ':^examples/pipeline/' \
+)
+
+#
+# check_wordlist wordfile description
+check_wordlist() {
+    local list=$words/$1
+    local description=$2
+
+    git grep -i $lines -f $list -- ${skipfiles[@]} $pathspec > $tmpfile
+    count=$(wc -l < $tmpfile)
+    if ! $quiet; then
+	if [ $count -gt 0 ]; then
+	    if $verbose; then
+   		    echo $description
+		    echo $description | tr '[:print:]' '-'
+	    fi
+   	    cat $tmpfile
+	    echo
+	fi
+    fi
+    return $count
+}
+
+while getopts lqvh ARG ; do
+	case $ARG in
+		l ) lines= ;;
+		q ) quiet=true ;;
+		v ) verbose=true ;;
+		h ) print_usage ; exit 0 ;;
+		? ) print_usage ; exit 1 ;;
+	esac
+done
+shift $(($OPTIND - 1))
+
+tmpfile=$(mktemp -t dpdk.checknames.XXXXXX)
+trap 'rm -f -- "$tmpfile"' INT TERM HUP EXIT
+
+pathspec=$*
+
+check_wordlist tier1.txt "Tier 1: Replace immediately"
+errors=$?
+
+check_wordlist tier2.txt "Tier 2: Strongly consider replacing"
+warnings=$?
+
+check_wordlist tier3.txt "Tier 3: Recommend to replace"
+suggestions=$?
+
+if [ -z "$lines" ] ; then
+    echo -n "Total lines: "
+else
+    echo -n "Total files: "
+fi
+
+echo $errors "errors," $warnings "warnings," $suggestions "suggestions"
+exit $errors
diff --git a/devtools/naming/tier1.txt b/devtools/naming/tier1.txt
new file mode 100644
index 000000000000..a0e9b549c218
--- /dev/null
+++ b/devtools/naming/tier1.txt
@@ -0,0 +1,8 @@
+abort
+blackhat
+blacklist
+cripple
+master
+slave
+whitehat
+whitelist
diff --git a/devtools/naming/tier2.txt b/devtools/naming/tier2.txt
new file mode 100644
index 000000000000..cd4280d1625c
--- /dev/null
+++ b/devtools/naming/tier2.txt
@@ -0,0 +1 @@
+sanity
diff --git a/devtools/naming/tier3.txt b/devtools/naming/tier3.txt
new file mode 100644
index 000000000000..072f6468ea47
--- /dev/null
+++ b/devtools/naming/tier3.txt
@@ -0,0 +1,4 @@
+man.in.the.middle
+segregate
+segregation
+tribe
-- 
2.39.2


^ permalink raw reply	[relevance 14%]

* [PATCH v2] version: 23.07-rc0
  2023-04-03  6:59 10% [PATCH] version: 23.07-rc0 David Marchand
@ 2023-04-03  9:37 10% ` David Marchand
  2023-04-06  7:44  0%   ` David Marchand
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-04-03  9:37 UTC (permalink / raw)
  To: dev; +Cc: thomas

Start a new release cycle with empty release notes.
Bump version and ABI minor.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v1:
- fix ABI reference git repository,

---
 .github/workflows/build.yml            |   3 +-
 ABI_VERSION                            |   2 +-
 VERSION                                |   2 +-
 doc/guides/rel_notes/index.rst         |   1 +
 doc/guides/rel_notes/release_23_07.rst | 138 +++++++++++++++++++++++++
 5 files changed, 142 insertions(+), 4 deletions(-)
 create mode 100644 doc/guides/rel_notes/release_23_07.rst

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index e24e47a216..edd39cbd62 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -26,8 +26,7 @@ jobs:
       MINGW: ${{ matrix.config.cross == 'mingw' }}
       MINI: ${{ matrix.config.mini != '' }}
       PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
-      REF_GIT_REPO: https://dpdk.org/git/dpdk-stable
-      REF_GIT_TAG: v22.11.1
+      REF_GIT_TAG: v23.03
       RISCV64: ${{ matrix.config.cross == 'riscv64' }}
       RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
 
diff --git a/ABI_VERSION b/ABI_VERSION
index a12b18e437..3c8ce91a46 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-23.1
+23.2
diff --git a/VERSION b/VERSION
index 533bf9aa13..d3c78a13bf 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-23.03.0
+23.07.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 57475a8158..d8dfa621ec 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
     :maxdepth: 1
     :numbered:
 
+    release_23_07
     release_23_03
     release_22_11
     release_22_07
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
new file mode 100644
index 0000000000..a9b1293689
--- /dev/null
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2023 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 23.07
+==================
+
+.. **Read this first.**
+
+   The text in the sections below explains how to update the release notes.
+
+   Use proper spelling, capitalization and punctuation in all sections.
+
+   Variable and config names should be quoted as fixed width text:
+   ``LIKE_THIS``.
+
+   Build the docs and view the output file to ensure the changes are correct::
+
+      ninja -C build doc
+      xdg-open build/doc/guides/html/rel_notes/release_23_07.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+   Sample format:
+
+   * **Add a title in the past tense with a full stop.**
+
+     Add a short 1-2 sentence description in the past tense.
+     The description should be enough to allow someone scanning
+     the release notes to understand the new feature.
+
+     If the feature adds a lot of sub-features you can use a bullet list
+     like this:
+
+     * Added feature foo to do something.
+     * Enhanced feature bar to do something else.
+
+     Refer to the previous release notes for examples.
+
+     Suggested order in release notes items:
+     * Core libs (EAL, mempool, ring, mbuf, buses)
+     * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+       - ethdev (lib, PMDs)
+       - cryptodev (lib, PMDs)
+       - eventdev (lib, PMDs)
+       - etc
+     * Other libs
+     * Apps, Examples, Tools (if significant)
+
+     This section is a comment. Do not overwrite or remove it.
+     Also, make sure to start the actual text at the margin.
+     =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+   * Add a short 1-2 sentence description of the removed item
+     in the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the API change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the ABI change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+* No ABI change that would break compatibility with 22.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+   * **Add title in present tense with full stop.**
+
+     Add a short 1-2 sentence description of the known issue
+     in the present tense. Add information on any known workarounds.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+   with this release.
+
+   The format is:
+
+   * <vendor> platform with <vendor> <type of devices> combinations
+
+     * List of CPU
+     * List of OS
+     * List of devices
+     * Other relevant details...
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
-- 
2.39.2


^ permalink raw reply	[relevance 10%]

* [PATCH] version: 23.07-rc0
@ 2023-04-03  6:59 10% David Marchand
  2023-04-03  9:37 10% ` [PATCH v2] " David Marchand
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-04-03  6:59 UTC (permalink / raw)
  To: dev; +Cc: thomas

Start a new release cycle with empty release notes.
Bump version and ABI minor.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .github/workflows/build.yml            |   2 +-
 ABI_VERSION                            |   2 +-
 VERSION                                |   2 +-
 doc/guides/rel_notes/index.rst         |   1 +
 doc/guides/rel_notes/release_23_07.rst | 138 +++++++++++++++++++++++++
 5 files changed, 142 insertions(+), 3 deletions(-)
 create mode 100644 doc/guides/rel_notes/release_23_07.rst

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index e24e47a216..e824f8841c 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -27,7 +27,7 @@ jobs:
       MINI: ${{ matrix.config.mini != '' }}
       PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
       REF_GIT_REPO: https://dpdk.org/git/dpdk-stable
-      REF_GIT_TAG: v22.11.1
+      REF_GIT_TAG: v23.03
       RISCV64: ${{ matrix.config.cross == 'riscv64' }}
       RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
 
diff --git a/ABI_VERSION b/ABI_VERSION
index a12b18e437..3c8ce91a46 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-23.1
+23.2
diff --git a/VERSION b/VERSION
index 533bf9aa13..d3c78a13bf 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-23.03.0
+23.07.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 57475a8158..d8dfa621ec 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
     :maxdepth: 1
     :numbered:
 
+    release_23_07
     release_23_03
     release_22_11
     release_22_07
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
new file mode 100644
index 0000000000..a9b1293689
--- /dev/null
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2023 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 23.07
+==================
+
+.. **Read this first.**
+
+   The text in the sections below explains how to update the release notes.
+
+   Use proper spelling, capitalization and punctuation in all sections.
+
+   Variable and config names should be quoted as fixed width text:
+   ``LIKE_THIS``.
+
+   Build the docs and view the output file to ensure the changes are correct::
+
+      ninja -C build doc
+      xdg-open build/doc/guides/html/rel_notes/release_23_07.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+   Sample format:
+
+   * **Add a title in the past tense with a full stop.**
+
+     Add a short 1-2 sentence description in the past tense.
+     The description should be enough to allow someone scanning
+     the release notes to understand the new feature.
+
+     If the feature adds a lot of sub-features you can use a bullet list
+     like this:
+
+     * Added feature foo to do something.
+     * Enhanced feature bar to do something else.
+
+     Refer to the previous release notes for examples.
+
+     Suggested order in release notes items:
+     * Core libs (EAL, mempool, ring, mbuf, buses)
+     * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+       - ethdev (lib, PMDs)
+       - cryptodev (lib, PMDs)
+       - eventdev (lib, PMDs)
+       - etc
+     * Other libs
+     * Apps, Examples, Tools (if significant)
+
+     This section is a comment. Do not overwrite or remove it.
+     Also, make sure to start the actual text at the margin.
+     =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+   * Add a short 1-2 sentence description of the removed item
+     in the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the API change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the ABI change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+* No ABI change that would break compatibility with 22.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+   * **Add title in present tense with full stop.**
+
+     Add a short 1-2 sentence description of the known issue
+     in the present tense. Add information on any known workarounds.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+   with this release.
+
+   The format is:
+
+   * <vendor> platform with <vendor> <type of devices> combinations
+
+     * List of CPU
+     * List of OS
+     * List of devices
+     * Other relevant details...
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
-- 
2.39.2


^ permalink raw reply	[relevance 10%]

* DPDK 23.03 released
@ 2023-03-31 17:17  3% Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-03-31 17:17 UTC (permalink / raw)
  To: announce

A new major release is available:
	https://fast.dpdk.org/rel/dpdk-23.03.tar.xz

Winter release numbers are quite small as usual:
	1048 commits from 161 authors
	1379 files changed, 85721 insertions(+), 25814 deletions(-)

It is not planned to start a maintenance branch for 23.03.
This version is ABI-compatible with 22.11.

Below are some new features:
	- lock annotations
	- ARM power management monitor/wakeup
	- machine learning inference device API and test application
	- platform bus
	- 400G link speed
	- queue mapping of aggregated ports
	- flow quota
	- more flow matching (ICMPv6, IPv6 routing extension)
	- more flow actions (flex modify, congestion management)
	- Intel cpfl IPU driver
	- Marvell CNXK machine learning inference
	- SHAKE hash algorithm for crypto
	- LZ4 algorithm for compression
	- more telemetry endpoints
	- more tracepoints
	- DTS hello world

More details in the release notes:
	https://doc.dpdk.org/guides/rel_notes/release_23_03.html

The test framework DTS is being improved and migrated into the mainline.
Please join the DTS effort for contributing, reviewing or testing.


There are 34 new contributors (including authors, reviewers and testers).
Welcome to Alok Prasad, Alvaro Karsz, Anup Prabhu, Boleslav Stankevich,
Boris Ouretskey, Chenyu Huang, Edwin Brossette, Fengnan Chang,
Francesco Mancino, Haijun Chu, Hiral Shah, Isaac Boukris, J.J. Martzki,
Jesna K E, Joshua Washington, Kamalakshitha Aligeri, Krzysztof Karas,
Leo Xu, Maayan Kashani, Michal Schmidt, Mohammad Iqbal Ahmad,
Nathan Brown, Patrick Robb, Prince Takkar, Rushil Gupta,
Saoirse O'Donovan, Shivah Shankar S, Shiyang He, Song Jiale,
Vikash Poddar, Visa Hankala, Yevgeny Kliteynik, Zerun Fu,
and Zhuobin Huang.

Below is the number of commits per employer (with authors count):
	265     Marvell (33)
	256     Intel (49)
	175     NVIDIA (20)
	 98     Red Hat (6)
	 68     Huawei (3)
	 55     Corigine (9)
	 49     Microsoft (3)
	 13     Arm (5)
	 10     PANTHEON.tech (1)
	  9     Trustnet (1)
	  9     AMD (2)
	  8     Ark Networks (2)
	        ...

A big thank you to all the courageous people who took on the unrewarding
task of reviewing others' work.
Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
	 48     Maxime Coquelin <maxime.coquelin@redhat.com>
	 46     Ferruh Yigit <ferruh.yigit@amd.com>
	 44     Morten Brørup <mb@smartsharesystems.com>
	 25     Ori Kam <orika@nvidia.com>
	 24     Tyler Retzlaff <roretzla@linux.microsoft.com>
	 23     Chengwen Feng <fengchengwen@huawei.com>
	 21     David Marchand <david.marchand@redhat.com>
	 21     Akhil Goyal <gakhil@marvell.com>


The next version will be 23.07 in July.
The new features for 23.07 can be submitted during the next 3 weeks:
        http://core.dpdk.org/roadmap#dates
Please share your roadmap.

One last ask: please fill in this quick survey before April 7th
to help plan the next DPDK Summit:
https://docs.google.com/forms/d/1104swKV4-_nNT6GimkRBNVac1uAqX7o2P936bcGsgMc

Thanks everyone



^ permalink raw reply	[relevance 3%]

* [PATCH v12 18/22] hash: move rte_hash_set_alg out header
  2023-03-29 23:40  2% [PATCH v12 00/22] Convert static log types in libraries to dynamic Stephen Hemminger
@ 2023-03-29 23:40  2% ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-03-29 23:40 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Ruifeng Wang, Yipeng Wang, Sameh Gobriel,
	Bruce Richardson, Vladimir Medvedkin

The code for setting the CRC algorithm for hash is not at all performance
sensitive, and doing it inline has a couple of problems. First, it means
that if multiple files include the header, the initialization gets done
multiple times. It also makes it harder to fix usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code, therefore both old and new code
will work the same.
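
From a caller's point of view nothing changes; a minimal usage sketch
(assuming an application already built against DPDK, e.g. via
pkg-config libdpdk):

#include <stdio.h>
#include <inttypes.h>

#include <rte_hash_crc.h>

int
main(void)
{
	static const char key[] = "example-key";
	uint32_t hash;

	/* optional: force the portable software implementation instead
	 * of the SSE4.2/ARMv8 one picked at startup; this call now
	 * resolves to the single copy in librte_hash */
	rte_hash_crc_set_alg(CRC32_SW);

	hash = rte_hash_crc(key, sizeof(key) - 1, 0xffffffff);
	printf("crc32 hash: 0x%08" PRIx32 "\n", hash);
	return 0;
}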

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 lib/hash/meson.build     |  1 +
 lib/hash/rte_crc_arm64.h |  8 ++---
 lib/hash/rte_crc_x86.h   | 10 +++---
 lib/hash/rte_hash_crc.c  | 68 ++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h  | 48 ++--------------------------
 lib/hash/version.map     |  7 +++++
 6 files changed, 88 insertions(+), 54 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h
index c9f52510871b..414fe065caa8 100644
--- a/lib/hash/rte_crc_arm64.h
+++ b/lib/hash/rte_crc_arm64.h
@@ -53,7 +53,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -67,7 +67,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u64(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_crc_x86.h b/lib/hash/rte_crc_x86.h
index 205bc182be77..3b865e251db2 100644
--- a/lib/hash/rte_crc_x86.h
+++ b/lib/hash/rte_crc_x86.h
@@ -67,7 +67,7 @@ crc32c_sse42_u64(uint64_t data, uint64_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -110,11 +110,11 @@ static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
 #ifdef RTE_ARCH_X86_64
-	if (likely(crc32_alg == CRC32_SSE42_x64))
+	if (likely(rte_hash_crc32_alg == CRC32_SSE42_x64))
 		return crc32c_sse42_u64(data, init_val);
 #endif
 
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u64_mimic(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..1439d8a71f6a
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
+#define RTE_LOGTYPE_HASH_CRC hash_crc_logtype
+
+uint8_t rte_hash_crc32_alg = CRC32_SW;
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	rte_hash_crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		rte_hash_crc32_alg = CRC32_SSE42;
+	else
+		rte_hash_crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		rte_hash_crc32_alg = CRC32_ARM64;
+#endif
+
+	if (rte_hash_crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e8145ee44204 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -31,7 +29,7 @@ extern "C" {
 #define CRC32_SSE42_x64     (CRC32_x64|CRC32_SSE42)
 #define CRC32_ARM64         (1U << 3)
 
-static uint8_t crc32_alg = CRC32_SW;
+extern uint8_t rte_hash_crc32_alg;
 
 #if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
 #include "rte_crc_arm64.h"
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..8b22aad5626b 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
@@ -56,3 +57,9 @@ EXPERIMENTAL {
 	rte_thash_gfni;
 	rte_thash_gfni_bulk;
 };
+
+INTERNAL {
+	global:
+
+	rte_hash_crc32_alg;
+};
-- 
2.39.2


^ permalink raw reply	[relevance 2%]

* [PATCH v12 00/22] Convert static log types in libraries to dynamic
@ 2023-03-29 23:40  2% Stephen Hemminger
  2023-03-29 23:40  2% ` [PATCH v12 18/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-03-29 23:40 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPEs in DPDK
libraries. It starts with the easy ones and goes on to the more complex ones.

There are several options for how to treat the old static types:
leave them there, mark them as deprecated, or remove them.
This version removes them, since nothing in the current DPDK
policies guarantees that they cannot be removed.

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.
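
Each conversion follows the same basic pattern, roughly sketched below
with a hypothetical "foo" library (foo_logtype and RTE_LOGTYPE_FOO are
illustrative names only; the series itself mostly uses
RTE_LOG_REGISTER_SUFFIX inside the libraries, the generic
RTE_LOG_REGISTER variant is shown here so the snippet stands alone):

#include <rte_log.h>

/* register a dynamic log type at load time; its level can then be
 * tuned at runtime with --log-level=lib.foo:debug */
RTE_LOG_REGISTER(foo_logtype, lib.foo, INFO);

/* keep existing RTE_LOG(level, FOO, ...) call sites unchanged by
 * mapping the old static name onto the dynamic type */
#define RTE_LOGTYPE_FOO foo_logtype

void foo_do_something(void);

void
foo_do_something(void)
{
	RTE_LOG(DEBUG, FOO, "doing something\n");
}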

v12 - rebase and add table and pipeline libraries

v11 - fix include check on arm cross build

v10 - add necessary rte_compat.h in thash_gfni stub for arm

v9 - fix handling of crc32 alg in lib/hash.
     make it an internal global variable.
     fix gfni stubs for case where they are not used.

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: convert RTE_LOGTYPE_EFD to dynamic type
  mbuf: convert RTE_LOGTYPE_MBUF to dynamic type
  acl: convert RTE_LOGTYPE_ACL to dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: convert RTE_LOGTYPE_POWER to dynamic type
  ring: convert RTE_LOGTYPE_RING to dynamic type
  mempool: convert RTE_LOGTYPE_MEMPOOL to dynamic type
  lpm: convert RTE_LOGTYPE_LPM to dynamic types
  kni: convert RTE_LOGTYPE_KNI to dynamic type
  sched: convert RTE_LOGTYPE_SCHED to dynamic type
  examples/ipsec-secgw: replace RTE_LOGTYPE_PORT
  port: convert RTE_LOGTYPE_PORT to dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic type
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: convert RTE_LOGTYPE_PIPELINE to dynamic type

 app/test/test_acl.c             |  3 +-
 app/test/test_table_acl.c       | 50 +++++++++++-------------
 app/test/test_table_pipeline.c  | 40 +++++++++----------
 examples/distributor/main.c     |  2 +-
 examples/ipsec-secgw/sa.c       |  6 +--
 examples/l3fwd-power/main.c     | 17 +++++----
 lib/acl/acl_bld.c               |  1 +
 lib/acl/acl_gen.c               |  1 +
 lib/acl/acl_log.h               |  4 ++
 lib/acl/rte_acl.c               |  4 ++
 lib/acl/tb_mem.c                |  3 +-
 lib/eal/common/eal_common_log.c | 17 ---------
 lib/eal/include/rte_log.h       | 34 ++++++++---------
 lib/efd/rte_efd.c               |  4 ++
 lib/fib/fib_log.h               |  4 ++
 lib/fib/rte_fib.c               |  3 ++
 lib/fib/rte_fib6.c              |  2 +
 lib/gso/rte_gso.c               |  4 +-
 lib/gso/rte_gso.h               |  1 +
 lib/hash/meson.build            |  9 ++++-
 lib/hash/rte_crc_arm64.h        |  8 ++--
 lib/hash/rte_crc_x86.h          | 10 ++---
 lib/hash/rte_cuckoo_hash.c      |  5 +++
 lib/hash/rte_fbk_hash.c         |  5 +++
 lib/hash/rte_hash_crc.c         | 68 +++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h         | 48 ++---------------------
 lib/hash/rte_thash.c            |  3 ++
 lib/hash/rte_thash_gfni.c       | 50 ++++++++++++++++++++++++
 lib/hash/rte_thash_gfni.h       | 30 +++++----------
 lib/hash/version.map            | 11 ++++++
 lib/kni/rte_kni.c               |  3 ++
 lib/lpm/lpm_log.h               |  4 ++
 lib/lpm/rte_lpm.c               |  3 ++
 lib/lpm/rte_lpm6.c              |  1 +
 lib/mbuf/mbuf_log.h             |  4 ++
 lib/mbuf/rte_mbuf.c             |  4 ++
 lib/mbuf/rte_mbuf_dyn.c         |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c    |  2 +
 lib/mempool/rte_mempool.c       |  2 +
 lib/mempool/rte_mempool.h       |  8 ++++
 lib/mempool/version.map         |  3 ++
 lib/pipeline/rte_pipeline.c     |  2 +
 lib/pipeline/rte_pipeline.h     |  5 +++
 lib/port/rte_port_ethdev.c      |  3 ++
 lib/port/rte_port_eventdev.c    |  4 ++
 lib/port/rte_port_fd.c          |  3 ++
 lib/port/rte_port_frag.c        |  3 ++
 lib/port/rte_port_kni.c         |  3 ++
 lib/port/rte_port_ras.c         |  3 ++
 lib/port/rte_port_ring.c        |  3 ++
 lib/port/rte_port_sched.c       |  3 ++
 lib/port/rte_port_source_sink.c |  3 ++
 lib/port/rte_port_sym_crypto.c  |  3 ++
 lib/power/guest_channel.c       |  3 +-
 lib/power/power_common.c        |  2 +
 lib/power/power_common.h        |  3 +-
 lib/power/power_kvm_vm.c        |  1 +
 lib/power/rte_power.c           |  1 +
 lib/rib/rib_log.h               |  4 ++
 lib/rib/rte_rib.c               |  3 ++
 lib/rib/rte_rib6.c              |  3 ++
 lib/ring/rte_ring.c             |  3 ++
 lib/sched/rte_pie.c             |  1 +
 lib/sched/rte_sched.c           |  5 +++
 lib/sched/rte_sched_log.h       |  4 ++
 lib/table/meson.build           |  1 +
 lib/table/rte_table.c           |  8 ++++
 lib/table/rte_table.h           |  4 ++
 68 files changed, 391 insertions(+), 176 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h
 create mode 100644 lib/table/rte_table.c

-- 
2.39.2


^ permalink raw reply	[relevance 2%]

* Re: [PATCH v3 03/15] graph: move node process into inline function
  2023-03-29 15:34  3%     ` Stephen Hemminger
@ 2023-03-29 15:41  0%       ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-03-29 15:41 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Zhirun Yan, dev, jerinj, kirankumark, ndabilpuram, cunming.liang,
	haiyue.wang

On Wed, Mar 29, 2023 at 9:04 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Wed, 29 Mar 2023 15:43:28 +0900
> Zhirun Yan <zhirun.yan@intel.com> wrote:
>
> > +/**
> > + * @internal
> > + *
> > + * Enqueue a given node to the tail of the graph reel.
> > + *
> > + * @param graph
> > + *   Pointer Graph object.
> > + * @param node
> > + *   Pointer to node object to be enqueued.
> > + */
> > +static __rte_always_inline void
> > +__rte_node_process(struct rte_graph *graph, struct rte_node *node)
> > +{
> > +     uint64_t start;
> > +     uint16_t rc;
> > +     void **objs;
> > +
> > +     RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
> > +     objs = node->objs;
> > +     rte_prefetch0(objs);
> > +
> > +     if (rte_graph_has_stats_feature()) {
> > +             start = rte_rdtsc();
> > +             rc = node->process(graph, node, objs, node->idx);
> > +             node->total_cycles += rte_rdtsc() - start;
> > +             node->total_calls++;
> > +             node->total_objs += rc;
> > +     } else {
> > +             node->process(graph, node, objs, node->idx);
> > +     }
> > +     node->idx = 0;
> > +}
> > +
>
> Why inline? Doing everything as inlines has long term ABI
> impacts. And this is not a super critical performance path.

This is one of the real fast path routines.

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v3 03/15] graph: move node process into inline function
  @ 2023-03-29 15:34  3%     ` Stephen Hemminger
  2023-03-29 15:41  0%       ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-03-29 15:34 UTC (permalink / raw)
  To: Zhirun Yan
  Cc: dev, jerinj, kirankumark, ndabilpuram, cunming.liang, haiyue.wang

On Wed, 29 Mar 2023 15:43:28 +0900
Zhirun Yan <zhirun.yan@intel.com> wrote:

> +/**
> + * @internal
> + *
> + * Enqueue a given node to the tail of the graph reel.
> + *
> + * @param graph
> + *   Pointer Graph object.
> + * @param node
> + *   Pointer to node object to be enqueued.
> + */
> +static __rte_always_inline void
> +__rte_node_process(struct rte_graph *graph, struct rte_node *node)
> +{
> +	uint64_t start;
> +	uint16_t rc;
> +	void **objs;
> +
> +	RTE_ASSERT(node->fence == RTE_GRAPH_FENCE);
> +	objs = node->objs;
> +	rte_prefetch0(objs);
> +
> +	if (rte_graph_has_stats_feature()) {
> +		start = rte_rdtsc();
> +		rc = node->process(graph, node, objs, node->idx);
> +		node->total_cycles += rte_rdtsc() - start;
> +		node->total_calls++;
> +		node->total_objs += rc;
> +	} else {
> +		node->process(graph, node, objs, node->idx);
> +	}
> +	node->idx = 0;
> +}
> +

Why inline? Doing everything as inlines has long term ABI
impacts. And this is not a super critical performance path.
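
The concern, as a rough sketch (hypothetical names, nothing from the
graph library itself): whatever an inline helper in a public header
touches gets compiled into the application, so it can no longer be
changed by only updating the shared library.

#include <stdio.h>

/* hypothetical installed header content */
struct widget {
	int a;
	int b;	/* adding or reordering fields later breaks old binaries */
};

static inline int
widget_sum(const struct widget *w)
{
	/* the field offsets and this body end up in every caller */
	return w->a + w->b;
}

int
main(void)
{
	struct widget w = { 2, 3 };

	printf("%d\n", widget_sum(&w)); /* 5 */
	return 0;
}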

^ permalink raw reply	[relevance 3%]

* Re: [PATCH v2 0/2] ABI check updates
  2023-03-23 17:15  9% ` [PATCH v2 " David Marchand
  2023-03-23 17:15 21%   ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
  2023-03-23 17:15 41%   ` [PATCH v2 2/2] devtools: stop depending on libabigail xml format David Marchand
@ 2023-03-28 18:38  4%   ` Thomas Monjalon
  2 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-03-28 18:38 UTC (permalink / raw)
  To: David Marchand; +Cc: dev

23/03/2023 18:15, David Marchand:
> This series moves ABI exceptions in a single configuration file and
> simplifies the ABI check so that no artefact depending on libabigail
> version is stored in the CI.

Applied, thanks.



^ permalink raw reply	[relevance 4%]

* [PATCH v2 2/2] devtools: stop depending on libabigail xml format
  2023-03-23 17:15  9% ` [PATCH v2 " David Marchand
  2023-03-23 17:15 21%   ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
@ 2023-03-23 17:15 41%   ` David Marchand
  2023-03-28 18:38  4%   ` [PATCH v2 0/2] ABI check updates Thomas Monjalon
  2 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-03-23 17:15 UTC (permalink / raw)
  To: dev; +Cc: Aaron Conole, Michael Santana, Thomas Monjalon, Bruce Richardson

An ABI reference depends on:
- DPDK build options,
- toolchain compiler and versions,
- libabigail version.

The reason for the latter point is that, when the ABI reference was
generated, ABI xml files were dumped in a format depending on the
libabigail version.
Those xml files were then later used to compare against modified
code.

There are a few disadvantages with this method:
- since the xml files are dependent on the libabigail version, when
  updating CI environments, a change in the libabigail package requires
  regenerating the ABI references,
- comparing xml files with abidiff is not well tested, as we (DPDK)
  uncovered bugs in libabigail that were not hit when comparing .so files,

Switch to comparing the .so files directly, remove this dependency and
update the GHA script.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .ci/linux-build.sh            |  4 ----
 .github/workflows/build.yml   |  2 +-
 MAINTAINERS                   |  1 -
 devtools/check-abi.sh         | 17 +++++++++--------
 devtools/gen-abi.sh           | 27 ---------------------------
 devtools/test-meson-builds.sh |  5 -----
 6 files changed, 10 insertions(+), 46 deletions(-)
 delete mode 100755 devtools/gen-abi.sh

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 150b38bd7a..9631e342b5 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -130,8 +130,6 @@ fi
 if [ "$ABI_CHECKS" = "true" ]; then
     if [ "$(cat libabigail/VERSION 2>/dev/null)" != "$LIBABIGAIL_VERSION" ]; then
         rm -rf libabigail
-        # if we change libabigail, invalidate existing abi cache
-        rm -rf reference
     fi
 
     if [ ! -d libabigail ]; then
@@ -153,7 +151,6 @@ if [ "$ABI_CHECKS" = "true" ]; then
         meson setup $OPTS -Dexamples= $refsrcdir $refsrcdir/build
         ninja -C $refsrcdir/build
         DESTDIR=$(pwd)/reference ninja -C $refsrcdir/build install
-        devtools/gen-abi.sh reference
         find reference/usr/local -name '*.a' -delete
         rm -rf reference/usr/local/bin
         rm -rf reference/usr/local/share
@@ -161,7 +158,6 @@ if [ "$ABI_CHECKS" = "true" ]; then
     fi
 
     DESTDIR=$(pwd)/install ninja -C build install
-    devtools/gen-abi.sh install
     devtools/check-abi.sh reference install ${ABI_CHECKS_WARN_ONLY:-}
 fi
 
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index bbcb535afb..e24e47a216 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -70,7 +70,7 @@ jobs:
       run: |
         echo 'ccache=ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W) >> $GITHUB_OUTPUT
         echo 'libabigail=libabigail-${{ matrix.config.os }}' >> $GITHUB_OUTPUT
-        echo 'abi=abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.LIBABIGAIL_VERSION }}-${{ env.REF_GIT_TAG }}' >> $GITHUB_OUTPUT
+        echo 'abi=abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.REF_GIT_TAG }}' >> $GITHUB_OUTPUT
     - name: Retrieve ccache cache
       uses: actions/cache@v3
       with:
diff --git a/MAINTAINERS b/MAINTAINERS
index 1a33ad8592..280058adfc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -94,7 +94,6 @@ F: devtools/check-spdx-tag.sh
 F: devtools/check-symbol-change.sh
 F: devtools/check-symbol-maps.sh
 F: devtools/checkpatches.sh
-F: devtools/gen-abi.sh
 F: devtools/get-maintainer.sh
 F: devtools/git-log-fixes.sh
 F: devtools/load-devel-config
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index f74432be5d..39e3798931 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -37,20 +37,21 @@ fi
 
 export newdir ABIDIFF_OPTIONS ABIDIFF_SUPPRESSIONS
 export diff_func='run_diff() {
-	dump=$1
-	name=$(basename $dump)
-	if grep -q "; SKIP_LIBRARY=${name%.dump}\>" $ABIDIFF_SUPPRESSIONS; then
+	lib=$1
+	name=$(basename $lib)
+	if grep -q "; SKIP_LIBRARY=${name%.so.*}\>" $ABIDIFF_SUPPRESSIONS; then
 		echo "Skipped $name" >&2
 		return 0
 	fi
-	dump2=$(find $newdir -name $name)
-	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
+	# Look for a library with the same major ABI version
+	lib2=$(find $newdir -name "${name%.*}.*" -a ! -type l)
+	if [ -z "$lib2" ] || [ ! -e "$lib2" ]; then
 		echo "Error: cannot find $name in $newdir" >&2
 		return 1
 	fi
-	abidiff $ABIDIFF_OPTIONS $dump $dump2 || {
+	abidiff $ABIDIFF_OPTIONS $lib $lib2 || {
 		abiret=$?
-		echo "Error: ABI issue reported for abidiff $ABIDIFF_OPTIONS $dump $dump2" >&2
+		echo "Error: ABI issue reported for abidiff $ABIDIFF_OPTIONS $lib $lib2" >&2
 		if [ $(($abiret & 3)) -ne 0 ]; then
 			echo "ABIDIFF_ERROR|ABIDIFF_USAGE_ERROR, this could be a script or environment issue." >&2
 		fi
@@ -65,7 +66,7 @@ export diff_func='run_diff() {
 }'
 
 error=
-find $refdir -name "*.dump" |
+find $refdir -name "*.so.*" -a ! -type l |
 xargs -n1 -P0 sh -c 'eval "$diff_func"; run_diff $0' ||
 error=1
 
diff --git a/devtools/gen-abi.sh b/devtools/gen-abi.sh
deleted file mode 100755
index 61f7510ea1..0000000000
--- a/devtools/gen-abi.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/bin/sh -e
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright (c) 2019 Red Hat, Inc.
-
-if [ $# != 1 ]; then
-	echo "Usage: $0 installdir" >&2
-	exit 1
-fi
-
-installdir=$1
-if [ ! -d $installdir ]; then
-	echo "Error: install directory '$installdir' does not exist." >&2
-	exit 1
-fi
-
-dumpdir=$installdir/dump
-rm -rf $dumpdir
-mkdir -p $dumpdir
-for f in $(find $installdir -name "*.so.*"); do
-	if test -L $f; then
-		continue
-	fi
-
-	libname=$(basename $f)
-	echo $dumpdir/${libname%.so*}.dump $f
-done |
-xargs -n2 -P0 abidw --out-file
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 48f4e52df3..9131088c9d 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -204,7 +204,6 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
 				-Dexamples= $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
-			$srcdir/devtools/gen-abi.sh $abirefdir/$targetdir
 
 			# save disk space by removing static libs and apps
 			find $abirefdir/$targetdir/usr/local -name '*.a' -delete
@@ -215,10 +214,6 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
 		install_target $builds_dir/$targetdir \
 			$(readlink -f $builds_dir/$targetdir/install)
 		echo "Checking ABI compatibility of $targetdir" >&$verbose
-		echo $srcdir/devtools/gen-abi.sh \
-			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
-		$srcdir/devtools/gen-abi.sh \
-			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
 		echo $srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
 			$(readlink -f $builds_dir/$targetdir/install) >&$veryverbose
 		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
-- 
2.39.2


^ permalink raw reply	[relevance 41%]

* [PATCH v2 1/2] devtools: unify configuration for ABI check
  2023-03-23 17:15  9% ` [PATCH v2 " David Marchand
@ 2023-03-23 17:15 21%   ` David Marchand
  2023-03-23 17:15 41%   ` [PATCH v2 2/2] devtools: stop depending on libabigail xml format David Marchand
  2023-03-28 18:38  4%   ` [PATCH v2 0/2] ABI check updates Thomas Monjalon
  2 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-03-23 17:15 UTC (permalink / raw)
  To: dev; +Cc: Thomas Monjalon

We have been skipping removed libraries in the ABI check by updating the
check-abi.sh script itself.
See, for example, commit 33584c19ddc2 ("raw/dpaa2_qdma: remove driver").

Having two places for exceptions is a bit confusing, and those exceptions
are best placed in a single configuration file outside the check script.

Besides, a next patch will switch the check from comparing ABI xml files
to directly comparing .so files. In this mode, libabigail does not
support the soname_regexp syntax used for the mlx glue libraries.

Let's handle these special cases in libabigail.abignore using comments.

Taking the raw/dpaa2_qdma driver as an example, it would be possible to
skip it by adding:

 ; SKIP_LIBRARY=librte_net_mlx4_glue
+; SKIP_LIBRARY=librte_raw_dpaa2_qdma

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 devtools/check-abi.sh        |  9 +++++++--
 devtools/libabigail.abignore | 12 +++++++++---
 2 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index d253a12768..f74432be5d 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -10,7 +10,8 @@ fi
 refdir=$1
 newdir=$2
 warnonly=${3:-}
-ABIDIFF_OPTIONS="--suppr $(dirname $0)/libabigail.abignore --no-added-syms"
+ABIDIFF_SUPPRESSIONS=$(dirname $(readlink -f $0))/libabigail.abignore
+ABIDIFF_OPTIONS="--suppr $ABIDIFF_SUPPRESSIONS --no-added-syms"
 
 if [ ! -d $refdir ]; then
 	echo "Error: reference directory '$refdir' does not exist." >&2
@@ -34,10 +35,14 @@ else
 	ABIDIFF_OPTIONS="$ABIDIFF_OPTIONS --headers-dir2 $incdir2"
 fi
 
-export newdir ABIDIFF_OPTIONS
+export newdir ABIDIFF_OPTIONS ABIDIFF_SUPPRESSIONS
 export diff_func='run_diff() {
 	dump=$1
 	name=$(basename $dump)
+	if grep -q "; SKIP_LIBRARY=${name%.dump}\>" $ABIDIFF_SUPPRESSIONS; then
+		echo "Skipped $name" >&2
+		return 0
+	fi
 	dump2=$(find $newdir -name $name)
 	if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
 		echo "Error: cannot find $name in $newdir" >&2
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..3ff51509de 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -16,9 +16,15 @@
 [suppress_variable]
         name_regexp = _pmd_info$
 
-; Ignore changes on soname for mlx glue internal drivers
-[suppress_file]
-        soname_regexp = ^librte_.*mlx.*glue\.
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+; Special rules to skip libraries ;
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;
+; This is not a libabigail rule (see check-abi.sh).
+; This is used for driver removal and other special cases like mlx glue libs.
+;
+; SKIP_LIBRARY=librte_common_mlx5_glue
+; SKIP_LIBRARY=librte_net_mlx4_glue
 
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Experimental APIs exceptions ;
-- 
2.39.2


^ permalink raw reply	[relevance 21%]

* [PATCH v2 0/2] ABI check updates
  @ 2023-03-23 17:15  9% ` David Marchand
  2023-03-23 17:15 21%   ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
                     ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: David Marchand @ 2023-03-23 17:15 UTC (permalink / raw)
  To: dev

This series moves ABI exceptions into a single configuration file and
simplifies the ABI check so that no artefact depending on the libabigail
version is stored in the CI.

-- 
David Marchand

Changes since v1:
- rebased after abi check parallelisation rework,


David Marchand (2):
  devtools: unify configuration for ABI check
  devtools: stop depending on libabigail xml format

 .ci/linux-build.sh            |  4 ----
 .github/workflows/build.yml   |  2 +-
 MAINTAINERS                   |  1 -
 devtools/check-abi.sh         | 24 +++++++++++++++---------
 devtools/gen-abi.sh           | 27 ---------------------------
 devtools/libabigail.abignore  | 12 +++++++++---
 devtools/test-meson-builds.sh |  5 -----
 7 files changed, 25 insertions(+), 50 deletions(-)
 delete mode 100755 devtools/gen-abi.sh

-- 
2.39.2


^ permalink raw reply	[relevance 9%]

* Re: [dpdk-dev] [RFC] ethdev: improve link speed to string
  @ 2023-03-23 14:40  3%                 ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-03-23 14:40 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Min Hu (Connor), Andrew Rybchenko, thomas, dev

On 2/10/2023 2:41 PM, Ferruh Yigit wrote:
> On 1/19/2023 4:45 PM, Stephen Hemminger wrote:
>> On Thu, 19 Jan 2023 11:41:12 +0000
>> Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>
>>>>>>> Nothing good will happen if you try to use the function to
>>>>>>> print two different link speeds in one log message.  
>>>>>> You are right.
>>>>>> And use malloc for "name" will result in memory leakage, which is also
>>>>>> not a good option.
>>>>>>
>>>>>> BTW, do you think if we need to modify the function
>>>>>> "rte_eth_link_speed_to_str"?  
>>>>>
>>>>> IMHO it would be more pain than gain in this case.
>>>>>
>>>>> .
>>>>>  
>>>> Agree with you. Thanks Andrew
>>>>  
>>>
>>> It can be option to update the API as following in next ABI break release:
>>>
>>> const char *
>>> rte_eth_link_speed_to_str(uint32_t link_speed, char *buf, size_t buf_size);
>>>
>>> For this a deprecation notice needs to be sent and approved, not sure
>>> though if it worth.
>>>
>>>
>>> Meanwhile, what do you think to update string 'Invalid' to something
>>> like 'Irregular' or 'Erratic', does this help to convey the right message?
>>
>>
>> API versioning is possible here.
> 
> 
> Agree, ABI versioning can be used here.
> 
> @Connor, what do you think?

Updating patch status as rejected; if you still want to pursue the feature,
please send a separate patch that updates the API via ABI versioning.

Thanks,
ferruh

^ permalink raw reply	[relevance 3%]

* Re: [PATCH 0/5] fix segment fault when parse args
  2023-03-23 11:58  3%             ` fengchengwen
@ 2023-03-23 12:51  3%               ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-03-23 12:51 UTC (permalink / raw)
  To: Olivier Matz, Ferruh Yigit, fengchengwen; +Cc: dev, David Marchand

23/03/2023 12:58, fengchengwen:
> On 2023/3/22 21:49, Thomas Monjalon wrote:
> > 22/03/2023 09:53, Ferruh Yigit:
> >> On 3/22/2023 1:15 AM, fengchengwen wrote:
> >>> On 2023/3/21 21:50, Ferruh Yigit wrote:
> >>>> On 3/17/2023 2:43 AM, fengchengwen wrote:
> >>>>> On 2023/3/17 2:18, Ferruh Yigit wrote:
> >>>>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
> >>>>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
> >>>>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function 
> >>>>>>> parameter 'value' is NULL when parsed 'only keys'.
> >>>>>>>
> >>>>>>> It may leads to segment fault when parse args with 'only key', this 
> >>>>>>> patchset fixes rest of them.
> >>>>>>>
> >>>>>>> Chengwen Feng (5):
> >>>>>>>   app/pdump: fix segment fault when parse args
> >>>>>>>   net/memif: fix segment fault when parse devargs
> >>>>>>>   net/pcap: fix segment fault when parse devargs
> >>>>>>>   net/ring: fix segment fault when parse devargs
> >>>>>>>   net/sfc: fix segment fault when parse devargs
> >>>>>>
> >>>>>> Hi Chengwen,
> >>>>>>
> >>>>>> Did you scan all `rte_kvargs_process()` instances?
> >>>>>
> >>>>> No, I was just looking at the modules I was concerned about.
> >>>>> I looked at it briefly, and some modules had the same problem.
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> And if there would be a way to tell kvargs that a value is expected (or
> >>>>>> not) this checks could be done in kvargs layer, I think this also can be
> >>>>>> to look at.
> >>>>>
> >>>>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
> >>>>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
> >>>>> But it also break the API's behavior.
> >>>>>
> >>>>
> >>>> What about having a new API, like `rte_kvargs_process_extended()`,
> >>>>
> >>>> That gets an additional flag as parameter, which may have values like
> >>>> following to indicate if key expects a value or not:
> >>>> ARG_MAY_HAVE_VALUE  --> "key=value" OR 'key'
> >>>> ARG_WITH_VALUE      --> "key=value"
> >>>> ARG_NO_VALUE        --> 'key'
> >>>>
> >>>> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
> >>>> `rte_kvargs_process()`.
> >>>>
> >>>> This way instead of adding checks, relevant usage can be replaced by
> >>>> `rte_kvargs_process_extended()`, this requires similar amount of change
> >>>> but code will be more clean I think.
> >>>>
> >>>> Do you think does this work?
> >>>
> >>> Yes, it can work.
> >>>
> >>> But I think the introduction of new API adds some complexity.
> >>> And a good API definition could more simpler.
> >>>
> >>
> >> Other option is changing existing API, but that may be widely used and
> >> changing it impacts applications, I don't think it worth.
> > 
> > I've planned a change in kvargs API 5 years ago and never did it:
> > From doc/guides/rel_notes/deprecation.rst:
> > "
> > * kvargs: The function ``rte_kvargs_process`` will get a new parameter
> >   for returning key match count. It will ease handling of no-match case.
> > "
> 
> I think it's okay to add extra parameter for rte_kvargs_process. But it will
> break ABI.
> Also I notice patchset was deferred in patchwork.
> 
> Does it mean that the new version can't accept until the 23.11 release cycle ?

It is a bit too late to take a decision in the 23.03 cycle.
Let's continue this discussion.
We can either have some fixes in 23.07 or have an ABI-breaking change in 23.11.



^ permalink raw reply	[relevance 3%]

* Re: [PATCH 0/5] fix segment fault when parse args
  2023-03-22 13:49  0%           ` Thomas Monjalon
@ 2023-03-23 11:58  3%             ` fengchengwen
  2023-03-23 12:51  3%               ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-23 11:58 UTC (permalink / raw)
  To: Thomas Monjalon, Olivier Matz, Ferruh Yigit; +Cc: dev, David Marchand

On 2023/3/22 21:49, Thomas Monjalon wrote:
> 22/03/2023 09:53, Ferruh Yigit:
>> On 3/22/2023 1:15 AM, fengchengwen wrote:
>>> On 2023/3/21 21:50, Ferruh Yigit wrote:
>>>> On 3/17/2023 2:43 AM, fengchengwen wrote:
>>>>> On 2023/3/17 2:18, Ferruh Yigit wrote:
>>>>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>>>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>>>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function 
>>>>>>> parameter 'value' is NULL when parsed 'only keys'.
>>>>>>>
>>>>>>> It may leads to segment fault when parse args with 'only key', this 
>>>>>>> patchset fixes rest of them.
>>>>>>>
>>>>>>> Chengwen Feng (5):
>>>>>>>   app/pdump: fix segment fault when parse args
>>>>>>>   net/memif: fix segment fault when parse devargs
>>>>>>>   net/pcap: fix segment fault when parse devargs
>>>>>>>   net/ring: fix segment fault when parse devargs
>>>>>>>   net/sfc: fix segment fault when parse devargs
>>>>>>
>>>>>> Hi Chengwen,
>>>>>>
>>>>>> Did you scan all `rte_kvargs_process()` instances?
>>>>>
>>>>> No, I was just looking at the modules I was concerned about.
>>>>> I looked at it briefly, and some modules had the same problem.
>>>>>
>>>>>>
>>>>>>
>>>>>> And if there would be a way to tell kvargs that a value is expected (or
>>>>>> not) this checks could be done in kvargs layer, I think this also can be
>>>>>> to look at.
>>>>>
>>>>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
>>>>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
>>>>> But it also break the API's behavior.
>>>>>
>>>>
>>>> What about having a new API, like `rte_kvargs_process_extended()`,
>>>>
>>>> That gets an additional flag as parameter, which may have values like
>>>> following to indicate if key expects a value or not:
>>>> ARG_MAY_HAVE_VALUE  --> "key=value" OR 'key'
>>>> ARG_WITH_VALUE      --> "key=value"
>>>> ARG_NO_VALUE        --> 'key'
>>>>
>>>> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
>>>> `rte_kvargs_process()`.
>>>>
>>>> This way instead of adding checks, relevant usage can be replaced by
>>>> `rte_kvargs_process_extended()`, this requires similar amount of change
>>>> but code will be more clean I think.
>>>>
>>>> Do you think does this work?
>>>
>>> Yes, it can work.
>>>
>>> But I think the introduction of new API adds some complexity.
>>> And a good API definition could more simpler.
>>>
>>
>> Other option is changing existing API, but that may be widely used and
>> changing it impacts applications, I don't think it worth.
> 
> I've planned a change in kvargs API 5 years ago and never did it:
> From doc/guides/rel_notes/deprecation.rst:
> "
> * kvargs: The function ``rte_kvargs_process`` will get a new parameter
>   for returning key match count. It will ease handling of no-match case.
> "

I think it's okay to add an extra parameter to rte_kvargs_process, but it will
break the ABI.
I also notice the patchset was deferred in patchwork.

Does this mean that a new version cannot be accepted until the 23.11 release cycle?

> 
>> Of course we can live with as it is and add checks to the callback
>> functions, although I still believe a new 'process()' API is better idea.
> 
> 
> 
> .
> 

^ permalink raw reply	[relevance 3%]

* Re: [PATCH 0/5] fix segment fault when parse args
  2023-03-22  8:53  0%         ` Ferruh Yigit
@ 2023-03-22 13:49  0%           ` Thomas Monjalon
  2023-03-23 11:58  3%             ` fengchengwen
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-03-22 13:49 UTC (permalink / raw)
  To: fengchengwen, Olivier Matz, Ferruh Yigit; +Cc: dev, David Marchand

22/03/2023 09:53, Ferruh Yigit:
> On 3/22/2023 1:15 AM, fengchengwen wrote:
> > On 2023/3/21 21:50, Ferruh Yigit wrote:
> >> On 3/17/2023 2:43 AM, fengchengwen wrote:
> >>> On 2023/3/17 2:18, Ferruh Yigit wrote:
> >>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
> >>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
> >>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function 
> >>>>> parameter 'value' is NULL when parsed 'only keys'.
> >>>>>
> >>>>> It may leads to segment fault when parse args with 'only key', this 
> >>>>> patchset fixes rest of them.
> >>>>>
> >>>>> Chengwen Feng (5):
> >>>>>   app/pdump: fix segment fault when parse args
> >>>>>   net/memif: fix segment fault when parse devargs
> >>>>>   net/pcap: fix segment fault when parse devargs
> >>>>>   net/ring: fix segment fault when parse devargs
> >>>>>   net/sfc: fix segment fault when parse devargs
> >>>>
> >>>> Hi Chengwen,
> >>>>
> >>>> Did you scan all `rte_kvargs_process()` instances?
> >>>
> >>> No, I was just looking at the modules I was concerned about.
> >>> I looked at it briefly, and some modules had the same problem.
> >>>
> >>>>
> >>>>
> >>>> And if there would be a way to tell kvargs that a value is expected (or
> >>>> not) this checks could be done in kvargs layer, I think this also can be
> >>>> to look at.
> >>>
> >>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
> >>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
> >>> But it also break the API's behavior.
> >>>
> >>
> >> What about having a new API, like `rte_kvargs_process_extended()`,
> >>
> >> That gets an additional flag as parameter, which may have values like
> >> following to indicate if key expects a value or not:
> >> ARG_MAY_HAVE_VALUE  --> "key=value" OR 'key'
> >> ARG_WITH_VALUE      --> "key=value"
> >> ARG_NO_VALUE        --> 'key'
> >>
> >> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
> >> `rte_kvargs_process()`.
> >>
> >> This way instead of adding checks, relevant usage can be replaced by
> >> `rte_kvargs_process_extended()`, this requires similar amount of change
> >> but code will be more clean I think.
> >>
> >> Do you think does this work?
> > 
> > Yes, it can work.
> > 
> > But I think the introduction of new API adds some complexity.
> > And a good API definition could more simpler.
> > 
> 
> Other option is changing existing API, but that may be widely used and
> changing it impacts applications, I don't think it worth.

I've planned a change in kvargs API 5 years ago and never did it:
From doc/guides/rel_notes/deprecation.rst:
"
* kvargs: The function ``rte_kvargs_process`` will get a new parameter
  for returning key match count. It will ease handling of no-match case.
"

> Of course we can live with as it is and add checks to the callback
> functions, although I still believe a new 'process()' API is better idea.




^ permalink raw reply	[relevance 0%]

* Re: [PATCH 0/5] fix segment fault when parse args
  2023-03-22  1:15  0%       ` fengchengwen
@ 2023-03-22  8:53  0%         ` Ferruh Yigit
  2023-03-22 13:49  0%           ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-03-22  8:53 UTC (permalink / raw)
  To: fengchengwen, thomas, Olivier Matz; +Cc: dev, David Marchand

On 3/22/2023 1:15 AM, fengchengwen wrote:
> On 2023/3/21 21:50, Ferruh Yigit wrote:
>> On 3/17/2023 2:43 AM, fengchengwen wrote:
>>> On 2023/3/17 2:18, Ferruh Yigit wrote:
>>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function 
>>>>> parameter 'value' is NULL when parsed 'only keys'.
>>>>>
>>>>> It may leads to segment fault when parse args with 'only key', this 
>>>>> patchset fixes rest of them.
>>>>>
>>>>> Chengwen Feng (5):
>>>>>   app/pdump: fix segment fault when parse args
>>>>>   net/memif: fix segment fault when parse devargs
>>>>>   net/pcap: fix segment fault when parse devargs
>>>>>   net/ring: fix segment fault when parse devargs
>>>>>   net/sfc: fix segment fault when parse devargs
>>>>
>>>> Hi Chengwen,
>>>>
>>>> Did you scan all `rte_kvargs_process()` instances?
>>>
>>> No, I was just looking at the modules I was concerned about.
>>> I looked at it briefly, and some modules had the same problem.
>>>
>>>>
>>>>
>>>> And if there would be a way to tell kvargs that a value is expected (or
>>>> not) this checks could be done in kvargs layer, I think this also can be
>>>> to look at.
>>>
>>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
>>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
>>> But it also break the API's behavior.
>>>
>>
>> What about having a new API, like `rte_kvargs_process_extended()`,
>>
>> That gets an additional flag as parameter, which may have values like
>> following to indicate if key expects a value or not:
>> ARG_MAY_HAVE_VALUE  --> "key=value" OR 'key'
>> ARG_WITH_VALUE      --> "key=value"
>> ARG_NO_VALUE        --> 'key'
>>
>> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
>> `rte_kvargs_process()`.
>>
>> This way instead of adding checks, relevant usage can be replaced by
>> `rte_kvargs_process_extended()`, this requires similar amount of change
>> but code will be more clean I think.
>>
>> Do you think does this work?
> 
> Yes, it can work.
> 
> But I think the introduction of new API adds some complexity.
> And a good API definition could more simpler.
> 

The other option is changing the existing API, but that API may be widely used
and changing it impacts applications; I don't think it is worth it.

Of course we can live with it as it is and add checks to the callback
functions, although I still believe a new 'process()' API is the better idea.

>>
>>
>>>
>>> Or continue fix the exist code (about 10+ place more),
>>> for new invoking, because the 'arg_handler_t' already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
>>> they'll take the initiative to prevent this.
>>>
>>>
>>> Hope for more advise for the next.
>>>
>>>> .
>>>>
>>
>> .
>>


^ permalink raw reply	[relevance 0%]

* Re: [PATCH 0/5] fix segment fault when parse args
  2023-03-21 13:50  0%     ` Ferruh Yigit
@ 2023-03-22  1:15  0%       ` fengchengwen
  2023-03-22  8:53  0%         ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-22  1:15 UTC (permalink / raw)
  To: Ferruh Yigit, thomas, Olivier Matz; +Cc: dev, David Marchand

On 2023/3/21 21:50, Ferruh Yigit wrote:
> On 3/17/2023 2:43 AM, fengchengwen wrote:
>> On 2023/3/17 2:18, Ferruh Yigit wrote:
>>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>>> to parse 'only keys' (e.g. socket_id) type. And the callback function 
>>>> parameter 'value' is NULL when parsed 'only keys'.
>>>>
>>>> It may leads to segment fault when parse args with 'only key', this 
>>>> patchset fixes rest of them.
>>>>
>>>> Chengwen Feng (5):
>>>>   app/pdump: fix segment fault when parse args
>>>>   net/memif: fix segment fault when parse devargs
>>>>   net/pcap: fix segment fault when parse devargs
>>>>   net/ring: fix segment fault when parse devargs
>>>>   net/sfc: fix segment fault when parse devargs
>>>
>>> Hi Chengwen,
>>>
>>> Did you scan all `rte_kvargs_process()` instances?
>>
>> No, I was just looking at the modules I was concerned about.
>> I looked at it briefly, and some modules had the same problem.
>>
>>>
>>>
>>> And if there would be a way to tell kvargs that a value is expected (or
>>> not) this checks could be done in kvargs layer, I think this also can be
>>> to look at.
>>
>> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
>> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
>> But it also break the API's behavior.
>>
> 
> What about having a new API, like `rte_kvargs_process_extended()`,
> 
> That gets an additional flag as parameter, which may have values like
> following to indicate if key expects a value or not:
> ARG_MAY_HAVE_VALUE  --> "key=value" OR 'key'
> ARG_WITH_VALUE      --> "key=value"
> ARG_NO_VALUE        --> 'key'
> 
> Default flag can be 'ARG_MAY_HAVE_VALUE' and it becomes same as
> `rte_kvargs_process()`.
> 
> This way instead of adding checks, relevant usage can be replaced by
> `rte_kvargs_process_extended()`, this requires similar amount of change
> but code will be more clean I think.
> 
> Do you think does this work?

Yes, it can work.

But I think introducing a new API adds some complexity,
and a good API definition could be simpler.

> 
> 
>>
>> Or continue fix the exist code (about 10+ place more),
>> for new invoking, because the 'arg_handler_t' already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
>> they'll take the initiative to prevent this.
>>
>>
>> Hope for more advise for the next.
>>
>>> .
>>>
> 
> .
> 

^ permalink raw reply	[relevance 0%]

* Re: [PATCH 0/5] fix segment fault when parse args
  2023-03-17  2:43  3%   ` fengchengwen
@ 2023-03-21 13:50  0%     ` Ferruh Yigit
  2023-03-22  1:15  0%       ` fengchengwen
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-03-21 13:50 UTC (permalink / raw)
  To: fengchengwen, thomas, Olivier Matz; +Cc: dev, David Marchand

On 3/17/2023 2:43 AM, fengchengwen wrote:
> On 2023/3/17 2:18, Ferruh Yigit wrote:
>> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>>> to parse 'only keys' (e.g. socket_id) type. And the callback function 
>>> parameter 'value' is NULL when parsed 'only keys'.
>>>
>>> It may leads to segment fault when parse args with 'only key', this 
>>> patchset fixes rest of them.
>>>
>>> Chengwen Feng (5):
>>>   app/pdump: fix segment fault when parse args
>>>   net/memif: fix segment fault when parse devargs
>>>   net/pcap: fix segment fault when parse devargs
>>>   net/ring: fix segment fault when parse devargs
>>>   net/sfc: fix segment fault when parse devargs
>>
>> Hi Chengwen,
>>
>> Did you scan all `rte_kvargs_process()` instances?
> 
> No, I was just looking at the modules I was concerned about.
> I looked at it briefly, and some modules had the same problem.
> 
>>
>>
>> And if there would be a way to tell kvargs that a value is expected (or
>> not) this checks could be done in kvargs layer, I think this also can be
>> to look at.
> 
> Yes, the way to tell kvargs may lead to a lot of modifys and also break ABI.
> I also think about just set value = "" when only exist key, It could perfectly solve the above segment scene.
> But it also break the API's behavior.
> 

What about having a new API, like `rte_kvargs_process_extended()`?

It would take an additional flag as parameter, with values like the
following to indicate whether the key expects a value or not:
ARG_MAY_HAVE_VALUE  --> "key=value" OR 'key'
ARG_WITH_VALUE      --> "key=value"
ARG_NO_VALUE        --> 'key'

The default flag can be 'ARG_MAY_HAVE_VALUE', which behaves the same as
`rte_kvargs_process()`.

This way, instead of adding checks, the relevant usages can be replaced by
`rte_kvargs_process_extended()`; this requires a similar amount of change,
but the code will be cleaner, I think.

Do you think this would work?
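To make the proposal concrete, a purely hypothetical sketch (none of these
names exist in rte_kvargs today; only the existing arg_handler_t type is
reused, and the exact naming is of course open):

	enum rte_kvargs_value_req {
		RTE_KVARGS_ARG_MAY_HAVE_VALUE,	/* "key=value" or bare "key" */
		RTE_KVARGS_ARG_WITH_VALUE,	/* "key=value" required */
		RTE_KVARGS_ARG_NO_VALUE,	/* bare "key" only */
	};

	int
	rte_kvargs_process_extended(const struct rte_kvargs *kvlist,
				    const char *key_match,
				    arg_handler_t handler, void *opaque,
				    enum rte_kvargs_value_req req);

The extended variant would return an error, without invoking the handler,
when a matched key does not satisfy 'req', so the handlers themselves would
no longer need a NULL check.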


> 
> Or continue fix the exist code (about 10+ place more),
> for new invoking, because the 'arg_handler_t' already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
> they'll take the initiative to prevent this.
> 
> 
> Hope for more advise for the next.
> 
>> .
>>


^ permalink raw reply	[relevance 0%]

* [PATCH v2 2/2] ci: test compilation with debug in GHA
  @ 2023-03-20 12:18 19%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-03-20 12:18 UTC (permalink / raw)
  To: dev; +Cc: Aaron Conole, Michael Santana

We often miss compilation issues with -O0 -g.
Switch to debug in GHA for the gcc job.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v1:
- rather than introduce a new job, updated the ABI check job
  to build with debug,

---
 .ci/linux-build.sh          | 8 +++++++-
 .github/workflows/build.yml | 3 ++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index ab0994388a..150b38bd7a 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -65,6 +65,12 @@ if [ "$RISCV64" = "true" ]; then
     cross_file=config/riscv/riscv64_linux_gcc
 fi
 
+buildtype=debugoptimized
+
+if [ "$BUILD_DEBUG" = "true" ]; then
+    buildtype=debug
+fi
+
 if [ "$BUILD_DOCS" = "true" ]; then
     OPTS="$OPTS -Denable_docs=true"
 fi
@@ -85,7 +91,7 @@ fi
 
 OPTS="$OPTS -Dplatform=generic"
 OPTS="$OPTS -Ddefault_library=$DEF_LIB"
-OPTS="$OPTS -Dbuildtype=debugoptimized"
+OPTS="$OPTS -Dbuildtype=$buildtype"
 OPTS="$OPTS -Dcheck_includes=true"
 if [ "$MINI" = "true" ]; then
     OPTS="$OPTS -Denable_drivers=net/null"
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 154be70cc1..bbcb535afb 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -18,6 +18,7 @@ jobs:
       ABI_CHECKS: ${{ contains(matrix.config.checks, 'abi') }}
       ASAN: ${{ contains(matrix.config.checks, 'asan') }}
       BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
+      BUILD_DEBUG: ${{ contains(matrix.config.checks, 'debug') }}
       BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
       CC: ccache ${{ matrix.config.compiler }}
       DEF_LIB: ${{ matrix.config.library }}
@@ -39,7 +40,7 @@ jobs:
             mini: mini
           - os: ubuntu-20.04
             compiler: gcc
-            checks: abi+doc+tests
+            checks: abi+debug+doc+tests
           - os: ubuntu-20.04
             compiler: clang
             checks: asan+doc+tests
-- 
2.39.2


^ permalink raw reply	[relevance 19%]

* [PATCH 2/2] ci: test compilation with debug
  @ 2023-03-20 10:26  5% ` David Marchand
    1 sibling, 0 replies; 200+ results
From: David Marchand @ 2023-03-20 10:26 UTC (permalink / raw)
  To: dev; +Cc: Aaron Conole, Michael Santana

We often miss compilation issues with -O0 -g.
Add a test in GHA.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 .ci/linux-build.sh          | 8 +++++++-
 .github/workflows/build.yml | 4 ++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index ab0994388a..150b38bd7a 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -65,6 +65,12 @@ if [ "$RISCV64" = "true" ]; then
     cross_file=config/riscv/riscv64_linux_gcc
 fi
 
+buildtype=debugoptimized
+
+if [ "$BUILD_DEBUG" = "true" ]; then
+    buildtype=debug
+fi
+
 if [ "$BUILD_DOCS" = "true" ]; then
     OPTS="$OPTS -Denable_docs=true"
 fi
@@ -85,7 +91,7 @@ fi
 
 OPTS="$OPTS -Dplatform=generic"
 OPTS="$OPTS -Ddefault_library=$DEF_LIB"
-OPTS="$OPTS -Dbuildtype=debugoptimized"
+OPTS="$OPTS -Dbuildtype=$buildtype"
 OPTS="$OPTS -Dcheck_includes=true"
 if [ "$MINI" = "true" ]; then
     OPTS="$OPTS -Denable_drivers=net/null"
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 154be70cc1..d90ecfc6f0 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -18,6 +18,7 @@ jobs:
       ABI_CHECKS: ${{ contains(matrix.config.checks, 'abi') }}
       ASAN: ${{ contains(matrix.config.checks, 'asan') }}
       BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
+      BUILD_DEBUG: ${{ contains(matrix.config.checks, 'debug') }}
       BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
       CC: ccache ${{ matrix.config.compiler }}
       DEF_LIB: ${{ matrix.config.library }}
@@ -37,6 +38,9 @@ jobs:
           - os: ubuntu-20.04
             compiler: gcc
             mini: mini
+          - os: ubuntu-20.04
+            compiler: gcc
+            checks: debug
           - os: ubuntu-20.04
             compiler: gcc
             checks: abi+doc+tests
-- 
2.39.2


^ permalink raw reply	[relevance 5%]

* Re: [PATCH 0/5] fix segment fault when parse args
  @ 2023-03-17  2:43  3%   ` fengchengwen
  2023-03-21 13:50  0%     ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-17  2:43 UTC (permalink / raw)
  To: Ferruh Yigit, thomas; +Cc: dev, David Marchand

On 2023/3/17 2:18, Ferruh Yigit wrote:
> On 3/14/2023 12:48 PM, Chengwen Feng wrote:
>> The rte_kvargs_process() was used to parse KV pairs, it also supports
>> to parse 'only keys' (e.g. socket_id) type. And the callback function 
>> parameter 'value' is NULL when parsed 'only keys'.
>>
>> It may leads to segment fault when parse args with 'only key', this 
>> patchset fixes rest of them.
>>
>> Chengwen Feng (5):
>>   app/pdump: fix segment fault when parse args
>>   net/memif: fix segment fault when parse devargs
>>   net/pcap: fix segment fault when parse devargs
>>   net/ring: fix segment fault when parse devargs
>>   net/sfc: fix segment fault when parse devargs
> 
> Hi Chengwen,
> 
> Did you scan all `rte_kvargs_process()` instances?

No, I was just looking at the modules I was concerned about.
I looked at it briefly, and some modules had the same problem.

> 
> 
> And if there would be a way to tell kvargs that a value is expected (or
> not) this checks could be done in kvargs layer, I think this also can be
> to look at.

Yes, a way to tell kvargs may lead to a lot of modifications and also break the ABI.
I also thought about just setting value = "" when only the key exists; it would perfectly solve the segfault scenario above.
But it would also break the API's behavior.
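For reference, the fix pattern in this patchset is just a defensive check in
each callback; a minimal sketch (the handler name and the parsed type are
only illustrative):

	#include <errno.h>
	#include <stdlib.h>

	#include <rte_common.h>
	#include <rte_kvargs.h>

	/* arg_handler_t callback: tolerate a key-only argument, where
	 * 'value' is NULL, instead of dereferencing it. */
	static int
	parse_uint_arg(const char *key __rte_unused, const char *value,
		       void *opaque)
	{
		unsigned long *out = opaque;

		if (value == NULL || value[0] == '\0')
			return -EINVAL;

		*out = strtoul(value, NULL, 0);
		return 0;
	}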


Or we continue fixing the existing code (about 10+ more places);
for new callers, because 'arg_handler_t' is already well documented (52ab17efdecf935792ee1d0cb749c0dbd536c083),
they will take care to prevent this themselves.


Hoping for more advice on the next step.

> .
> 

^ permalink raw reply	[relevance 3%]

* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
  2023-03-16 13:10  3%     ` Dongdong Liu
@ 2023-03-16 14:31  0%       ` Ivan Malov
  0 siblings, 0 replies; 200+ results
From: Ivan Malov @ 2023-03-16 14:31 UTC (permalink / raw)
  To: Dongdong Liu
  Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan,
	stable, yisen.zhuang, Jie Hai

Hi,

Thanks for responding and PSB.

On Thu, 16 Mar 2023, Dongdong Liu wrote:

> Hi Ivan
>
> Many thanks for your review.
>
> On 2023/3/15 19:28, Ivan Malov wrote:
>> Hi,
>> 
>> On Wed, 15 Mar 2023, Dongdong Liu wrote:
>> 
>>> From: Jie Hai <haijie1@huawei.com>
>>> 
>>> Currently, rte_eth_rss_conf supports configuring rss hash
>>> functions, rss key and it's length, but not rss hash algorithm.
>>> 
>>> The structure ``rte_eth_rss_conf`` is extended by adding a new field,
>>> "func". This represents the RSS algorithms to apply. The following
>>> API is affected:
>>>     - rte_eth_dev_configure
>>>     - rte_eth_dev_rss_hash_update
>>>     - rte_eth_dev_rss_hash_conf_get
>>> 
>>> To prevent configuration failures caused by incorrect func input, check
>>> this parameter in advance. If it's incorrect, a warning is generated
>>> and the default value is set. Do the same for rte_eth_dev_rss_hash_update
>>> and rte_eth_dev_configure.
>>> 
>>> To check whether the drivers report the func field, it is set to default
>>> value before querying.
>>> 
>>> Signed-off-by: Jie Hai <haijie1@huawei.com>
>>> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
>>> ---
>>> doc/guides/rel_notes/release_23_03.rst |  4 ++--
>>> lib/ethdev/rte_ethdev.c                | 18 ++++++++++++++++++
>>> lib/ethdev/rte_ethdev.h                |  5 +++++
>>> 3 files changed, 25 insertions(+), 2 deletions(-)
>>> 
>>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>>> b/doc/guides/rel_notes/release_23_03.rst
>>> index af6f37389c..7879567427 100644
>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>> @@ -284,8 +284,8 @@ ABI Changes
>>>    Also, make sure to start the actual text at the margin.
>>>    =======================================================
>>> 
>>> -* No ABI change that would break compatibility with 22.11.
>>> -
>>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for
>>> RSS hash
>>> +  algorithm.
>>> 
>>> Known Issues
>>> ------------
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index 4d03255683..db561026bd 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id,
>>> uint16_t nb_rx_q, uint16_t nb_tx_q,
>>>         goto rollback;
>>>     }
>>> 
>>> +    if (dev_conf->rx_adv_conf.rss_conf.func >=
>>> RTE_ETH_HASH_FUNCTION_MAX) {
>>> +        RTE_ETHDEV_LOG(WARNING,
>>> +            "Ethdev port_id=%u invalid rss hash function (%u),
>>> modified to default value (%u)\n",
>>> +            port_id, dev_conf->rx_adv_conf.rss_conf.func,
>>> +            RTE_ETH_HASH_FUNCTION_DEFAULT);
>>> +        dev->data->dev_conf.rx_adv_conf.rss_conf.func =
>>> +            RTE_ETH_HASH_FUNCTION_DEFAULT;
>> 
>> I have no strong opinion, but, to me, this behaviour conceals
>> programming errors. For example, if an application intends
>> to enable hash algorithm A but, due to a programming error,
>> passes a gibberish value here, chances are the error will
>> end up unnoticed. Especially in case the application
>> sets the log level to such that warnings are omitted.
> Good point, will fix.
>> 
>> Why not just return the error the standard way?
>
> Aha, The original intention is not to break the ABI,
> but I think it could not achieve that.
>> 
>>> +    }
>>> +
>>>     /* Check if Rx RSS distribution is disabled but RSS hash is
>>> enabled. */
>>>     if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
>>>         (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
>>> @@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
>>>         return -ENOTSUP;
>>>     }
>>> 
>>> +    if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
>>> +        RTE_ETHDEV_LOG(NOTICE,
>>> +            "Ethdev port_id=%u invalid rss hash function (%u),
>>> modified to default value (%u)\n",
>>> +            port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
>>> +        rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>>> +    }
>>> +
>>>     if (*dev->dev_ops->rss_hash_update == NULL)
>>>         return -ENOTSUP;
>>>     ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
>>> @@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
>>>         return -EINVAL;
>>>     }
>>> 
>>> +    rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>>> +
>>>     if (*dev->dev_ops->rss_hash_conf_get == NULL)
>>>         return -ENOTSUP;
>>>     ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index 99fe9e238b..5abe2cb36d 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -174,6 +174,7 @@ extern "C" {
>>> 
>>> #include "rte_ethdev_trace_fp.h"
>>> #include "rte_dev_info.h"
>>> +#include "rte_flow.h"
>>> 
>>> extern int rte_eth_dev_logtype;
>>> 
>>> @@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
>>>  * The *rss_hf* field of the *rss_conf* structure indicates the different
>>>  * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
>>>  * Supplying an *rss_hf* equal to zero disables the RSS feature.
>>> + *
>>> + * The *func* field of the *rss_conf* structure indicates the different
>>> + * types of hash algorithms applied by the RSS hashing.
>> 
>> Consider:
>> 
>> The *func* field of the *rss_conf* structure indicates the algorithm to
>> use when computing hash. Passing RTE_ETH_HASH_FUNCTION_DEFAULT allows
>> the PMD to use its best-effort algorithm rather than a specific one.
>
> Look at some PMD drivers(i40e, hns3 etc), it seems the 
> RTE_ETH_HASH_FUNCTION_DEFAULT consider as no rss algorithm is set.

This does not seem to contradict the suggested description.

If they, however, treat this as "no RSS at all", then
perhaps it is a mistake, because if the user requests
Rx MQ mode "RSS" and selects algorithm DEFAULT, this
is clearly not the same as "no RSS". Not by a long
shot. Because for "no RSS" the user would have
passed MQ mode choice "NONE", I take it.
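To illustrate the intended application-side usage with the proposed field
(the "func" member exists only with this patch applied; the values below
are just an example):

	#include <string.h>

	#include <rte_ethdev.h>

	static void
	configure_rss(struct rte_eth_conf *port_conf)
	{
		memset(port_conf, 0, sizeof(*port_conf));
		/* RSS itself is requested via the Rx MQ mode... */
		port_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
		port_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
		/* ...while 'func' only selects the hash algorithm; DEFAULT
		 * should therefore mean "whatever the PMD prefers", not
		 * "no RSS". */
		port_conf->rx_adv_conf.rss_conf.func =
			RTE_ETH_HASH_FUNCTION_TOEPLITZ;
	}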

>
> Thanks,
> Dongdong
>>
>>>  */
>>> struct rte_eth_rss_conf {
>>>     uint8_t *rss_key;    /**< If not NULL, 40-byte hash key. */
>>>     uint8_t rss_key_len; /**< hash key length in bytes. */
>>>     uint64_t rss_hf;     /**< Hash functions to apply - see below. */
>>> +    enum rte_eth_hash_function func;    /**< Hash algorithm to apply. */
>>> };
>>> 
>>> /*
>>> --
>>> 2.22.0
>>> 
>>> 
>> 
>> Thank you.
>> 
>> .
>> 
>

Thank you.

^ permalink raw reply	[relevance 0%]

* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
  2023-03-15 13:43  3%   ` Thomas Monjalon
@ 2023-03-16 13:16  3%     ` Dongdong Liu
  0 siblings, 0 replies; 200+ results
From: Dongdong Liu @ 2023-03-16 13:16 UTC (permalink / raw)
  To: Thomas Monjalon, Jie Hai
  Cc: dev, ferruh.yigit, andrew.rybchenko, reshma.pattan, stable,
	yisen.zhuang, david.marchand

Hi Thomas
On 2023/3/15 21:43, Thomas Monjalon wrote:
> 15/03/2023 12:00, Dongdong Liu:
>> From: Jie Hai <haijie1@huawei.com>
>> --- a/doc/guides/rel_notes/release_23_03.rst
>> +++ b/doc/guides/rel_notes/release_23_03.rst
>> -* No ABI change that would break compatibility with 22.11.
>> -
>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
>> +  algorithm.
>
> We cannot break ABI compatibility until 23.11.
Got it. Thank you for reminding.

[PATCH 3/5] and [PATCH 4/5] are not related to this ABI compatibility issue.
I will send them separately.

Thanks,
Dongdong
>
>
>
> .
>

^ permalink raw reply	[relevance 3%]

* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
  2023-03-15 11:28  0%   ` Ivan Malov
@ 2023-03-16 13:10  3%     ` Dongdong Liu
  2023-03-16 14:31  0%       ` Ivan Malov
  0 siblings, 1 reply; 200+ results
From: Dongdong Liu @ 2023-03-16 13:10 UTC (permalink / raw)
  To: Ivan Malov
  Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan,
	stable, yisen.zhuang, Jie Hai

Hi Ivan

Many thanks for your review.

On 2023/3/15 19:28, Ivan Malov wrote:
> Hi,
>
> On Wed, 15 Mar 2023, Dongdong Liu wrote:
>
>> From: Jie Hai <haijie1@huawei.com>
>>
>> Currently, rte_eth_rss_conf supports configuring rss hash
>> functions, rss key and it's length, but not rss hash algorithm.
>>
>> The structure ``rte_eth_rss_conf`` is extended by adding a new field,
>> "func". This represents the RSS algorithms to apply. The following
>> API is affected:
>>     - rte_eth_dev_configure
>>     - rte_eth_dev_rss_hash_update
>>     - rte_eth_dev_rss_hash_conf_get
>>
>> To prevent configuration failures caused by incorrect func input, check
>> this parameter in advance. If it's incorrect, a warning is generated
>> and the default value is set. Do the same for rte_eth_dev_rss_hash_update
>> and rte_eth_dev_configure.
>>
>> To check whether the drivers report the func field, it is set to default
>> value before querying.
>>
>> Signed-off-by: Jie Hai <haijie1@huawei.com>
>> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
>> ---
>> doc/guides/rel_notes/release_23_03.rst |  4 ++--
>> lib/ethdev/rte_ethdev.c                | 18 ++++++++++++++++++
>> lib/ethdev/rte_ethdev.h                |  5 +++++
>> 3 files changed, 25 insertions(+), 2 deletions(-)
>>
>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>> b/doc/guides/rel_notes/release_23_03.rst
>> index af6f37389c..7879567427 100644
>> --- a/doc/guides/rel_notes/release_23_03.rst
>> +++ b/doc/guides/rel_notes/release_23_03.rst
>> @@ -284,8 +284,8 @@ ABI Changes
>>    Also, make sure to start the actual text at the margin.
>>    =======================================================
>>
>> -* No ABI change that would break compatibility with 22.11.
>> -
>> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for
>> RSS hash
>> +  algorithm.
>>
>> Known Issues
>> ------------
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index 4d03255683..db561026bd 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id,
>> uint16_t nb_rx_q, uint16_t nb_tx_q,
>>         goto rollback;
>>     }
>>
>> +    if (dev_conf->rx_adv_conf.rss_conf.func >=
>> RTE_ETH_HASH_FUNCTION_MAX) {
>> +        RTE_ETHDEV_LOG(WARNING,
>> +            "Ethdev port_id=%u invalid rss hash function (%u),
>> modified to default value (%u)\n",
>> +            port_id, dev_conf->rx_adv_conf.rss_conf.func,
>> +            RTE_ETH_HASH_FUNCTION_DEFAULT);
>> +        dev->data->dev_conf.rx_adv_conf.rss_conf.func =
>> +            RTE_ETH_HASH_FUNCTION_DEFAULT;
>
> I have no strong opinion, but, to me, this behaviour conceals
> programming errors. For example, if an application intends
> to enable hash algorithm A but, due to a programming error,
> passes a gibberish value here, chances are the error will
> end up unnoticed. Especially in case the application
> sets the log level to such that warnings are omitted.
Good point, will fix.
>
> Why not just return the error the standard way?

Aha, the original intention was to avoid breaking the ABI,
but I think that cannot be achieved this way.
>
>> +    }
>> +
>>     /* Check if Rx RSS distribution is disabled but RSS hash is
>> enabled. */
>>     if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
>>         (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
>> @@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
>>         return -ENOTSUP;
>>     }
>>
>> +    if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
>> +        RTE_ETHDEV_LOG(NOTICE,
>> +            "Ethdev port_id=%u invalid rss hash function (%u),
>> modified to default value (%u)\n",
>> +            port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
>> +        rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>> +    }
>> +
>>     if (*dev->dev_ops->rss_hash_update == NULL)
>>         return -ENOTSUP;
>>     ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
>> @@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
>>         return -EINVAL;
>>     }
>>
>> +    rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
>> +
>>     if (*dev->dev_ops->rss_hash_conf_get == NULL)
>>         return -ENOTSUP;
>>     ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index 99fe9e238b..5abe2cb36d 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -174,6 +174,7 @@ extern "C" {
>>
>> #include "rte_ethdev_trace_fp.h"
>> #include "rte_dev_info.h"
>> +#include "rte_flow.h"
>>
>> extern int rte_eth_dev_logtype;
>>
>> @@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
>>  * The *rss_hf* field of the *rss_conf* structure indicates the different
>>  * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
>>  * Supplying an *rss_hf* equal to zero disables the RSS feature.
>> + *
>> + * The *func* field of the *rss_conf* structure indicates the different
>> + * types of hash algorithms applied by the RSS hashing.
>
> Consider:
>
> The *func* field of the *rss_conf* structure indicates the algorithm to
> use when computing hash. Passing RTE_ETH_HASH_FUNCTION_DEFAULT allows
> the PMD to use its best-effort algorithm rather than a specific one.

Looking at some PMD drivers (i40e, hns3, etc.), it seems
RTE_ETH_HASH_FUNCTION_DEFAULT is treated as if no RSS algorithm is set.

Thanks,
Dongdong
>
>>  */
>> struct rte_eth_rss_conf {
>>     uint8_t *rss_key;    /**< If not NULL, 40-byte hash key. */
>>     uint8_t rss_key_len; /**< hash key length in bytes. */
>>     uint64_t rss_hf;     /**< Hash functions to apply - see below. */
>> +    enum rte_eth_hash_function func;    /**< Hash algorithm to apply. */
>> };
>>
>> /*
>> --
>> 2.22.0
>>
>>
>
> Thank you.
>
> .
>

^ permalink raw reply	[relevance 3%]

* [RFC v2 0/2] Add high-performance timer facility
  2023-02-28  9:39  3% [RFC 0/2] Add high-performance timer facility Mattias Rönnblom
  2023-02-28 16:01  0% ` Morten Brørup
@ 2023-03-15 17:03  3% ` Mattias Rönnblom
  1 sibling, 0 replies; 200+ results
From: Mattias Rönnblom @ 2023-03-15 17:03 UTC (permalink / raw)
  To: dev
  Cc: Erik Gabriel Carrillo, David Marchand, maria.lingemark,
	Stefan Sundkvist, Stephen Hemminger, Morten Brørup,
	Tyler Retzlaff, Mattias Rönnblom

This patchset is an attempt to introduce a high-performance, highly
scalable timer facility into DPDK.

More specifically, the goals for the htimer library are:

* Efficient handling of a handful up to hundreds of thousands of
  concurrent timers.
* Make adding and canceling timers low-overhead, constant-time
  operations.
* Provide a service functionally equivalent to that of
  <rte_timer.h>. API/ABI backward compatibility is secondary.

In the author's opinion, there are two main shortcomings with the
current DPDK timer library (i.e., rte_timer.[ch]).

One is the synchronization overhead, where heavy-weight full-barrier
type synchronization is used. rte_timer.c uses per-EAL/lcore skip
lists, but any thread may add or cancel (or otherwise access) timers
managed by another lcore (and thus resides in its timer skip list).

The other is an algorithmic shortcoming, with rte_timer.c's reliance
on a skip list, which is less efficient than certain alternatives.

This patchset implements a hierarchical timer wheel (HWT, in
rte_htw.c), as per the Varghese and Lauck paper "Hashed and
Hierarchical Timing Wheels: Data Structures for the Efficient
Implementation of a Timer Facility". A HWT is a data structure
purposely designed for this task, and used by many operating system
kernel timer facilities.

To further improve the solution described by Varghese and Lauck, a
bitset is placed in front of each of the timer wheels in the HWT,
reducing the overhead of rte_htimer_mgr_manage() (i.e., progressing time
and expiry processing).

Cycle-efficient scanning and manipulation of these bitsets are crucial
for the HWT's performance.
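As a rough illustration of the idea (this is not the actual rte_htw code,
and the slot list type is hypothetical), one "occupied" bit per slot lets
expiry processing jump straight to the next non-empty slot:

	#include <stdint.h>

	#define WHEEL_SLOTS 64	/* one 64-bit word of "occupied" bits */

	struct slot_list;	/* hypothetical per-slot list of timers */

	struct wheel {
		uint64_t occupied;	/* bit i set <=> slots[i] is non-empty */
		struct slot_list *slots[WHEEL_SLOTS];
	};

	/* Next non-empty slot at or after 'pos' (pos < WHEEL_SLOTS), or -1. */
	static inline int
	wheel_next_occupied(const struct wheel *w, unsigned int pos)
	{
		uint64_t masked = w->occupied & (UINT64_MAX << pos);

		return masked != 0 ? (int)__builtin_ctzll(masked) : -1;
	}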

The htimer module keeps a per-lcore (or per-registered EAL thread) HWT
instance, much like rte_timer.c keeps a per-lcore skip list.

To avoid expensive synchronization overhead for thread-local timer
management, the HWTs are accessed only from the "owning" thread.  Any
interaction any other thread does with a particular lcore's timer
wheel goes over a set of DPDK rings. A side-effect of this design is
that all operations working toward a "remote" HWT must be
asynchronous.

The <rte_htimer.h> API is available only to EAL threads and registered
non-EAL threads.

The htimer API allows the application to supply the current time,
useful in case it already has retrieved this for other purposes,
saving the cost of a rdtsc instruction (or its equivalent).

Relative htimers do not retrieve a new time, but reuse the current
time (as known at the time of the manage call), again to shave off
some cycles of overhead.

A semantic improvement compared to the <rte_timer.h> API is that the
htimer library can give a definite answer to the question of whether the
timer expiry callback was called, after a timer has been canceled.

The patchset includes a performance test case
'timer_htimer_htw_perf_autotest', which compares rte_timer, rte_htimer
and rte_htw timers in the same scenario.

'timer_htimer_htw_perf_autotest' suggests that rte_htimer is ~3-5x
faster than rte_timer for timer/timeout-heavy applications, in a
scenario where the timer always fires. For a scenario with a mix of
canceled and expired timers, the performance difference is greater.

In scenarios with few timeouts, rte_timer has lower overhead than
htimer, but both variants consume very little CPU time.

In certain scenarios, rte_timer does not suffer from its
non-constant-time add and cancel operations. One such case is when the
timer added is always last in the list, where htimer is only ~2-3x
faster.

The bitset implementation which the HWT implementation depends upon
seemed generic-enough and potentially useful outside the world of
HWTs, to justify being located in the EAL.

This patchset is very much an RFC, and the author is yet to form an
opinion on many important issues.

* If deemed a suitable replacement, should the htimer replace the
  current DPDK timer library in some particular (ABI-breaking)
  release, or should it live side-by-side with the then-legacy
  <rte_timer.h> API? A lot of things in and outside DPDK depend on
  <rte_timer.h>, so coexistence may be required to facilitate a smooth
  transition.

* Should the htimer and htw-related files be colocated with rte_timer.c
  in the timer library?

* Would it be useful for applications using asynchronous cancel to
  have the option of having the timer callback run not only in case of
  timer expiration, but also cancellation (on the target lcore)? The
  timer cb signature would need to include an additional parameter in
  that case.

* Should the rte_htimer be a nested struct, so the htw parts be separated
  from the htimer parts?

* <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
  <rte_htw.h> may avoid a dependency on <rte_htimer_mgr.h>. Should it
  be so?

* rte_htimer struct is only supposed to be used by the application to
  give an indication of how much memory it needs to allocate, and
  its members are not supposed to be directly accessed (w/ the possible
  exception of the owner_lcore_id field). Should there be a dummy
  struct, or a #define RTE_HTIMER_MEMSIZE or a rte_htimer_get_memsize()
  function instead, serving the same purpose? Better encapsulation,
  but more inconvenient for applications. Run-time dynamic sizing
  would force application-level dynamic allocations.

* Asynchronous cancellation is a little tricky to use for the
  application (primarily due to timer memory reclamation/race
  issues). Should this functionality be removed?
  
* Should rte_htimer_mgr_init() also retrieve the current time? If so,
  there should be a variant which allows the user to specify the
  time (to match rte_htimer_mgr_manage_time()). One pitfall with the
  current proposed API is an application calling rte_htimer_mgr_init()
  and then immediately adding a timer with a relative timeout, in
  which case the current absolute time used is 0, which might be a
  surprise.

* Would the event timer adapter be best off using <rte_htw.h>
  directly, or <rte_htimer.h>? In the latter case, there needs to be a
  way to instantiate more HWTs (similar to the "alt" functions of
  <rte_timer.h>)?

* Should the PERIODICAL flag (and the complexity it brings) be
  removed? And leave the application with only single-shot timers, and
  the option to re-add them in the timer callback.

* Should the async result codes and the sync cancel error codes be merged
  into one set of result codes?

* Should the rte_htimer_mgr_async_add() have a flag which allows
  buffering add request messages until rte_htimer_mgr_process() is
  called? Or any manage function. Would reduce ring signaling overhead
  (i.e., burst enqueue operations instead of single-element
  enqueue). Could also be a rte_htimer_mgr_async_add_burst() function,
  solving the same "problem" a different way. (The signature of such
  a function would not be pretty.)

* Does the functionality provided by the rte_htimer_mgr_process()
  function match its use cases? Should there be a clearer
  separation between expiry processing and asynchronous operation
  processing?

* Should the patchset be split into more commits? If so, how?

Thanks to Erik Carrillo for his assistance.

Mattias Rönnblom (2):
  eal: add bitset type
  eal: add high-performance timer facility

 app/test/meson.build                  |  12 +-
 app/test/test_bitset.c                | 645 +++++++++++++++++++
 app/test/test_htimer_mgr.c            | 674 ++++++++++++++++++++
 app/test/test_htimer_mgr_perf.c       | 322 ++++++++++
 app/test/test_htw.c                   | 478 ++++++++++++++
 app/test/test_htw_perf.c              | 181 ++++++
 app/test/test_timer_htimer_htw_perf.c | 693 ++++++++++++++++++++
 doc/api/doxy-api-index.md             |   5 +-
 doc/api/doxy-api.conf.in              |   1 +
 lib/eal/common/meson.build            |   1 +
 lib/eal/common/rte_bitset.c           |  29 +
 lib/eal/include/meson.build           |   1 +
 lib/eal/include/rte_bitset.h          | 879 ++++++++++++++++++++++++++
 lib/eal/version.map                   |   3 +
 lib/htimer/meson.build                |   7 +
 lib/htimer/rte_htimer.h               |  68 ++
 lib/htimer/rte_htimer_mgr.c           | 547 ++++++++++++++++
 lib/htimer/rte_htimer_mgr.h           | 516 +++++++++++++++
 lib/htimer/rte_htimer_msg.h           |  44 ++
 lib/htimer/rte_htimer_msg_ring.c      |  18 +
 lib/htimer/rte_htimer_msg_ring.h      |  55 ++
 lib/htimer/rte_htw.c                  | 445 +++++++++++++
 lib/htimer/rte_htw.h                  |  49 ++
 lib/htimer/version.map                |  17 +
 lib/meson.build                       |   1 +
 25 files changed, 5689 insertions(+), 2 deletions(-)
 create mode 100644 app/test/test_bitset.c
 create mode 100644 app/test/test_htimer_mgr.c
 create mode 100644 app/test/test_htimer_mgr_perf.c
 create mode 100644 app/test/test_htw.c
 create mode 100644 app/test/test_htw_perf.c
 create mode 100644 app/test/test_timer_htimer_htw_perf.c
 create mode 100644 lib/eal/common/rte_bitset.c
 create mode 100644 lib/eal/include/rte_bitset.h
 create mode 100644 lib/htimer/meson.build
 create mode 100644 lib/htimer/rte_htimer.h
 create mode 100644 lib/htimer/rte_htimer_mgr.c
 create mode 100644 lib/htimer/rte_htimer_mgr.h
 create mode 100644 lib/htimer/rte_htimer_msg.h
 create mode 100644 lib/htimer/rte_htimer_msg_ring.c
 create mode 100644 lib/htimer/rte_htimer_msg_ring.h
 create mode 100644 lib/htimer/rte_htw.c
 create mode 100644 lib/htimer/rte_htw.h
 create mode 100644 lib/htimer/version.map

-- 
2.34.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
  2023-03-15 11:00 10% ` [PATCH 1/5] ethdev: support setting and querying rss algorithm Dongdong Liu
  2023-03-15 11:28  0%   ` Ivan Malov
@ 2023-03-15 13:43  3%   ` Thomas Monjalon
  2023-03-16 13:16  3%     ` Dongdong Liu
  1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-03-15 13:43 UTC (permalink / raw)
  To: Dongdong Liu, Jie Hai
  Cc: dev, ferruh.yigit, andrew.rybchenko, reshma.pattan, stable,
	yisen.zhuang, david.marchand

15/03/2023 12:00, Dongdong Liu:
> From: Jie Hai <haijie1@huawei.com>
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> -* No ABI change that would break compatibility with 22.11.
> -
> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
> +  algorithm.

We cannot break ABI compatibility until 23.11.




^ permalink raw reply	[relevance 3%]

* Re: [PATCH 1/5] ethdev: support setting and querying rss algorithm
  2023-03-15 11:00 10% ` [PATCH 1/5] ethdev: support setting and querying rss algorithm Dongdong Liu
@ 2023-03-15 11:28  0%   ` Ivan Malov
  2023-03-16 13:10  3%     ` Dongdong Liu
  2023-03-15 13:43  3%   ` Thomas Monjalon
  1 sibling, 1 reply; 200+ results
From: Ivan Malov @ 2023-03-15 11:28 UTC (permalink / raw)
  To: Dongdong Liu
  Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan,
	stable, yisen.zhuang, Jie Hai

Hi,

On Wed, 15 Mar 2023, Dongdong Liu wrote:

> From: Jie Hai <haijie1@huawei.com>
>
> Currently, rte_eth_rss_conf supports configuring rss hash
> functions, rss key and its length, but not rss hash algorithm.
>
> The structure ``rte_eth_rss_conf`` is extended by adding a new field,
> "func". This represents the RSS algorithms to apply. The following
> API is affected:
> 	- rte_eth_dev_configure
> 	- rte_eth_dev_rss_hash_update
> 	- rte_eth_dev_rss_hash_conf_get
>
> To prevent configuration failures caused by incorrect func input, check
> this parameter in advance. If it's incorrect, a warning is generated
> and the default value is set. Do the same for rte_eth_dev_rss_hash_update
> and rte_eth_dev_configure.
>
> To check whether the drivers report the func field, it is set to default
> value before querying.
>
> Signed-off-by: Jie Hai <haijie1@huawei.com>
> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
> ---
> doc/guides/rel_notes/release_23_03.rst |  4 ++--
> lib/ethdev/rte_ethdev.c                | 18 ++++++++++++++++++
> lib/ethdev/rte_ethdev.h                |  5 +++++
> 3 files changed, 25 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
> index af6f37389c..7879567427 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -284,8 +284,8 @@ ABI Changes
>    Also, make sure to start the actual text at the margin.
>    =======================================================
>
> -* No ABI change that would break compatibility with 22.11.
> -
> +* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
> +  algorithm.
>
> Known Issues
> ------------
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 4d03255683..db561026bd 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> 		goto rollback;
> 	}
>
> +	if (dev_conf->rx_adv_conf.rss_conf.func >= RTE_ETH_HASH_FUNCTION_MAX) {
> +		RTE_ETHDEV_LOG(WARNING,
> +			"Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
> +			port_id, dev_conf->rx_adv_conf.rss_conf.func,
> +			RTE_ETH_HASH_FUNCTION_DEFAULT);
> +		dev->data->dev_conf.rx_adv_conf.rss_conf.func =
> +			RTE_ETH_HASH_FUNCTION_DEFAULT;

I have no strong opinion, but, to me, this behaviour conceals
programming errors. For example, if an application intends
to enable hash algorithm A but, due to a programming error,
passes a gibberish value here, chances are the error will
end up unnoticed, especially if the application
sets the log level such that warnings are omitted.

Why not just return the error the standard way?
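
For illustration, the stricter alternative would look something like
the following in rte_eth_dev_configure() (a sketch only; whether to
log at ERR level and reuse the existing rollback path is of course up
to the maintainers):

  if (dev_conf->rx_adv_conf.rss_conf.func >= RTE_ETH_HASH_FUNCTION_MAX) {
          RTE_ETHDEV_LOG(ERR,
                  "Ethdev port_id=%u invalid rss hash function (%u)\n",
                  port_id, dev_conf->rx_adv_conf.rss_conf.func);
          ret = -EINVAL;
          goto rollback;
  }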

> +	}
> +
> 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
> 	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
> 	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
> @@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
> 		return -ENOTSUP;
> 	}
>
> +	if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
> +		RTE_ETHDEV_LOG(NOTICE,
> +			"Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
> +			port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
> +		rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
> +	}
> +
> 	if (*dev->dev_ops->rss_hash_update == NULL)
> 		return -ENOTSUP;
> 	ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
> @@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
> 		return -EINVAL;
> 	}
>
> +	rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
> +
> 	if (*dev->dev_ops->rss_hash_conf_get == NULL)
> 		return -ENOTSUP;
> 	ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 99fe9e238b..5abe2cb36d 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -174,6 +174,7 @@ extern "C" {
>
> #include "rte_ethdev_trace_fp.h"
> #include "rte_dev_info.h"
> +#include "rte_flow.h"
>
> extern int rte_eth_dev_logtype;
>
> @@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
>  * The *rss_hf* field of the *rss_conf* structure indicates the different
>  * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
>  * Supplying an *rss_hf* equal to zero disables the RSS feature.
> + *
> + * The *func* field of the *rss_conf* structure indicates the different
> + * types of hash algorithms applied by the RSS hashing.

Consider:

The *func* field of the *rss_conf* structure indicates the algorithm to
use when computing hash. Passing RTE_ETH_HASH_FUNCTION_DEFAULT allows
the PMD to use its best-effort algorithm rather than a specific one.

>  */
> struct rte_eth_rss_conf {
> 	uint8_t *rss_key;    /**< If not NULL, 40-byte hash key. */
> 	uint8_t rss_key_len; /**< hash key length in bytes. */
> 	uint64_t rss_hf;     /**< Hash functions to apply - see below. */
> +	enum rte_eth_hash_function func;	/**< Hash algorithm to apply. */
> };
>
> /*
> -- 
> 2.22.0
>
>

Thank you.

^ permalink raw reply	[relevance 0%]

* [PATCH 1/5] ethdev: support setting and querying rss algorithm
  @ 2023-03-15 11:00 10% ` Dongdong Liu
  2023-03-15 11:28  0%   ` Ivan Malov
  2023-03-15 13:43  3%   ` Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Dongdong Liu @ 2023-03-15 11:00 UTC (permalink / raw)
  To: dev, ferruh.yigit, thomas, andrew.rybchenko, reshma.pattan
  Cc: stable, yisen.zhuang, liudongdong3, Jie Hai

From: Jie Hai <haijie1@huawei.com>

Currently, rte_eth_rss_conf supports configuring rss hash
functions, rss key and its length, but not rss hash algorithm.

The structure ``rte_eth_rss_conf`` is extended by adding a new field,
"func". This represents the RSS algorithms to apply. The following
API is affected:
	- rte_eth_dev_configure
	- rte_eth_dev_rss_hash_update
	- rte_eth_dev_rss_hash_conf_get

To prevent configuration failures caused by incorrect func input, check
this parameter in advance. If it's incorrect, a warning is generated
and the default value is set. Do the same for rte_eth_dev_rss_hash_update
and rte_eth_dev_configure.

To check whether the drivers report the func field, it is set to default
value before querying.

Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
 doc/guides/rel_notes/release_23_03.rst |  4 ++--
 lib/ethdev/rte_ethdev.c                | 18 ++++++++++++++++++
 lib/ethdev/rte_ethdev.h                |  5 +++++
 3 files changed, 25 insertions(+), 2 deletions(-)
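
For illustration, a minimal usage sketch of the new field (not part of
the diff below; the key and rss_hf values are arbitrary, only the func
assignment is the point):

  #include <rte_ethdev.h>

  static int
  set_symmetric_rss(uint16_t port_id)
  {
          struct rte_eth_rss_conf rss_conf = {
                  .rss_key = NULL,     /* keep the currently programmed key */
                  .rss_key_len = 0,
                  .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
                  .func = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
          };
          int ret;

          ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
          if (ret != 0)
                  return ret;

          /* read back what the PMD actually programmed; drivers that do
           * not report func leave it at RTE_ETH_HASH_FUNCTION_DEFAULT */
          return rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
  }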

diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index af6f37389c..7879567427 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -284,8 +284,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
-* No ABI change that would break compatibility with 22.11.
-
+* ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
+  algorithm.
 
 Known Issues
 ------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d03255683..db561026bd 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1368,6 +1368,15 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		goto rollback;
 	}
 
+	if (dev_conf->rx_adv_conf.rss_conf.func >= RTE_ETH_HASH_FUNCTION_MAX) {
+		RTE_ETHDEV_LOG(WARNING,
+			"Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
+			port_id, dev_conf->rx_adv_conf.rss_conf.func,
+			RTE_ETH_HASH_FUNCTION_DEFAULT);
+		dev->data->dev_conf.rx_adv_conf.rss_conf.func =
+			RTE_ETH_HASH_FUNCTION_DEFAULT;
+	}
+
 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
 	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
 	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
@@ -4553,6 +4562,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
 		return -ENOTSUP;
 	}
 
+	if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
+		RTE_ETHDEV_LOG(NOTICE,
+			"Ethdev port_id=%u invalid rss hash function (%u), modified to default value (%u)\n",
+			port_id, rss_conf->func, RTE_ETH_HASH_FUNCTION_DEFAULT);
+		rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
+	}
+
 	if (*dev->dev_ops->rss_hash_update == NULL)
 		return -ENOTSUP;
 	ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
@@ -4580,6 +4596,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
 		return -EINVAL;
 	}
 
+	rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
+
 	if (*dev->dev_ops->rss_hash_conf_get == NULL)
 		return -ENOTSUP;
 	ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 99fe9e238b..5abe2cb36d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -174,6 +174,7 @@ extern "C" {
 
 #include "rte_ethdev_trace_fp.h"
 #include "rte_dev_info.h"
+#include "rte_flow.h"
 
 extern int rte_eth_dev_logtype;
 
@@ -461,11 +462,15 @@ struct rte_vlan_filter_conf {
  * The *rss_hf* field of the *rss_conf* structure indicates the different
  * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
  * Supplying an *rss_hf* equal to zero disables the RSS feature.
+ *
+ * The *func* field of the *rss_conf* structure indicates the different
+ * types of hash algorithms applied by the RSS hashing.
  */
 struct rte_eth_rss_conf {
 	uint8_t *rss_key;    /**< If not NULL, 40-byte hash key. */
 	uint8_t rss_key_len; /**< hash key length in bytes. */
 	uint64_t rss_hf;     /**< Hash functions to apply - see below. */
+	enum rte_eth_hash_function func;	/**< Hash algorithm to apply. */
 };
 
 /*
-- 
2.22.0


^ permalink raw reply	[relevance 10%]

* Re: [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf
  2023-03-09 13:10  0%             ` Bruce Richardson
@ 2023-03-13 15:51  0%               ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-03-13 15:51 UTC (permalink / raw)
  To: fengchengwen, Bruce Richardson
  Cc: dev, dev, David Marchand, Qi Zhang, Morten Brørup,
	Shijith Thotton, Olivier Matz, Ruifeng Wang, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jingjing Wu,
	Beilei Xing, Ankur Dwivedi, Anoob Joseph, Tejasree Kondoj,
	Kai Ji, Pablo de Lara, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Kevin Laatz, Pavan Nikhilesh,
	Mattias Rönnblom, Liang Ma, Peter Mccarthy, Jerin Jacob,
	Harry van Haaren, Artem V. Andreev, Andrew Rybchenko,
	Ashwin Sekhar T K, John W. Linville, Ciara Loftus, Chas Williams,
	Min Hu (Connor),
	Gaetan Rivet, Dongdong Liu, Yisen Zhuang, Konstantin Ananyev,
	Qiming Yang, Jakub Grajciar, Tetsuya Mukawa, Jakub Palider,
	Tomasz Duszynski, Sachin Saxena, Hemant Agrawal

09/03/2023 14:10, Bruce Richardson:
> On Thu, Mar 09, 2023 at 01:12:51PM +0100, Thomas Monjalon wrote:
> > 09/03/2023 12:23, fengchengwen:
> > > On 2023/3/9 15:29, Thomas Monjalon wrote:
> > > > 09/03/2023 02:43, fengchengwen:
> > > >> On 2023/3/7 0:13, Thomas Monjalon wrote:
> > > >>> --- a/doc/guides/rel_notes/release_22_11.rst
> > > >>> +++ b/doc/guides/rel_notes/release_22_11.rst
> > > >>> @@ -504,7 +504,7 @@ ABI Changes
> > > >>>    ``rte-worker-<lcore_id>`` so that DPDK can accommodate lcores higher than 99.
> > > >>>  
> > > >>>  * mbuf: Replaced ``buf_iova`` field with ``next`` field and added a new field
> > > >>> -  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_AS_PA`` is 0.
> > > >>> +  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_IN_MBUF`` is 0.
> > > >>
> > > >> Should add to release 23.03 rst.
> > > > 
> > > > Yes we could add a note in API changes.
> > > > 
> > > >> The original 22.11 still have RTE_IOVA_AS_PA definition.
> > > > 
> > > > Yes it was not a good idea to rename in the release notes.
> > > > 
> > > >>> -if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
> > > >>> -    build = false
> > > >>> -    reason = 'driver does not support disabling IOVA as PA mode'
> > > >>> +if not get_option('enable_iova_as_pa')
> > > >>>      subdir_done()
> > > >>>  endif
> > > >>
> > > >> Suggest keep original, and replace RTE_IOVA_AS_PA with RTE_IOVA_IN_MBUF:
> > > >> if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
> > > >>      subdir_done()
> > > >> endif
> > > > 
> > > > Why testing the C macro in Meson?
> > > > It looks simpler to check the Meson option in Meson.
> > > 
> > > The macro was create in meson.build: config/meson.build:319:dpdk_conf.set10('RTE_IOVA_AS_PA', get_option('enable_iova_as_pa'))
> > > It can be regarded as alias of enable_iova_as_pa.
> > 
> > It is not strictly an alias, because it can be overriden via CFLAGS.
> > 
> > > This commit was mainly used to improve comprehensibility. so we should limit the 'enable_iova_as_pa' usage scope.
> > > and the 'if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0' is more comprehensibility than 'if not get_option('enable_iova_as_pa')'
> > 
> > To me, using Meson option in Meson files is more obvious.
> > 
> > Bruce, what do you think?
> > 
> 
> I'm not sure it matters much! However, I think of the two, using the
> reference to IOVA_IN_MBUF is clearer. It also allows the same terminology
> to be used in meson and C files. If we don't want to do a dpdk_conf lookup,
> we can always assign the option to a meson variable called iova_in_mbuf.

OK I'll query the C macro in the Meson files.



^ permalink raw reply	[relevance 0%]

* Re: [PATCH] lib/hash: new feature adding existing key
  @ 2023-03-13 15:48  3%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-03-13 15:48 UTC (permalink / raw)
  To: Abdullah Ömer Yamaç; +Cc: dev, Yipeng Wang

On Mon, 13 Mar 2023 07:35:48 +0000
Abdullah Ömer Yamaç <omer.yamac@ceng.metu.edu.tr> wrote:

> diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h
> index eb2644f74b..e8b7283ec2 100644
> --- a/lib/hash/rte_cuckoo_hash.h
> +++ b/lib/hash/rte_cuckoo_hash.h
> @@ -193,6 +193,8 @@ struct rte_hash {
>  	/**< If read-write concurrency support is enabled */
>  	uint8_t ext_table_support;     /**< Enable extendable bucket table */
>  	uint8_t no_free_on_del;
> +	/**< If update is prohibited on adding same key */
> +	uint8_t no_update_data;
>  	/**< If key index should be freed on calling rte_hash_del_xxx APIs.
>  	 * If this is set, rte_hash_free_key_with_position must be called to
>  	 * free the key index associated with the deleted entry.
> diff --git a/lib/hash/rte_hash.h b/lib/hash/rte_hash.h

This ends up being an ABI change. So needs to wait for 23.11 release

^ permalink raw reply	[relevance 3%]

* Re: [PATCH] reorder: fix registration of dynamic field in mbuf
  @ 2023-03-13 10:19  3% ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-03-13 10:19 UTC (permalink / raw)
  To: Volodymyr Fialko, Reshma Pattan
  Cc: dev, Andrew Rybchenko, jerinj, anoobj, Thomas Monjalon

Hello,

On Mon, Mar 13, 2023 at 10:35 AM Volodymyr Fialko <vfialko@marvell.com> wrote:
>
> It's possible to initialize a reorder buffer with user-allocated memory via
> the rte_reorder_init() function. In such a case rte_reorder_create() is not
> required and the reorder dynamic field in rte_mbuf will not be registered.

Good catch.


>
> Fixes: 01f3496695b5 ("reorder: switch sequence number to dynamic mbuf field")

It seems worth backporting.
Cc: stable@dpdk.org

>
> Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
> ---
>  lib/reorder/rte_reorder.c | 40 ++++++++++++++++++++++++++++++---------
>  1 file changed, 31 insertions(+), 9 deletions(-)
>
> diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
> index 6e029c9e02..a759a9c434 100644
> --- a/lib/reorder/rte_reorder.c
> +++ b/lib/reorder/rte_reorder.c
> @@ -54,6 +54,28 @@ struct rte_reorder_buffer {
>  static void
>  rte_reorder_free_mbufs(struct rte_reorder_buffer *b);
>
> +static int
> +rte_reorder_dynf_register(void)
> +{
> +       int ret;
> +
> +       static const struct rte_mbuf_dynfield reorder_seqn_dynfield_desc = {
> +               .name = RTE_REORDER_SEQN_DYNFIELD_NAME,
> +               .size = sizeof(rte_reorder_seqn_t),
> +               .align = __alignof__(rte_reorder_seqn_t),
> +       };
> +
> +       if (rte_reorder_seqn_dynfield_offset > 0)
> +               return 0;
> +
> +       ret = rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc);
> +       if (ret < 0)
> +               return ret;
> +       rte_reorder_seqn_dynfield_offset = ret;
> +
> +       return 0;
> +}

We don't need this helper (see my comment below, for
rte_reorder_create), you can simply move this block to
rte_reorder_init().
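
i.e. roughly the following at the top of rte_reorder_init(), after the
parameter checks (sketch only, reusing the field values from the hunk
below; the "< 0" guard keeps repeated calls idempotent):

  if (rte_reorder_seqn_dynfield_offset < 0) {
          static const struct rte_mbuf_dynfield reorder_seqn_dynfield_desc = {
                  .name = RTE_REORDER_SEQN_DYNFIELD_NAME,
                  .size = sizeof(rte_reorder_seqn_t),
                  .align = __alignof__(rte_reorder_seqn_t),
          };

          rte_reorder_seqn_dynfield_offset =
                  rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc);
          if (rte_reorder_seqn_dynfield_offset < 0) {
                  RTE_LOG(ERR, REORDER,
                          "Failed to register mbuf field for reorder sequence number\n");
                  rte_errno = ENOMEM;
                  return NULL;
          }
  }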


> +
>  struct rte_reorder_buffer *
>  rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize,
>                 const char *name, unsigned int size)
> @@ -85,6 +107,12 @@ rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize,
>                 rte_errno = EINVAL;
>                 return NULL;
>         }
> +       if (rte_reorder_dynf_register()) {
> +               RTE_LOG(ERR, REORDER, "Failed to register mbuf field for reorder sequence"
> +                                     " number\n");
> +               rte_errno = ENOMEM;

I think returning this new errno code is fine from a ABI pov.
An application would have to check for NULL return code in any case
and can't act differently based on rte_errno value.

However, this is a small change to the rte_reorder_init API, so it
needs some update, see:

 * @return
 *   The initialized reorder buffer instance, or NULL on error
 *   On error case, rte_errno will be set appropriately:
 *    - EINVAL - invalid parameters
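
e.g. something along these lines (wording is only a suggestion):

 * @return
 *   The initialized reorder buffer instance, or NULL on error
 *   On error case, rte_errno will be set appropriately:
 *    - EINVAL - invalid parameters
 *    - ENOMEM - failed to register the reorder sequence number mbuf dynfield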



> +               return NULL;
> +       }
>
>         memset(b, 0, bufsize);
>         strlcpy(b->name, name, sizeof(b->name));
> @@ -106,11 +134,6 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
>         struct rte_reorder_list *reorder_list;
>         const unsigned int bufsize = sizeof(struct rte_reorder_buffer) +
>                                         (2 * size * sizeof(struct rte_mbuf *));
> -       static const struct rte_mbuf_dynfield reorder_seqn_dynfield_desc = {
> -               .name = RTE_REORDER_SEQN_DYNFIELD_NAME,
> -               .size = sizeof(rte_reorder_seqn_t),
> -               .align = __alignof__(rte_reorder_seqn_t),
> -       };
>
>         reorder_list = RTE_TAILQ_CAST(rte_reorder_tailq.head, rte_reorder_list);
>
> @@ -128,10 +151,9 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
>                 return NULL;
>         }
>
> -       rte_reorder_seqn_dynfield_offset =
> -               rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc);
> -       if (rte_reorder_seqn_dynfield_offset < 0) {
> -               RTE_LOG(ERR, REORDER, "Failed to register mbuf field for reorder sequence number\n");
> +       if (rte_reorder_dynf_register()) {
> +               RTE_LOG(ERR, REORDER, "Failed to register mbuf field for reorder sequence"
> +                                     " number\n");

All rte_reorder_buffer objects need to go through rte_reorder_init().
You can check rte_reorder_init() return code.
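
In other words, rte_reorder_create() could simply rely on that return
code, roughly (sketch; the exact cleanup depends on the surrounding
code):

  if (rte_reorder_init(b, bufsize, name, size) == NULL) {
          /* rte_errno has already been set by rte_reorder_init();
           * undo the allocations done above before returning */
          rte_free(b);
          return NULL;
  }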


>                 rte_errno = ENOMEM;
>                 return NULL;
>         }
> --
> 2.34.1
>


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf
  2023-03-09 12:12  0%           ` Thomas Monjalon
@ 2023-03-09 13:10  0%             ` Bruce Richardson
  2023-03-13 15:51  0%               ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-03-09 13:10 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: fengchengwen, dev, David Marchand, Qi Zhang, Morten Brørup,
	Shijith Thotton, Olivier Matz, Ruifeng Wang, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jingjing Wu,
	Beilei Xing, Ankur Dwivedi, Anoob Joseph, Tejasree Kondoj,
	Kai Ji, Pablo de Lara, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Kevin Laatz, Pavan Nikhilesh,
	Mattias Rönnblom, Liang Ma, Peter Mccarthy, Jerin Jacob,
	Harry van Haaren, Artem V. Andreev, Andrew Rybchenko,
	Ashwin Sekhar T K, John W. Linville, Ciara Loftus, Chas Williams,
	Min Hu (Connor),
	Gaetan Rivet, Dongdong Liu, Yisen Zhuang, Konstantin Ananyev,
	Qiming Yang, Jakub Grajciar, Tetsuya Mukawa, Jakub Palider,
	Tomasz Duszynski, Sachin Saxena, Hemant Agrawal

On Thu, Mar 09, 2023 at 01:12:51PM +0100, Thomas Monjalon wrote:
> 09/03/2023 12:23, fengchengwen:
> > On 2023/3/9 15:29, Thomas Monjalon wrote:
> > > 09/03/2023 02:43, fengchengwen:
> > >> On 2023/3/7 0:13, Thomas Monjalon wrote:
> > >>> --- a/doc/guides/rel_notes/release_22_11.rst
> > >>> +++ b/doc/guides/rel_notes/release_22_11.rst
> > >>> @@ -504,7 +504,7 @@ ABI Changes
> > >>>    ``rte-worker-<lcore_id>`` so that DPDK can accommodate lcores higher than 99.
> > >>>  
> > >>>  * mbuf: Replaced ``buf_iova`` field with ``next`` field and added a new field
> > >>> -  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_AS_PA`` is 0.
> > >>> +  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_IN_MBUF`` is 0.
> > >>
> > >> Should add to release 23.03 rst.
> > > 
> > > Yes we could add a note in API changes.
> > > 
> > >> The original 22.11 still have RTE_IOVA_AS_PA definition.
> > > 
> > > Yes it was not a good idea to rename in the release notes.
> > > 
> > >>> -if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
> > >>> -    build = false
> > >>> -    reason = 'driver does not support disabling IOVA as PA mode'
> > >>> +if not get_option('enable_iova_as_pa')
> > >>>      subdir_done()
> > >>>  endif
> > >>
> > >> Suggest keep original, and replace RTE_IOVA_AS_PA with RTE_IOVA_IN_MBUF:
> > >> if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
> > >>      subdir_done()
> > >> endif
> > > 
> > > Why testing the C macro in Meson?
> > > It looks simpler to check the Meson option in Meson.
> > 
> > The macro was create in meson.build: config/meson.build:319:dpdk_conf.set10('RTE_IOVA_AS_PA', get_option('enable_iova_as_pa'))
> > It can be regarded as alias of enable_iova_as_pa.
> 
> It is not strictly an alias, because it can be overriden via CFLAGS.
> 
> > This commit was mainly used to improve comprehensibility. so we should limit the 'enable_iova_as_pa' usage scope.
> > and the 'if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0' is more comprehensibility than 'if not get_option('enable_iova_as_pa')'
> 
> To me, using Meson option in Meson files is more obvious.
> 
> Bruce, what do you think?
> 

I'm not sure it matters much! However, I think of the two, using the
reference to IOVA_IN_MBUF is clearer. It also allows the same terminology
to be used in meson and C files. If we don't want to do a dpdk_conf lookup,
we can always assign the option to a meson variable called iova_in_mbuf.

/Bruce

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf
  2023-03-09 11:23  0%         ` fengchengwen
@ 2023-03-09 12:12  0%           ` Thomas Monjalon
  2023-03-09 13:10  0%             ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-03-09 12:12 UTC (permalink / raw)
  To: Bruce Richardson, fengchengwen
  Cc: dev, David Marchand, Qi Zhang, Morten Brørup,
	Shijith Thotton, Olivier Matz, Ruifeng Wang, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jingjing Wu,
	Beilei Xing, Ankur Dwivedi, Anoob Joseph, Tejasree Kondoj,
	Kai Ji, Pablo de Lara, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Kevin Laatz, Pavan Nikhilesh,
	Mattias Rönnblom, Liang Ma, Peter Mccarthy, Jerin Jacob,
	Harry van Haaren, Artem V. Andreev, Andrew Rybchenko,
	Ashwin Sekhar T K, John W. Linville, Ciara Loftus, Chas Williams,
	Min Hu (Connor),
	Gaetan Rivet, Dongdong Liu, Yisen Zhuang, Konstantin Ananyev,
	Qiming Yang, Jakub Grajciar, Tetsuya Mukawa, Jakub Palider,
	Tomasz Duszynski, Sachin Saxena, Hemant Agrawal

09/03/2023 12:23, fengchengwen:
> On 2023/3/9 15:29, Thomas Monjalon wrote:
> > 09/03/2023 02:43, fengchengwen:
> >> On 2023/3/7 0:13, Thomas Monjalon wrote:
> >>> --- a/doc/guides/rel_notes/release_22_11.rst
> >>> +++ b/doc/guides/rel_notes/release_22_11.rst
> >>> @@ -504,7 +504,7 @@ ABI Changes
> >>>    ``rte-worker-<lcore_id>`` so that DPDK can accommodate lcores higher than 99.
> >>>  
> >>>  * mbuf: Replaced ``buf_iova`` field with ``next`` field and added a new field
> >>> -  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_AS_PA`` is 0.
> >>> +  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_IN_MBUF`` is 0.
> >>
> >> Should add to release 23.03 rst.
> > 
> > Yes we could add a note in API changes.
> > 
> >> The original 22.11 still have RTE_IOVA_AS_PA definition.
> > 
> > Yes it was not a good idea to rename in the release notes.
> > 
> >>> -if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
> >>> -    build = false
> >>> -    reason = 'driver does not support disabling IOVA as PA mode'
> >>> +if not get_option('enable_iova_as_pa')
> >>>      subdir_done()
> >>>  endif
> >>
> >> Suggest keep original, and replace RTE_IOVA_AS_PA with RTE_IOVA_IN_MBUF:
> >> if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
> >>      subdir_done()
> >> endif
> > 
> > Why testing the C macro in Meson?
> > It looks simpler to check the Meson option in Meson.
> 
> The macro was create in meson.build: config/meson.build:319:dpdk_conf.set10('RTE_IOVA_AS_PA', get_option('enable_iova_as_pa'))
> It can be regarded as alias of enable_iova_as_pa.

It is not strictly an alias, because it can be overriden via CFLAGS.

> This commit was mainly used to improve comprehensibility. so we should limit the 'enable_iova_as_pa' usage scope.
> and the 'if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0' is more comprehensibility than 'if not get_option('enable_iova_as_pa')'

To me, using Meson option in Meson files is more obvious.

Bruce, what do you think?

> >> Meson build 0.63.0 already support deprecated a option by a new option.
> >> When update to the new meson verion, the drivers' meson.build will not be modified.
> > 
> > I don't understand this comment.
> 
> I mean: the option "enable_iova_as_pa" need deprecated future.

Why deprecating this option?

> Based on this, I think we should limit 'enable_iova_as_pa' usage scope, this allows us to
> reduce the amount of change effort when it's about to deprecated.

I don't plan to deprecate this option.
And in general, we should avoid deprecating a compilation option.



^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf
  2023-03-09  7:29  0%       ` Thomas Monjalon
@ 2023-03-09 11:23  0%         ` fengchengwen
  2023-03-09 12:12  0%           ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-09 11:23 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, David Marchand, Bruce Richardson, Qi Zhang,
	Morten Brørup, Shijith Thotton, Olivier Matz, Ruifeng Wang,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Jingjing Wu, Beilei Xing, Ankur Dwivedi, Anoob Joseph,
	Tejasree Kondoj, Kai Ji, Pablo de Lara, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Kevin Laatz, Pavan Nikhilesh,
	Mattias Rönnblom, Liang Ma, Peter Mccarthy, Jerin Jacob,
	Harry van Haaren, Artem V. Andreev, Andrew Rybchenko,
	Ashwin Sekhar T K, John W. Linville, Ciara Loftus, Chas Williams,
	Min Hu (Connor),
	Gaetan Rivet, Dongdong Liu, Yisen Zhuang, Konstantin Ananyev,
	Qiming Yang, Jakub Grajciar, Tetsuya Mukawa, Jakub Palider,
	Tomasz Duszynski, Sachin Saxena, Hemant Agrawal



On 2023/3/9 15:29, Thomas Monjalon wrote:
> 09/03/2023 02:43, fengchengwen:
>> On 2023/3/7 0:13, Thomas Monjalon wrote:
>>> --- a/doc/guides/rel_notes/release_22_11.rst
>>> +++ b/doc/guides/rel_notes/release_22_11.rst
>>> @@ -504,7 +504,7 @@ ABI Changes
>>>    ``rte-worker-<lcore_id>`` so that DPDK can accommodate lcores higher than 99.
>>>  
>>>  * mbuf: Replaced ``buf_iova`` field with ``next`` field and added a new field
>>> -  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_AS_PA`` is 0.
>>> +  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_IN_MBUF`` is 0.
>>
>> Should add to release 23.03 rst.
> 
> Yes we could add a note in API changes.
> 
>> The original 22.11 still have RTE_IOVA_AS_PA definition.
> 
> Yes it was not a good idea to rename in the release notes.
> 
>>> -if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
>>> -    build = false
>>> -    reason = 'driver does not support disabling IOVA as PA mode'
>>> +if not get_option('enable_iova_as_pa')
>>>      subdir_done()
>>>  endif
>>
>> Suggest keep original, and replace RTE_IOVA_AS_PA with RTE_IOVA_IN_MBUF:
>> if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
>>      subdir_done()
>> endif
> 
> Why testing the C macro in Meson?
> It looks simpler to check the Meson option in Meson.

The macro was create in meson.build: config/meson.build:319:dpdk_conf.set10('RTE_IOVA_AS_PA', get_option('enable_iova_as_pa'))
It can be regarded as alias of enable_iova_as_pa.

This commit was mainly used to improve comprehensibility. so we should limit the 'enable_iova_as_pa' usage scope.
and the 'if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0' is more comprehensibility than 'if not get_option('enable_iova_as_pa')'

> 
>> Meson build 0.63.0 already support deprecated a option by a new option.
>> When update to the new meson verion, the drivers' meson.build will not be modified.
> 
> I don't understand this comment.

I mean: the option "enable_iova_as_pa" need deprecated future.

Based on this, I think we should limit 'enable_iova_as_pa' usage scope, this allows us to
reduce the amount of change effort when it's about to deprecated.

> 
> 
> .
> 

^ permalink raw reply	[relevance 0%]

* [RFC 1/2] security: introduce out of place support for inline ingress
@ 2023-03-09  8:56  4% Nithin Dabilpuram
  2023-04-11 10:04  4% ` [PATCH 1/3] " Nithin Dabilpuram
  0 siblings, 1 reply; 200+ results
From: Nithin Dabilpuram @ 2023-03-09  8:56 UTC (permalink / raw)
  To: Thomas Monjalon, Akhil Goyal; +Cc: jerinj, dev, Nithin Dabilpuram

Similar to out of place(OOP) processing support that exists for
Lookaside crypto/security sessions, Inline ingress security
sessions may also need out of place processing in usecases
where original encrypted packet needs to be retained for post
processing. So for NIC's which have such a kind of HW support,
a new SA option is provided to indicate whether OOP needs to
be enabled on that Inline ingress security session or not.

Since for inline ingress sessions, packet is not received by
CPU until the processing is done, we can only have per-SA
option and not per-packet option like Lookaside sessions.

In order to return the original encrypted packet mbuf,
this patch adds a new mbuf dynamic field of 8B size
containing pointer to original mbuf which will be populated
for packets associated with Inline SA that has OOP enabled.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 devtools/libabigail.abignore       |  4 +++
 lib/security/rte_security.c        | 17 +++++++++++++
 lib/security/rte_security.h        | 39 +++++++++++++++++++++++++++++-
 lib/security/rte_security_driver.h |  8 ++++++
 lib/security/version.map           |  2 ++
 5 files changed, 69 insertions(+), 1 deletion(-)
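
As a usage illustration (not part of the diff below), the consumer
side would look roughly like this, using the SA option and the
accessor added by this patch:

  #include <rte_mbuf.h>
  #include <rte_security.h>

  /* at SA creation time: sess_conf.ipsec.options.ingress_oop = 1; */

  /* Rx path: pkt is a decrypted mbuf received on an OOP-enabled
   * inline inbound SA */
  static void
  handle_oop_pkt(struct rte_mbuf *pkt)
  {
          struct rte_mbuf *orig = *rte_security_oop_dynfield(pkt);

          if (orig != NULL) {
                  /* the original encrypted packet was retained for
                   * post-processing; consume it, then release it */
                  rte_pktmbuf_free(orig);
          }
  }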

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..9f52ffbf2e 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -34,3 +34,7 @@
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Temporary exceptions till next major ABI version ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; Ignore change to reserved opts for new SA option
+[suppress_type]
+       name = rte_security_ipsec_sa_options
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index e102c55e55..c2199dd8db 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -27,7 +27,10 @@
 } while (0)
 
 #define RTE_SECURITY_DYNFIELD_NAME "rte_security_dynfield_metadata"
+#define RTE_SECURITY_OOP_DYNFIELD_NAME "rte_security_oop_dynfield_metadata"
+
 int rte_security_dynfield_offset = -1;
+int rte_security_oop_dynfield_offset = -1;
 
 int
 rte_security_dynfield_register(void)
@@ -42,6 +45,20 @@ rte_security_dynfield_register(void)
 	return rte_security_dynfield_offset;
 }
 
+int
+rte_security_oop_dynfield_register(void)
+{
+	static const struct rte_mbuf_dynfield dynfield_desc = {
+		.name = RTE_SECURITY_OOP_DYNFIELD_NAME,
+		.size = sizeof(rte_security_oop_dynfield_t),
+		.align = __alignof__(rte_security_oop_dynfield_t),
+	};
+
+	rte_security_oop_dynfield_offset =
+		rte_mbuf_dynfield_register(&dynfield_desc);
+	return rte_security_oop_dynfield_offset;
+}
+
 void *
 rte_security_session_create(struct rte_security_ctx *instance,
 			    struct rte_security_session_conf *conf,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 4bacf9fcd9..866cd4e8ee 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -275,6 +275,17 @@ struct rte_security_ipsec_sa_options {
 	 */
 	uint32_t ip_reassembly_en : 1;
 
+	/** Enable out of place processing on inline inbound packets.
+	 *
+	 * * 1: Enable driver to perform Out-of-place(OOP) processing for this inline
+	 *      inbound SA if supported by driver. PMD need to register mbuf
+	 *      dynamic field using rte_security_oop_dynfield_register()
+	 *      and security session creation would fail if dynfield is not
+	 *      registered successfully.
+	 * * 0: Disable OOP processing for this session (default).
+	 */
+	uint32_t ingress_oop : 1;
+
 	/** Reserved bit fields for future extension
 	 *
 	 * User should ensure reserved_opts is cleared as it may change in
@@ -282,7 +293,7 @@ struct rte_security_ipsec_sa_options {
 	 *
 	 * Note: Reduce number of bits in reserved_opts for every new option.
 	 */
-	uint32_t reserved_opts : 17;
+	uint32_t reserved_opts : 16;
 };
 
 /** IPSec security association direction */
@@ -812,6 +823,13 @@ typedef uint64_t rte_security_dynfield_t;
 /** Dynamic mbuf field for device-specific metadata */
 extern int rte_security_dynfield_offset;
 
+/** Out-of-Place(OOP) processing field type */
+typedef struct rte_mbuf *rte_security_oop_dynfield_t;
+/** Dynamic mbuf field for pointer to original mbuf for
+ * OOP processing session.
+ */
+extern int rte_security_oop_dynfield_offset;
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
@@ -834,6 +852,25 @@ rte_security_dynfield(struct rte_mbuf *mbuf)
 		rte_security_dynfield_t *);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get pointer to mbuf field for original mbuf pointer when
+ * Out-Of-Place(OOP) processing is enabled in security session.
+ *
+ * @param       mbuf    packet to access
+ * @return pointer to mbuf field
+ */
+__rte_experimental
+static inline rte_security_oop_dynfield_t *
+rte_security_oop_dynfield(struct rte_mbuf *mbuf)
+{
+	return RTE_MBUF_DYNFIELD(mbuf,
+			rte_security_oop_dynfield_offset,
+			rte_security_oop_dynfield_t *);
+}
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index 421e6f7780..91e7786ab7 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -190,6 +190,14 @@ typedef int (*security_macsec_sa_stats_get_t)(void *device, uint16_t sa_id,
 __rte_internal
 int rte_security_dynfield_register(void);
 
+/**
+ * @internal
+ * Register mbuf dynamic field for Security inline ingress Out-of-Place(OOP)
+ * processing.
+ */
+__rte_internal
+int rte_security_oop_dynfield_register(void);
+
 /**
  * Update the mbuf with provided metadata.
  *
diff --git a/lib/security/version.map b/lib/security/version.map
index 07dcce9ffb..59a95f40bd 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -23,10 +23,12 @@ EXPERIMENTAL {
 	rte_security_macsec_sc_stats_get;
 	rte_security_session_stats_get;
 	rte_security_session_update;
+	rte_security_oop_dynfield_offset;
 };
 
 INTERNAL {
 	global:
 
 	rte_security_dynfield_register;
+	rte_security_oop_dynfield_register;
 };
-- 
2.25.1


^ permalink raw reply	[relevance 4%]

* Re: [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf
  2023-03-09  1:43  0%     ` fengchengwen
@ 2023-03-09  7:29  0%       ` Thomas Monjalon
  2023-03-09 11:23  0%         ` fengchengwen
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-03-09  7:29 UTC (permalink / raw)
  To: fengchengwen
  Cc: dev, David Marchand, Bruce Richardson, Qi Zhang,
	Morten Brørup, Shijith Thotton, Olivier Matz, Ruifeng Wang,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Jingjing Wu, Beilei Xing, Ankur Dwivedi, Anoob Joseph,
	Tejasree Kondoj, Kai Ji, Pablo de Lara, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Kevin Laatz, Pavan Nikhilesh,
	Mattias Rönnblom, Liang Ma, Peter Mccarthy, Jerin Jacob,
	Harry van Haaren, Artem V. Andreev, Andrew Rybchenko,
	Ashwin Sekhar T K, John W. Linville, Ciara Loftus, Chas Williams,
	Min Hu (Connor),
	Gaetan Rivet, Dongdong Liu, Yisen Zhuang, Konstantin Ananyev,
	Qiming Yang, Jakub Grajciar, Tetsuya Mukawa, Jakub Palider,
	Tomasz Duszynski, Sachin Saxena, Hemant Agrawal

09/03/2023 02:43, fengchengwen:
> On 2023/3/7 0:13, Thomas Monjalon wrote:
> > --- a/doc/guides/rel_notes/release_22_11.rst
> > +++ b/doc/guides/rel_notes/release_22_11.rst
> > @@ -504,7 +504,7 @@ ABI Changes
> >    ``rte-worker-<lcore_id>`` so that DPDK can accommodate lcores higher than 99.
> >  
> >  * mbuf: Replaced ``buf_iova`` field with ``next`` field and added a new field
> > -  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_AS_PA`` is 0.
> > +  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_IN_MBUF`` is 0.
> 
> Should add to release 23.03 rst.

Yes we could add a note in API changes.

> The original 22.11 still have RTE_IOVA_AS_PA definition.

Yes it was not a good idea to rename in the release notes.

> > -if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
> > -    build = false
> > -    reason = 'driver does not support disabling IOVA as PA mode'
> > +if not get_option('enable_iova_as_pa')
> >      subdir_done()
> >  endif
> 
> Suggest keep original, and replace RTE_IOVA_AS_PA with RTE_IOVA_IN_MBUF:
> if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
>      subdir_done()
> endif

Why testing the C macro in Meson?
It looks simpler to check the Meson option in Meson.

> Meson build 0.63.0 already support deprecated a option by a new option.
> When update to the new meson verion, the drivers' meson.build will not be modified.

I don't understand this comment.



^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf
  2023-03-06 16:13  2%   ` [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf Thomas Monjalon
@ 2023-03-09  1:43  0%     ` fengchengwen
  2023-03-09  7:29  0%       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-03-09  1:43 UTC (permalink / raw)
  To: Thomas Monjalon, dev
  Cc: David Marchand, Bruce Richardson, Qi Zhang, Morten Brørup,
	Shijith Thotton, Olivier Matz, Ruifeng Wang, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jingjing Wu,
	Beilei Xing, Ankur Dwivedi, Anoob Joseph, Tejasree Kondoj,
	Kai Ji, Pablo de Lara, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Kevin Laatz, Pavan Nikhilesh,
	Mattias Rönnblom, Liang Ma, Peter Mccarthy, Jerin Jacob,
	Harry van Haaren, Artem V. Andreev, Andrew Rybchenko,
	Ashwin Sekhar T K, John W. Linville, Ciara Loftus, Chas Williams,
	Min Hu (Connor),
	Gaetan Rivet, Dongdong Liu, Yisen Zhuang, Konstantin Ananyev,
	Qiming Yang, Jakub Grajciar, Tetsuya Mukawa, Jakub Palider,
	Tomasz Duszynski, Sachin Saxena, Hemant Agrawal

On 2023/3/7 0:13, Thomas Monjalon wrote:
> The impact of the option "enable_iova_as_pa" is explained for users.
> 
> Also the code flag "RTE_IOVA_AS_PA" is renamed as "RTE_IOVA_IN_MBUF"
> in order to be more accurate (IOVA mode is decided at runtime),
> and more readable in the code.
> 
> Similarly the drivers are using the variable "require_iova_in_mbuf"
> instead of "pmd_supports_disable_iova_as_pa" with an opposite meaning.
> By default, it is assumed that drivers require the IOVA field in mbuf.
> The drivers which support removing this field have to declare themselves.
> 
> If the option "enable_iova_as_pa" is disabled, the unsupported drivers
> will be listed with the new reason text "requires IOVA in mbuf".
> 
> Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---

...

>  compile_time_cpuflags = []
>  subdir(arch_subdir)
> diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
> index 91414573bd..c67c2823a2 100644
> --- a/doc/guides/rel_notes/release_22_11.rst
> +++ b/doc/guides/rel_notes/release_22_11.rst
> @@ -504,7 +504,7 @@ ABI Changes
>    ``rte-worker-<lcore_id>`` so that DPDK can accommodate lcores higher than 99.
>  
>  * mbuf: Replaced ``buf_iova`` field with ``next`` field and added a new field
> -  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_AS_PA`` is 0.
> +  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_IN_MBUF`` is 0.

Should add to release 23.03 rst.
The original 22.11 still have RTE_IOVA_AS_PA definition.

...

> diff --git a/drivers/net/hns3/meson.build b/drivers/net/hns3/meson.build
> index e1a5afa2ec..743fae9db7 100644
> --- a/drivers/net/hns3/meson.build
> +++ b/drivers/net/hns3/meson.build
> @@ -13,9 +13,7 @@ if arch_subdir != 'x86' and arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_
>      subdir_done()
>  endif
>  
> -if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
> -    build = false
> -    reason = 'driver does not support disabling IOVA as PA mode'
> +if not get_option('enable_iova_as_pa')
>      subdir_done()
>  endif

Suggest keep original, and replace RTE_IOVA_AS_PA with RTE_IOVA_IN_MBUF:
if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
     subdir_done()
endif
Meson build 0.63.0 already support deprecated a option by a new option.
When update to the new meson verion, the drivers' meson.build will not be modified.

>  
> diff --git a/drivers/net/ice/ice_rxtx_common_avx.h b/drivers/net/ice/ice_rxtx_common_avx.h
> index e69e23997f..dacb87dcb0 100644

...

^ permalink raw reply	[relevance 0%]

* RE: [PATCH v1 04/13] graph: add get/set graph worker model APIs
  2023-03-02 13:58  0%           ` Jerin Jacob
@ 2023-03-07  8:26  0%             ` Yan, Zhirun
  0 siblings, 0 replies; 200+ results
From: Yan, Zhirun @ 2023-03-07  8:26 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, kirankumark, ndabilpuram, Liang, Cunming, Wang, Haiyue



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, March 2, 2023 9:58 PM
> To: Yan, Zhirun <zhirun.yan@intel.com>
> Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>; Wang,
> Haiyue <haiyue.wang@intel.com>
> Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
> 
> On Thu, Mar 2, 2023 at 2:09 PM Yan, Zhirun <zhirun.yan@intel.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Monday, February 27, 2023 6:23 AM
> > > To: Yan, Zhirun <zhirun.yan@intel.com>
> > > Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> > > ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>;
> > > Wang, Haiyue <haiyue.wang@intel.com>
> > > Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model
> > > APIs
> > >
> > > On Fri, Feb 24, 2023 at 12:01 PM Yan, Zhirun <zhirun.yan@intel.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Monday, February 20, 2023 9:51 PM
> > > > > To: Yan, Zhirun <zhirun.yan@intel.com>
> > > > > Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> > > > > ndabilpuram@marvell.com; Liang, Cunming
> > > > > <cunming.liang@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>
> > > > > Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker
> > > > > model APIs
> > > > >
> > > > > On Thu, Nov 17, 2022 at 10:40 AM Zhirun Yan
> > > > > <zhirun.yan@intel.com>
> > > wrote:
> > > > > >
> > > > > > Add new get/set APIs to configure graph worker model which is
> > > > > > used to determine which model will be chosen.
> > > > > >
> > > > > > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > > > > > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > > > > > Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> > > > > > ---
> > > > > >  lib/graph/rte_graph_worker.h        | 51
> > > +++++++++++++++++++++++++++++
> > > > > >  lib/graph/rte_graph_worker_common.h | 13 ++++++++
> > > > > >  lib/graph/version.map               |  3 ++
> > > > > >  3 files changed, 67 insertions(+)
> > > > > >
> > > > > > diff --git a/lib/graph/rte_graph_worker.h
> > > > > > b/lib/graph/rte_graph_worker.h index 54d1390786..a0ea0df153
> > > 100644
> > > > > > --- a/lib/graph/rte_graph_worker.h
> > > > > > +++ b/lib/graph/rte_graph_worker.h
> > > > > > @@ -1,5 +1,56 @@
> > > > > >  #include "rte_graph_model_rtc.h"
> > > > > >
> > > > > > +static enum rte_graph_worker_model worker_model =
> > > > > > +RTE_GRAPH_MODEL_DEFAULT;
> > > > >
> > > > > This will break the multiprocess.
> > > >
> > > > Thanks. I will use TLS for per-thread local storage.
> > >
> > > If it needs to be used from secondary process, then it needs to be
> > > from memzone.
> > >
> >
> >
> > This filed will be set by primary process in initial stage, and then lcore will only
> read it.
> > I want to use RTE_DEFINE_PER_LCORE to define the worker model here. It
> > seems not necessary to allocate from memzone.
> >
> > >
> > >
> > > >
> > > > >
> > > > > > +
> > > > > > +/** Graph worker models */
> > > > > > +enum rte_graph_worker_model { #define WORKER_MODEL_DEFAULT
> > > > > > +"default"
> > > > >
> > > > > Why need strings?
> > > > > Also, every symbol in a public header file should start with
> > > > > RTE_ to avoid namespace conflict.
> > > >
> > > > It was used to config the model in app. I can put the string into example.
> > >
> > > OK
> > >
> > > >
> > > > >
> > > > > > +       RTE_GRAPH_MODEL_DEFAULT = 0, #define
> WORKER_MODEL_RTC
> > > > > > +"rtc"
> > > > > > +       RTE_GRAPH_MODEL_RTC,
> > > > >
> > > > > Why not RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT in
> > > enum
> > > > > itself.
> > > > Yes, will do in next version.
> > > >
> > > > >
> > > > > > +#define WORKER_MODEL_GENERIC "generic"
> > > > >
> > > > > Generic is a very overloaded term. Use pipeline here i.e
> > > > > RTE_GRAPH_MODEL_PIPELINE
> > > >
> > > > Actually, it's not a purely pipeline mode. I prefer to change to hybrid.
> > >
> > > Hybrid is very overloaded term, and it will be confusing
> > > (considering there will be new models in future).
> > > Please pick a word that really express the model working.
> > >
> >
> > In this case, the path is Node0 -> Node1 -> Node2 -> Node3 And Node1
> > and Node3 are binding with one core.
> >
> > Our model offers the ability to dispatch between cores.
> >
> > Do you think RTE_GRAPH_MODEL_DISPATCH is a good name?
> 
> Some names, What I can think of
> 
> // MCORE->MULTI CORE
> 
> RTE_GRAPH_MODEL_MCORE_PIPELINE
> or
> RTE_GRAG_MODEL_MCORE_DISPATCH
> or
> RTE_GRAG_MODEL_MCORE_RING
> or
> RTE_GRAPH_MODEL_MULTI_CORE
> 

Thanks, I will use RTE_GRAG_MODEL_MCORE_DISPATCH as the name.

> >
> > + - - - - - -+     +- - - - - - - - - - - - - +     + - - - - - -+
> > '  Core #0   '     '  Core #1       Core #1   '     '  Core #2   '
> > '            '     '                          '     '            '
> > ' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
> > ' | Node-0 | - - - ->| Node-1 |    | Node-3 |<- - - - | Node-2 | '
> > ' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
> > '            '     '     |                    '     '      ^     '
> > + - - - - - -+     +- - -|- - - - - - - - - - +     + - - -|- - -+
> >                          |                                 |
> >                          + - - - - - - - - - - - - - - - - +
> >
> >
> > > > >
> > > > >
> > > > > > +       RTE_GRAPH_MODEL_GENERIC,
> > > > > > +       RTE_GRAPH_MODEL_MAX,
> > > > >
> > > > > No need for MAX, it will break the ABI for future. See other
> > > > > subsystem such as cryptodev.
> > > >
> > > > Thanks, I will change it.
> > > > >
> > > > > > +};
> > > > >
> > > > > >

^ permalink raw reply	[relevance 0%]

* [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf
  @ 2023-03-06 16:13  2%   ` Thomas Monjalon
  2023-03-09  1:43  0%     ` fengchengwen
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-03-06 16:13 UTC (permalink / raw)
  To: dev
  Cc: David Marchand, Bruce Richardson, Qi Zhang, Morten Brørup,
	Shijith Thotton, Olivier Matz, Ruifeng Wang, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jingjing Wu,
	Beilei Xing, Ankur Dwivedi, Anoob Joseph, Tejasree Kondoj,
	Kai Ji, Pablo de Lara, Radha Mohan Chintakuntla,
	Veerasenareddy Burru, Chengwen Feng, Kevin Laatz,
	Pavan Nikhilesh, Mattias Rönnblom, Liang Ma, Peter Mccarthy,
	Jerin Jacob, Harry van Haaren, Artem V. Andreev,
	Andrew Rybchenko, Ashwin Sekhar T K, John W. Linville,
	Ciara Loftus, Chas Williams, Min Hu (Connor),
	Gaetan Rivet, Dongdong Liu, Yisen Zhuang, Konstantin Ananyev,
	Qiming Yang, Jakub Grajciar, Tetsuya Mukawa, Jakub Palider,
	Tomasz Duszynski, Sachin Saxena, Hemant Agrawal

The impact of the option "enable_iova_as_pa" is explained for users.

Also the code flag "RTE_IOVA_AS_PA" is renamed as "RTE_IOVA_IN_MBUF"
in order to be more accurate (IOVA mode is decided at runtime),
and more readable in the code.

Similarly the drivers are using the variable "require_iova_in_mbuf"
instead of "pmd_supports_disable_iova_as_pa" with an opposite meaning.
By default, it is assumed that drivers require the IOVA field in mbuf.
The drivers which support removing this field have to declare themselves.

If the option "enable_iova_as_pa" is disabled, the unsupported drivers
will be listed with the new reason text "requires IOVA in mbuf".

Suggested-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
 app/test/test_mbuf.c                   |  2 +-
 config/arm/meson.build                 |  4 ++--
 config/meson.build                     |  2 +-
 doc/guides/rel_notes/release_22_11.rst |  2 +-
 drivers/common/cnxk/meson.build        |  2 +-
 drivers/common/iavf/meson.build        |  2 +-
 drivers/crypto/armv8/meson.build       |  2 +-
 drivers/crypto/cnxk/meson.build        |  2 +-
 drivers/crypto/ipsec_mb/meson.build    |  2 +-
 drivers/crypto/null/meson.build        |  2 +-
 drivers/crypto/openssl/meson.build     |  2 +-
 drivers/dma/cnxk/meson.build           |  2 +-
 drivers/dma/skeleton/meson.build       |  2 +-
 drivers/event/cnxk/meson.build         |  2 +-
 drivers/event/dsw/meson.build          |  2 +-
 drivers/event/opdl/meson.build         |  2 +-
 drivers/event/skeleton/meson.build     |  2 +-
 drivers/event/sw/meson.build           |  2 +-
 drivers/mempool/bucket/meson.build     |  2 +-
 drivers/mempool/cnxk/meson.build       |  2 +-
 drivers/mempool/ring/meson.build       |  2 +-
 drivers/mempool/stack/meson.build      |  2 +-
 drivers/meson.build                    |  6 +++---
 drivers/net/af_packet/meson.build      |  2 +-
 drivers/net/af_xdp/meson.build         |  2 +-
 drivers/net/bonding/meson.build        |  2 +-
 drivers/net/cnxk/meson.build           |  2 +-
 drivers/net/failsafe/meson.build       |  2 +-
 drivers/net/hns3/meson.build           |  4 +---
 drivers/net/ice/ice_rxtx_common_avx.h  | 12 ++++++------
 drivers/net/ice/ice_rxtx_vec_sse.c     |  4 ++--
 drivers/net/ice/meson.build            |  2 +-
 drivers/net/memif/meson.build          |  2 +-
 drivers/net/null/meson.build           |  2 +-
 drivers/net/pcap/meson.build           |  2 +-
 drivers/net/ring/meson.build           |  2 +-
 drivers/net/tap/meson.build            |  2 +-
 drivers/raw/cnxk_bphy/meson.build      |  2 +-
 drivers/raw/cnxk_gpio/meson.build      |  2 +-
 drivers/raw/skeleton/meson.build       |  2 +-
 lib/eal/linux/eal.c                    |  2 +-
 lib/mbuf/rte_mbuf.c                    |  2 +-
 lib/mbuf/rte_mbuf.h                    |  4 ++--
 lib/mbuf/rte_mbuf_core.h               |  8 ++++----
 lib/mbuf/rte_mbuf_dyn.c                |  2 +-
 lib/meson.build                        |  2 +-
 meson_options.txt                      |  2 +-
 47 files changed, 60 insertions(+), 62 deletions(-)

diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 6cbb03b0af..81a6632d11 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -1232,7 +1232,7 @@ test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
 		return -1;
 	}
 
-	if (RTE_IOVA_AS_PA) {
+	if (RTE_IOVA_IN_MBUF) {
 		badbuf = *buf;
 		rte_mbuf_iova_set(&badbuf, 0);
 		if (verify_mbuf_check_panics(&badbuf)) {
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 451dbada7d..5ff66248de 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -319,7 +319,7 @@ soc_cn10k = {
         ['RTE_MAX_LCORE', 24],
         ['RTE_MAX_NUMA_NODES', 1],
         ['RTE_MEMPOOL_ALIGN', 128],
-        ['RTE_IOVA_AS_PA', 0]
+        ['RTE_IOVA_IN_MBUF', 0]
     ],
     'part_number': '0xd49',
     'extra_march_features': ['crypto'],
@@ -412,7 +412,7 @@ soc_cn9k = {
     'part_number': '0xb2',
     'numa': false,
     'flags': [
-        ['RTE_IOVA_AS_PA', 0]
+        ['RTE_IOVA_IN_MBUF', 0]
     ]
 }
 
diff --git a/config/meson.build b/config/meson.build
index fc3ac99a32..fa730a1b14 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -316,7 +316,7 @@ endif
 if get_option('mbuf_refcnt_atomic')
     dpdk_conf.set('RTE_MBUF_REFCNT_ATOMIC', true)
 endif
-dpdk_conf.set10('RTE_IOVA_AS_PA', get_option('enable_iova_as_pa'))
+dpdk_conf.set10('RTE_IOVA_IN_MBUF', get_option('enable_iova_as_pa'))
 
 compile_time_cpuflags = []
 subdir(arch_subdir)
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 91414573bd..c67c2823a2 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -504,7 +504,7 @@ ABI Changes
   ``rte-worker-<lcore_id>`` so that DPDK can accommodate lcores higher than 99.
 
 * mbuf: Replaced ``buf_iova`` field with ``next`` field and added a new field
-  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_AS_PA`` is 0.
+  ``dynfield2`` at its place in second cacheline if ``RTE_IOVA_IN_MBUF`` is 0.
 
 * ethdev: enum ``RTE_FLOW_ITEM`` was affected by deprecation procedure.
 
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 849735921c..ce71f3d70c 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -87,4 +87,4 @@ sources += files('cnxk_telemetry_bphy.c',
 )
 
 deps += ['bus_pci', 'net', 'telemetry']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/common/iavf/meson.build b/drivers/common/iavf/meson.build
index af8a4983e0..af26955772 100644
--- a/drivers/common/iavf/meson.build
+++ b/drivers/common/iavf/meson.build
@@ -6,4 +6,4 @@ sources = files('iavf_adminq.c', 'iavf_common.c', 'iavf_impl.c')
 if cc.has_argument('-Wno-pointer-to-int-cast')
         cflags += '-Wno-pointer-to-int-cast'
 endif
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/crypto/armv8/meson.build b/drivers/crypto/armv8/meson.build
index 700fb80eb2..a735eb511c 100644
--- a/drivers/crypto/armv8/meson.build
+++ b/drivers/crypto/armv8/meson.build
@@ -17,4 +17,4 @@ endif
 ext_deps += dep
 deps += ['bus_vdev']
 sources = files('rte_armv8_pmd.c', 'rte_armv8_pmd_ops.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/crypto/cnxk/meson.build b/drivers/crypto/cnxk/meson.build
index a5acabab2b..3d9a0dbbf0 100644
--- a/drivers/crypto/cnxk/meson.build
+++ b/drivers/crypto/cnxk/meson.build
@@ -32,4 +32,4 @@ else
     cflags += [ '-ULA_IPSEC_DEBUG','-UCNXK_CRYPTODEV_DEBUG' ]
 endif
 
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/crypto/ipsec_mb/meson.build b/drivers/crypto/ipsec_mb/meson.build
index ec147d2110..3057e6fd10 100644
--- a/drivers/crypto/ipsec_mb/meson.build
+++ b/drivers/crypto/ipsec_mb/meson.build
@@ -41,4 +41,4 @@ sources = files(
         'pmd_zuc.c',
 )
 deps += ['bus_vdev', 'net', 'security']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/crypto/null/meson.build b/drivers/crypto/null/meson.build
index 59a7508f18..2e8b05ad28 100644
--- a/drivers/crypto/null/meson.build
+++ b/drivers/crypto/null/meson.build
@@ -9,4 +9,4 @@ endif
 
 deps += 'bus_vdev'
 sources = files('null_crypto_pmd.c', 'null_crypto_pmd_ops.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/crypto/openssl/meson.build b/drivers/crypto/openssl/meson.build
index d165c32ae8..1ec63c216d 100644
--- a/drivers/crypto/openssl/meson.build
+++ b/drivers/crypto/openssl/meson.build
@@ -15,4 +15,4 @@ endif
 deps += 'bus_vdev'
 sources = files('rte_openssl_pmd.c', 'rte_openssl_pmd_ops.c')
 ext_deps += dep
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/dma/cnxk/meson.build b/drivers/dma/cnxk/meson.build
index 252e5ff78b..b868fb14cb 100644
--- a/drivers/dma/cnxk/meson.build
+++ b/drivers/dma/cnxk/meson.build
@@ -3,4 +3,4 @@
 
 deps += ['bus_pci', 'common_cnxk', 'dmadev']
 sources = files('cnxk_dmadev.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/dma/skeleton/meson.build b/drivers/dma/skeleton/meson.build
index 2b0422ce61..77055683ad 100644
--- a/drivers/dma/skeleton/meson.build
+++ b/drivers/dma/skeleton/meson.build
@@ -5,4 +5,4 @@ deps += ['dmadev', 'kvargs', 'ring', 'bus_vdev']
 sources = files(
         'skeleton_dmadev.c',
 )
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index aa42ab3a90..3517e79341 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -479,4 +479,4 @@ foreach flag: extra_flags
 endforeach
 
 deps += ['bus_pci', 'common_cnxk', 'net_cnxk', 'crypto_cnxk']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/event/dsw/meson.build b/drivers/event/dsw/meson.build
index e6808c0f71..01af94165f 100644
--- a/drivers/event/dsw/meson.build
+++ b/drivers/event/dsw/meson.build
@@ -6,4 +6,4 @@ if cc.has_argument('-Wno-format-nonliteral')
     cflags += '-Wno-format-nonliteral'
 endif
 sources = files('dsw_evdev.c', 'dsw_event.c', 'dsw_xstats.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/event/opdl/meson.build b/drivers/event/opdl/meson.build
index 7abef44609..8613b2a746 100644
--- a/drivers/event/opdl/meson.build
+++ b/drivers/event/opdl/meson.build
@@ -9,4 +9,4 @@ sources = files(
         'opdl_test.c',
 )
 deps += ['bus_vdev']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/event/skeleton/meson.build b/drivers/event/skeleton/meson.build
index fa6a5e0a9f..6e788cfcee 100644
--- a/drivers/event/skeleton/meson.build
+++ b/drivers/event/skeleton/meson.build
@@ -3,4 +3,4 @@
 
 sources = files('skeleton_eventdev.c')
 deps += ['bus_pci', 'bus_vdev']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/event/sw/meson.build b/drivers/event/sw/meson.build
index 8d815dfa84..3a3ebd72a3 100644
--- a/drivers/event/sw/meson.build
+++ b/drivers/event/sw/meson.build
@@ -9,4 +9,4 @@ sources = files(
         'sw_evdev.c',
 )
 deps += ['hash', 'bus_vdev']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/mempool/bucket/meson.build b/drivers/mempool/bucket/meson.build
index 94c060904b..d0ec523237 100644
--- a/drivers/mempool/bucket/meson.build
+++ b/drivers/mempool/bucket/meson.build
@@ -12,4 +12,4 @@ if is_windows
 endif
 
 sources = files('rte_mempool_bucket.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
index d8bcc41ca0..50856ecde8 100644
--- a/drivers/mempool/cnxk/meson.build
+++ b/drivers/mempool/cnxk/meson.build
@@ -17,4 +17,4 @@ sources = files(
 )
 
 deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/mempool/ring/meson.build b/drivers/mempool/ring/meson.build
index 65d203d4b7..a25e9ebc16 100644
--- a/drivers/mempool/ring/meson.build
+++ b/drivers/mempool/ring/meson.build
@@ -2,4 +2,4 @@
 # Copyright(c) 2017 Intel Corporation
 
 sources = files('rte_mempool_ring.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index 961e90fc04..95f69042ae 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -4,4 +4,4 @@
 sources = files('rte_mempool_stack.c')
 
 deps += ['stack']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/meson.build b/drivers/meson.build
index 0618c31a69..2aefa146a7 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -109,7 +109,7 @@ foreach subpath:subdirs
         ext_deps = []
         pkgconfig_extra_libs = []
         testpmd_sources = []
-        pmd_supports_disable_iova_as_pa = false
+        require_iova_in_mbuf = true
 
         if not enable_drivers.contains(drv_path)
             build = false
@@ -127,9 +127,9 @@ foreach subpath:subdirs
             # pull in driver directory which should update all the local variables
             subdir(drv_path)
 
-            if dpdk_conf.get('RTE_IOVA_AS_PA') == 0 and not pmd_supports_disable_iova_as_pa and not always_enable.contains(drv_path)
+            if not get_option('enable_iova_as_pa') and require_iova_in_mbuf and not always_enable.contains(drv_path)
                 build = false
-                reason = 'driver does not support disabling IOVA as PA mode'
+                reason = 'requires IOVA in mbuf'
             endif
 
             # get dependency objs from strings
diff --git a/drivers/net/af_packet/meson.build b/drivers/net/af_packet/meson.build
index bab008d083..f45e4491d4 100644
--- a/drivers/net/af_packet/meson.build
+++ b/drivers/net/af_packet/meson.build
@@ -6,4 +6,4 @@ if not is_linux
     reason = 'only supported on Linux'
 endif
 sources = files('rte_eth_af_packet.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/af_xdp/meson.build b/drivers/net/af_xdp/meson.build
index 979b914bb6..9a8dbb4d49 100644
--- a/drivers/net/af_xdp/meson.build
+++ b/drivers/net/af_xdp/meson.build
@@ -71,4 +71,4 @@ if build
   endif
 endif
 
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/bonding/meson.build b/drivers/net/bonding/meson.build
index 29022712cb..83326c0d63 100644
--- a/drivers/net/bonding/meson.build
+++ b/drivers/net/bonding/meson.build
@@ -22,4 +22,4 @@ deps += 'sched' # needed for rte_bitmap.h
 deps += ['ip_frag']
 
 headers = files('rte_eth_bond.h', 'rte_eth_bond_8023ad.h')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index c7ca24d437..c1da121a15 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -195,4 +195,4 @@ foreach flag: extra_flags
 endforeach
 
 headers = files('rte_pmd_cnxk.h')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/failsafe/meson.build b/drivers/net/failsafe/meson.build
index bf8f791984..513de17535 100644
--- a/drivers/net/failsafe/meson.build
+++ b/drivers/net/failsafe/meson.build
@@ -27,4 +27,4 @@ sources = files(
         'failsafe_ops.c',
         'failsafe_rxtx.c',
 )
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/hns3/meson.build b/drivers/net/hns3/meson.build
index e1a5afa2ec..743fae9db7 100644
--- a/drivers/net/hns3/meson.build
+++ b/drivers/net/hns3/meson.build
@@ -13,9 +13,7 @@ if arch_subdir != 'x86' and arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_
     subdir_done()
 endif
 
-if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
-    build = false
-    reason = 'driver does not support disabling IOVA as PA mode'
+if not get_option('enable_iova_as_pa')
     subdir_done()
 endif
 
diff --git a/drivers/net/ice/ice_rxtx_common_avx.h b/drivers/net/ice/ice_rxtx_common_avx.h
index e69e23997f..dacb87dcb0 100644
--- a/drivers/net/ice/ice_rxtx_common_avx.h
+++ b/drivers/net/ice/ice_rxtx_common_avx.h
@@ -54,7 +54,7 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 		mb0 = rxep[0].mbuf;
 		mb1 = rxep[1].mbuf;
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
 		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
 				offsetof(struct rte_mbuf, buf_addr) + 8);
@@ -62,7 +62,7 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
 		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 		/* convert pa to dma_addr hdr/data */
 		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
 		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
@@ -105,7 +105,7 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 			mb6 = rxep[6].mbuf;
 			mb7 = rxep[7].mbuf;
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 			/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
 			RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
 					offsetof(struct rte_mbuf, buf_addr) + 8);
@@ -142,7 +142,7 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 				_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
 						   vaddr6_7, 1);
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 			/* convert pa to dma_addr hdr/data */
 			dma_addr0_3 = _mm512_unpackhi_epi64(vaddr0_3, vaddr0_3);
 			dma_addr4_7 = _mm512_unpackhi_epi64(vaddr4_7, vaddr4_7);
@@ -177,7 +177,7 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 			mb2 = rxep[2].mbuf;
 			mb3 = rxep[3].mbuf;
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 			/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
 			RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
 					offsetof(struct rte_mbuf, buf_addr) + 8);
@@ -198,7 +198,7 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 				_mm256_inserti128_si256(_mm256_castsi128_si256(vaddr2),
 							vaddr3, 1);
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 			/* convert pa to dma_addr hdr/data */
 			dma_addr0_1 = _mm256_unpackhi_epi64(vaddr0_1, vaddr0_1);
 			dma_addr2_3 = _mm256_unpackhi_epi64(vaddr2_3, vaddr2_3);
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 72dfd58308..71fdd6ffb5 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -68,7 +68,7 @@ ice_rxq_rearm(struct ice_rx_queue *rxq)
 		mb0 = rxep[0].mbuf;
 		mb1 = rxep[1].mbuf;
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
 		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
 				 offsetof(struct rte_mbuf, buf_addr) + 8);
@@ -76,7 +76,7 @@ ice_rxq_rearm(struct ice_rx_queue *rxq)
 		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
 		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 		/* convert pa to dma_addr hdr/data */
 		dma_addr0 = _mm_unpackhi_epi64(vaddr0, vaddr0);
 		dma_addr1 = _mm_unpackhi_epi64(vaddr1, vaddr1);
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 123b190f72..5e90afcb9b 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -78,4 +78,4 @@ sources += files(
         'ice_dcf_parent.c',
         'ice_dcf_sched.c',
 )
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/memif/meson.build b/drivers/net/memif/meson.build
index 28416a982f..b890984b46 100644
--- a/drivers/net/memif/meson.build
+++ b/drivers/net/memif/meson.build
@@ -12,4 +12,4 @@ sources = files(
 )
 
 deps += ['hash']
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/null/meson.build b/drivers/net/null/meson.build
index 4a483955a7..076b9937c1 100644
--- a/drivers/net/null/meson.build
+++ b/drivers/net/null/meson.build
@@ -8,4 +8,4 @@ if is_windows
 endif
 
 sources = files('rte_eth_null.c')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/pcap/meson.build b/drivers/net/pcap/meson.build
index a5a2971f0e..de2a70ef0b 100644
--- a/drivers/net/pcap/meson.build
+++ b/drivers/net/pcap/meson.build
@@ -15,4 +15,4 @@ ext_deps += pcap_dep
 if is_windows
     ext_deps += cc.find_library('iphlpapi', required: true)
 endif
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/ring/meson.build b/drivers/net/ring/meson.build
index 72792e26b0..2cd0e97e56 100644
--- a/drivers/net/ring/meson.build
+++ b/drivers/net/ring/meson.build
@@ -9,4 +9,4 @@ endif
 
 sources = files('rte_eth_ring.c')
 headers = files('rte_eth_ring.h')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/net/tap/meson.build b/drivers/net/tap/meson.build
index 4c9a9eac2b..b07ce68e48 100644
--- a/drivers/net/tap/meson.build
+++ b/drivers/net/tap/meson.build
@@ -35,4 +35,4 @@ foreach arg:args
     config.set(arg[0], cc.has_header_symbol(arg[1], arg[2]))
 endforeach
 configure_file(output : 'tap_autoconf.h', configuration : config)
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/raw/cnxk_bphy/meson.build b/drivers/raw/cnxk_bphy/meson.build
index ffb0ee6b7e..bb5d2ffb80 100644
--- a/drivers/raw/cnxk_bphy/meson.build
+++ b/drivers/raw/cnxk_bphy/meson.build
@@ -10,4 +10,4 @@ sources = files(
         'cnxk_bphy_irq.c',
 )
 headers = files('rte_pmd_bphy.h')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/raw/cnxk_gpio/meson.build b/drivers/raw/cnxk_gpio/meson.build
index f52a7be9eb..9d9a527392 100644
--- a/drivers/raw/cnxk_gpio/meson.build
+++ b/drivers/raw/cnxk_gpio/meson.build
@@ -9,4 +9,4 @@ sources = files(
         'cnxk_gpio_selftest.c',
 )
 headers = files('rte_pmd_cnxk_gpio.h')
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/drivers/raw/skeleton/meson.build b/drivers/raw/skeleton/meson.build
index bfb8fd8bcc..9d5fcf6514 100644
--- a/drivers/raw/skeleton/meson.build
+++ b/drivers/raw/skeleton/meson.build
@@ -6,4 +6,4 @@ sources = files(
         'skeleton_rawdev.c',
         'skeleton_rawdev_test.c',
 )
-pmd_supports_disable_iova_as_pa = true
+require_iova_in_mbuf = false
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index fabafbc39b..e39b6643ee 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1134,7 +1134,7 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	if (rte_eal_iova_mode() == RTE_IOVA_PA && !RTE_IOVA_AS_PA) {
+	if (rte_eal_iova_mode() == RTE_IOVA_PA && !RTE_IOVA_IN_MBUF) {
 		rte_eal_init_alert("Cannot use IOVA as 'PA' as it is disabled during build");
 		rte_errno = EINVAL;
 		return -1;
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index cfd8062f1e..686e797c80 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -388,7 +388,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
 		*reason = "bad mbuf pool";
 		return -1;
 	}
-	if (RTE_IOVA_AS_PA && rte_mbuf_iova_get(m) == 0) {
+	if (RTE_IOVA_IN_MBUF && rte_mbuf_iova_get(m) == 0) {
 		*reason = "bad IO addr";
 		return -1;
 	}
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 3a82eb136d..bc41eac10d 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -146,7 +146,7 @@ static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);
 static inline rte_iova_t
 rte_mbuf_iova_get(const struct rte_mbuf *m)
 {
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 	return m->buf_iova;
 #else
 	return (rte_iova_t)m->buf_addr;
@@ -164,7 +164,7 @@ rte_mbuf_iova_get(const struct rte_mbuf *m)
 static inline void
 rte_mbuf_iova_set(struct rte_mbuf *m, rte_iova_t iova)
 {
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 	m->buf_iova = iova;
 #else
 	RTE_SET_USED(m);
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index a30e1e0eaf..dfffb6e5e6 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -466,11 +466,11 @@ struct rte_mbuf {
 	RTE_MARKER cacheline0;
 
 	void *buf_addr;           /**< Virtual address of segment buffer. */
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 	/**
 	 * Physical address of segment buffer.
 	 * This field is undefined if the build is configured to use only
-	 * virtual address as IOVA (i.e. RTE_IOVA_AS_PA is 0).
+	 * virtual address as IOVA (i.e. RTE_IOVA_IN_MBUF is 0).
 	 * Force alignment to 8-bytes, so as to ensure we have the exact
 	 * same mbuf cacheline0 layout for 32-bit and 64-bit. This makes
 	 * working on vector drivers easier.
@@ -599,7 +599,7 @@ struct rte_mbuf {
 	/* second cache line - fields only used in slow path or on TX */
 	RTE_MARKER cacheline1 __rte_cache_min_aligned;
 
-#if RTE_IOVA_AS_PA
+#if RTE_IOVA_IN_MBUF
 	/**
 	 * Next segment of scattered packet. Must be NULL in the last
 	 * segment or in case of non-segmented packet.
@@ -608,7 +608,7 @@ struct rte_mbuf {
 #else
 	/**
 	 * Reserved for dynamic fields
-	 * when the next pointer is in first cache line (i.e. RTE_IOVA_AS_PA is 0).
+	 * when the next pointer is in first cache line (i.e. RTE_IOVA_IN_MBUF is 0).
 	 */
 	uint64_t dynfield2;
 #endif
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index 35839e938c..5049508bea 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -128,7 +128,7 @@ init_shared_mem(void)
 		 */
 		memset(shm, 0, sizeof(*shm));
 		mark_free(dynfield1);
-#if !RTE_IOVA_AS_PA
+#if !RTE_IOVA_IN_MBUF
 		mark_free(dynfield2);
 #endif
 
diff --git a/lib/meson.build b/lib/meson.build
index 2bc0932ad5..fc7abd4aa3 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -93,7 +93,7 @@ dpdk_libs_deprecated += [
 disabled_libs = []
 opt_disabled_libs = run_command(list_dir_globs, get_option('disable_libs'),
         check: true).stdout().split()
-if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+if not get_option('enable_iova_as_pa')
     opt_disabled_libs += ['kni']
 endif
 foreach l:opt_disabled_libs
diff --git a/meson_options.txt b/meson_options.txt
index 08528492f7..82c8297065 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -41,7 +41,7 @@ option('max_lcores', type: 'string', value: 'default', description:
 option('max_numa_nodes', type: 'string', value: 'default', description:
        'Set the highest NUMA node supported by EAL; "default" is different per-arch, "detect" detects the highest NUMA node on the build machine.')
 option('enable_iova_as_pa', type: 'boolean', value: true, description:
-       'Support for IOVA as physical address. Disabling removes the buf_iova field of mbuf.')
+       'Support the use of physical addresses for IO addresses, such as used by UIO or VFIO in no-IOMMU mode. When disabled, DPDK can only run with IOMMU support for address mappings, but will have more space available in the mbuf structure.')
 option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
        'Atomically access the mbuf refcnt.')
 option('platform', type: 'string', value: 'native', description:
-- 
2.39.1


^ permalink raw reply	[relevance 2%]

* Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
  2023-03-02  8:38  0%         ` Yan, Zhirun
@ 2023-03-02 13:58  0%           ` Jerin Jacob
  2023-03-07  8:26  0%             ` Yan, Zhirun
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-03-02 13:58 UTC (permalink / raw)
  To: Yan, Zhirun
  Cc: dev, jerinj, kirankumark, ndabilpuram, Liang, Cunming, Wang, Haiyue

On Thu, Mar 2, 2023 at 2:09 PM Yan, Zhirun <zhirun.yan@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Monday, February 27, 2023 6:23 AM
> > To: Yan, Zhirun <zhirun.yan@intel.com>
> > Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> > ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>;
> > Wang, Haiyue <haiyue.wang@intel.com>
> > Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
> >
> > On Fri, Feb 24, 2023 at 12:01 PM Yan, Zhirun <zhirun.yan@intel.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Monday, February 20, 2023 9:51 PM
> > > > To: Yan, Zhirun <zhirun.yan@intel.com>
> > > > Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> > > > ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>;
> > > > Wang, Haiyue <haiyue.wang@intel.com>
> > > > Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model
> > > > APIs
> > > >
> > > > On Thu, Nov 17, 2022 at 10:40 AM Zhirun Yan <zhirun.yan@intel.com>
> > wrote:
> > > > >
> > > > > Add new get/set APIs to configure graph worker model which is used
> > > > > to determine which model will be chosen.
> > > > >
> > > > > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > > > > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > > > > Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> > > > > ---
> > > > >  lib/graph/rte_graph_worker.h        | 51
> > +++++++++++++++++++++++++++++
> > > > >  lib/graph/rte_graph_worker_common.h | 13 ++++++++
> > > > >  lib/graph/version.map               |  3 ++
> > > > >  3 files changed, 67 insertions(+)
> > > > >
> > > > > diff --git a/lib/graph/rte_graph_worker.h
> > > > > b/lib/graph/rte_graph_worker.h index 54d1390786..a0ea0df153
> > 100644
> > > > > --- a/lib/graph/rte_graph_worker.h
> > > > > +++ b/lib/graph/rte_graph_worker.h
> > > > > @@ -1,5 +1,56 @@
> > > > >  #include "rte_graph_model_rtc.h"
> > > > >
> > > > > +static enum rte_graph_worker_model worker_model =
> > > > > +RTE_GRAPH_MODEL_DEFAULT;
> > > >
> > > > This will break the multiprocess.
> > >
> > > Thanks. I will use TLS for per-thread local storage.
> >
> > If it needs to be used from secondary process, then it needs to be from
> > memzone.
> >
>
>
> This field will be set by the primary process in the initial stage, and then lcores will only read it.
> I want to use RTE_DEFINE_PER_LCORE to define the worker model here. It seems
> unnecessary to allocate it from a memzone.
>
> >
> >
> > >
> > > >
> > > > > +
> > > > > +/** Graph worker models */
> > > > > +enum rte_graph_worker_model {
> > > > > +#define WORKER_MODEL_DEFAULT "default"
> > > >
> > > > Why need strings?
> > > > Also, every symbol in a public header file should start with RTE_ to
> > > > avoid namespace conflict.
> > >
> > > It was used to config the model in app. I can put the string into example.
> >
> > OK
> >
> > >
> > > >
> > > > > +       RTE_GRAPH_MODEL_DEFAULT = 0, #define WORKER_MODEL_RTC
> > > > > +"rtc"
> > > > > +       RTE_GRAPH_MODEL_RTC,
> > > >
> > > > Why not RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT in
> > enum
> > > > itself.
> > > Yes, will do in next version.
> > >
> > > >
> > > > > +#define WORKER_MODEL_GENERIC "generic"
> > > >
> > > > Generic is a very overloaded term. Use pipeline here i.e
> > > > RTE_GRAPH_MODEL_PIPELINE
> > >
> > > Actually, it's not a purely pipeline mode. I prefer to change to hybrid.
> >
> > Hybrid is very overloaded term, and it will be confusing (considering there
> > will be new models in future).
> > Please pick a word that really express the model working.
> >
>
> In this case, the path is Node0 -> Node1 -> Node2 -> Node3,
> and Node1 and Node3 are bound to the same core.
>
> Our model offers the ability to dispatch between cores.
>
> Do you think RTE_GRAPH_MODEL_DISPATCH is a good name?

Some names I can think of:

// MCORE->MULTI CORE

RTE_GRAPH_MODEL_MCORE_PIPELINE
or
RTE_GRAPH_MODEL_MCORE_DISPATCH
or
RTE_GRAPH_MODEL_MCORE_RING
or
RTE_GRAPH_MODEL_MULTI_CORE
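
For context, an illustrative sketch of where such a name would end up
(placeholder only, the final name is still open):

enum rte_graph_worker_model {
	RTE_GRAPH_MODEL_RTC = 0,        /* run-to-completion, the default */
	RTE_GRAPH_MODEL_MCORE_DISPATCH, /* nodes bound per core, dispatch in between */
};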

>
> + - - - - - -+     +- - - - - - - - - - - - - +     + - - - - - -+
> '  Core #0   '     '  Core #1       Core #1   '     '  Core #2   '
> '            '     '                          '     '            '
> ' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
> ' | Node-0 | - - - ->| Node-1 |    | Node-3 |<- - - - | Node-2 | '
> ' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
> '            '     '     |                    '     '      ^     '
> + - - - - - -+     +- - -|- - - - - - - - - - +     + - - -|- - -+
>                          |                                 |
>                          + - - - - - - - - - - - - - - - - +
>
>
> > > >
> > > >
> > > > > +       RTE_GRAPH_MODEL_GENERIC,
> > > > > +       RTE_GRAPH_MODEL_MAX,
> > > >
> > > > No need for MAX, it will break the ABI for future. See other
> > > > subsystem such as cryptodev.
> > >
> > > Thanks, I will change it.
> > > >
> > > > > +};
> > > >
> > > > >

^ permalink raw reply	[relevance 0%]

* RE: [PATCH v1 04/13] graph: add get/set graph worker model APIs
  2023-02-26 22:23  0%       ` Jerin Jacob
@ 2023-03-02  8:38  0%         ` Yan, Zhirun
  2023-03-02 13:58  0%           ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Yan, Zhirun @ 2023-03-02  8:38 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, kirankumark, ndabilpuram, Liang, Cunming, Wang, Haiyue



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Monday, February 27, 2023 6:23 AM
> To: Yan, Zhirun <zhirun.yan@intel.com>
> Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>;
> Wang, Haiyue <haiyue.wang@intel.com>
> Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
> 
> On Fri, Feb 24, 2023 at 12:01 PM Yan, Zhirun <zhirun.yan@intel.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Monday, February 20, 2023 9:51 PM
> > > To: Yan, Zhirun <zhirun.yan@intel.com>
> > > Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> > > ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>;
> > > Wang, Haiyue <haiyue.wang@intel.com>
> > > Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model
> > > APIs
> > >
> > > On Thu, Nov 17, 2022 at 10:40 AM Zhirun Yan <zhirun.yan@intel.com>
> wrote:
> > > >
> > > > Add new get/set APIs to configure graph worker model which is used
> > > > to determine which model will be chosen.
> > > >
> > > > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > > > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > > > Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> > > > ---
> > > >  lib/graph/rte_graph_worker.h        | 51
> +++++++++++++++++++++++++++++
> > > >  lib/graph/rte_graph_worker_common.h | 13 ++++++++
> > > >  lib/graph/version.map               |  3 ++
> > > >  3 files changed, 67 insertions(+)
> > > >
> > > > diff --git a/lib/graph/rte_graph_worker.h
> > > > b/lib/graph/rte_graph_worker.h index 54d1390786..a0ea0df153
> 100644
> > > > --- a/lib/graph/rte_graph_worker.h
> > > > +++ b/lib/graph/rte_graph_worker.h
> > > > @@ -1,5 +1,56 @@
> > > >  #include "rte_graph_model_rtc.h"
> > > >
> > > > +static enum rte_graph_worker_model worker_model =
> > > > +RTE_GRAPH_MODEL_DEFAULT;
> > >
> > > This will break the multiprocess.
> >
> > Thanks. I will use TLS for per-thread local storage.
> 
> If it needs to be used from secondary process, then it needs to be from
> memzone.
> 


This field will be set by the primary process in the initial stage, and then lcores will only read it.
I want to use RTE_DEFINE_PER_LCORE to define the worker model here. It seems
unnecessary to allocate it from a memzone.
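
For illustration, a minimal sketch of the per-lcore variant I have in mind
(using the enum from this patch; the function names are placeholders, not
the actual patch):

#include <rte_per_lcore.h>

RTE_DEFINE_PER_LCORE(enum rte_graph_worker_model, graph_worker_model);

/* TLS means every thread has its own copy, so each worker thread has to set
 * the model itself (e.g. at lcore launch) before it starts walking the graph. */
static inline void
graph_worker_model_set(enum rte_graph_worker_model model)
{
	RTE_PER_LCORE(graph_worker_model) = model;
}

static inline enum rte_graph_worker_model
graph_worker_model_get(void)
{
	return RTE_PER_LCORE(graph_worker_model);
}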

> 
> 
> >
> > >
> > > > +
> > > > +/** Graph worker models */
> > > > +enum rte_graph_worker_model {
> > > > +#define WORKER_MODEL_DEFAULT "default"
> > >
> > > Why need strings?
> > > Also, every symbol in a public header file should start with RTE_ to
> > > avoid namespace conflict.
> >
> > It was used to config the model in app. I can put the string into example.
> 
> OK
> 
> >
> > >
> > > > +       RTE_GRAPH_MODEL_DEFAULT = 0, #define WORKER_MODEL_RTC
> > > > +"rtc"
> > > > +       RTE_GRAPH_MODEL_RTC,
> > >
> > > Why not RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT in
> enum
> > > itself.
> > Yes, will do in next version.
> >
> > >
> > > > +#define WORKER_MODEL_GENERIC "generic"
> > >
> > > Generic is a very overloaded term. Use pipeline here i.e
> > > RTE_GRAPH_MODEL_PIPELINE
> >
> > Actually, it's not a purely pipeline mode. I prefer to change to hybrid.
> 
> Hybrid is very overloaded term, and it will be confusing (considering there
> will be new models in future).
> Please pick a word that really express the model working.
> 

In this case, the path is Node0 -> Node1 -> Node2 -> Node3,
and Node1 and Node3 are bound to the same core.

Our model offers the ability to dispatch between cores.

Do you think RTE_GRAPH_MODEL_DISPATCH is a good name?

+ - - - - - -+     +- - - - - - - - - - - - - +     + - - - - - -+
'  Core #0   '     '  Core #1       Core #1   '     '  Core #2   '
'            '     '                          '     '            '
' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
' | Node-0 | - - - ->| Node-1 |    | Node-3 |<- - - - | Node-2 | '
' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
'            '     '     |                    '     '      ^     '
+ - - - - - -+     +- - -|- - - - - - - - - - +     + - - -|- - -+
                         |                                 |
                         + - - - - - - - - - - - - - - - - +


> > >
> > >
> > > > +       RTE_GRAPH_MODEL_GENERIC,
> > > > +       RTE_GRAPH_MODEL_MAX,
> > >
> > > No need for MAX, it will break the ABI for future. See other
> > > subsystem such as cryptodev.
> >
> > Thanks, I will change it.
> > >
> > > > +};
> > >
> > > >

^ permalink raw reply	[relevance 0%]

* RE: [RFC 0/2] Add high-performance timer facility
  2023-03-01 15:50  3%       ` Mattias Rönnblom
@ 2023-03-01 17:06  0%         ` Morten Brørup
  0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2023-03-01 17:06 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: Erik Gabriel Carrillo, David Marchand, Maria Lingemark, Stefan Sundkvist

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Wednesday, 1 March 2023 16.50
> 
> On 2023-03-01 14:31, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Wednesday, 1 March 2023 12.18
> >>
> >> On 2023-02-28 17:01, Morten Brørup wrote:
> >>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >>>> Sent: Tuesday, 28 February 2023 10.39
> >>>
> >>> I have been looking for a high performance timer library (for use in
> a fast
> >> path TCP stack), and this looks very useful, Mattias.
> >>>
> >>> My initial feedback is based on quickly skimming the patch source
> code, and
> >> reading this cover letter.
> >>>
> >>>>
> >>>> This patchset is an attempt to introduce a high-performance, highly
> >>>> scalable timer facility into DPDK.
> >>>>
> >>>> More specifically, the goals for the htimer library are:
> >>>>
> >>>> * Efficient handling of a handful up to hundreds of thousands of
> >>>>     concurrent timers.
> >>>> * Reduced overhead of adding and canceling timers.
> >>>> * Provide a service functionally equivalent to that of
> >>>>     <rte_timer.h>. API/ABI backward compatibility is secondary.
> >>>>
> >>>> In the author's opinion, there are two main shortcomings with the
> >>>> current DPDK timer library (i.e., rte_timer.[ch]).
> >>>>
> >>>> One is the synchronization overhead, where heavy-weight full-
> barrier
> >>>> type synchronization is used. rte_timer.c uses per-EAL/lcore skip
> >>>> lists, but any thread may add or cancel (or otherwise access)
> timers
> >>>> managed by another lcore (and thus resides in its timer skip list).
> >>>>
> >>>> The other is an algorithmic shortcoming, with rte_timer.c's
> reliance
> >>>> on a skip list, which, seemingly, is less efficient than certain
> >>>> alternatives.
> >>>>
> >>>> This patchset implements a hierarchical timer wheel (HWT, in
> >>>
> >>> Typo: HWT or HTW?
> >>
> >> Yes. I don't understand how I could managed to make so many such HTW
> ->
> >> HWT typos. At least I got the filenames (rte_htw.[ch]) correct.
> >>
> >>>
> >>>> rte_htw.c), as per the Varghese and Lauck paper "Hashed and
> >>>> Hierarchical Timing Wheels: Data Structures for the Efficient
> >>>> Implementation of a Timer Facility". A HWT is a data structure
> >>>> purposely design for this task, and used by many operating system
> >>>> kernel timer facilities.
> >>>>
> >>>> To further improve the solution described by Varghese and Lauck, a
> >>>> bitset is placed in front of each of the timer wheel in the HWT,
> >>>> reducing overhead of rte_htimer_mgr_manage() (i.e., progressing
> time
> >>>> and expiry processing).
> >>>>
> >>>> Cycle-efficient scanning and manipulation of these bitsets are
> crucial
> >>>> for the HWT's performance.
> >>>>
> >>>> The htimer module keeps a per-lcore (or per-registered EAL thread)
> HWT
> >>>> instance, much like rte_timer.c keeps a per-lcore skip list.
> >>>>
> >>>> To avoid expensive synchronization overhead for thread-local timer
> >>>> management, the HWTs are accessed only from the "owning" thread.
> Any
> >>>> interaction any other thread has with a particular lcore's timer
> >>>> wheel goes over a set of DPDK rings. A side-effect of this design
> is
> >>>> that all operations working toward a "remote" HWT must be
> >>>> asynchronous.
> >>>>
> >>>> The <rte_htimer.h> API is available only to EAL threads and
> registered
> >>>> non-EAL threads.
> >>>>
> >>>> The htimer API allows the application to supply the current time,
> >>>> useful in case it already has retrieved this for other purposes,
> >>>> saving the cost of a rdtsc instruction (or its equivalent).
> >>>>
> >>>> Relative htimer does not retrieve a new time, but reuse the current
> >>>> time (as known via/at-the-time of the manage-call), again to shave
> off
> >>>> some cycles of overhead.
> >>>
> >>> I have a comment to the two points above.
> >>>
> >>> I agree that the application should supply the current time.
> >>>
> >>> This should be the concept throughout the library. I don't
> understand why
> >> TSC is used in the library at all?
> >>>
> >>> Please use a unit-less tick, and let the application decide what one
> tick
> >> means.
> >>>
> >>
> >> I suspect the design of rte_htimer_mgr.h (and rte_timer.h) makes more
> >> sense if you think of the user of the API as not just a "monolithic"
> >> application, but rather a set of different modules, developed by
> >> different organizations, and reused across a set of applications. The
> >> idea behind the API design is they should all be able to share one
> timer
> >> service instance.
> >>
> >> The different parts of the application and any future DPDK platform
> >> modules that use the htimer service needs to agree what a tick means
> in
> >> terms of actual wall-time, if it's not mandated by the API.
> >
> > I see. Then those non-monolithic applications can agree that the unit
> of time is nanoseconds, or whatever makes sense for those applications.
> And then they can instantiate one shared HTW for that purpose.
> >
> 
> <rte_htimer_mgr.h> contains nothing but shared HTWs.
> 
> > There is no need to impose such an API limit on other users of the
> library.
> >
> >>
> >> There might be room for module-specific timer wheels as well, with
> >> different resolution or other characteristics. The event timer
> adapter's
> >> use of a timer wheel could be one example (although I'm not sure it
> is).
> >
> > We are not using the event device, and I have not looked into it, so I
> have no qualified comments to this.
> >
> >>
> >> If timer-wheel-as-a-private-lego-piece is also a valid use case, then
> >> one could consider make the <rte_htw.h> API public as well. That is
> what
> >> I think you as asking for here: a generic timer wheel that doesn't
> know
> >> anything about time sources, time source time -> tick conversion, or
> >> timer source time -> monotonic wall time conversion, and maybe is
> also
> >> not bound to a particular thread.
> >
> > Yes, that is what I had been searching the Internet for.
> >
> > (I'm not sure what you mean by "not bound to a particular thread".
> Your per-thread design seems good to me.)
> >
> > I don't want more stuff in the EAL. What I want is high-performance
> DPDK libraries we can use in our applications.
> >
> >>
> >> I picked TSC because it seemed like a good "universal time unit" for
> >> DPDK. rdtsc (and its equivalent) is also a very precise (especially
> on
> >> x86) and cheap-to-retrieve (especially on ARM, from what I
> understand).
> >
> > The TSC does have excellent performance, but on all other parameters
> it is a horrible time keeper: The measurement unit depends on the
> underlying hardware, the TSC drifts depending on temperature, it cannot
> be PTP synchronized, the list is endless!
> >
> >>
> >> That said, at the moment, I'm leaning toward nanoseconds (uint64_t
> >> format) should be the default for timer expiration time instead of
> TSC.
> >> TSC could still be an option for passing the current time, since TSC
> >> will be a common time source, and it shaves off one conversion.
> >
> > There are many reasons why nanoseconds is a much better choice than
> TSC.
> >
> >>
> >>> A unit-less tick will also let the application instantiate a HTW
> with higher
> >> resolution than the TSC. (E.g. think about oversampling in audio
> processing,
> >> or Brezenham's line drawing algorithm for 2D visuals - oversampling
> can sound
> >> and look better.)
> >
> > Some of the timing data in our application have a resolution orders of
> magnitude higher than one nanosecond. If we combined that with a HTW
> library with nanosecond resolution, we would need to keep these timer
> values in two locations: The original high-res timer in our data
> structure, and the shadow low-res (nanosecond) timer in the HTW.
> >
> 
> There is no way you will meet timers with anything approaching
> pico-second-level precision.

Correct. Our sub-nanosecond timers don't need to meet the exact time, but the higher resolution prevents loss of accuracy when a number has been added to it many times. Think of it like a special fixed-point number, where the least significant part is included to ensure accuracy in calculations, while the actual timer only considers the most significant part of the number.
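
A crude sketch of what I mean, just our own bookkeeping on top of whatever
the timer library offers (nothing the htimer API would need to know about):

#include <stdint.h>

/* Deadline in nanoseconds kept as Q48.16 fixed point; the fractional
 * 16 bits exist only to keep repeated additions accurate. */
struct hires_deadline {
	uint64_t ns_q16;
};

static inline void
hires_deadline_advance(struct hires_deadline *d, uint64_t period_ns_q16)
{
	d->ns_q16 += period_ns_q16;	/* full precision accumulated here */
}

static inline uint64_t
hires_deadline_ns(const struct hires_deadline *d)
{
	return d->ns_q16 >> 16;		/* only whole nanoseconds go to the timer */
}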

> You will also get into a value range issue,
> since you will wrap around a 64-bit integer in a matter of days.

Yes. We use timers with different scales for individual purposes. Our highest resolution is sub-nanosecond.

> 
> The HTW only stores the timeout in ticks, not TSC, nanoseconds or
> picoseconds.

Excellent. Then I'm happy.

> Generally, you don't want pico-second-level tick
> granularity, since it increases the overhead of advancing the wheel(s).

We currently use proprietary algorithms for our bandwidth scheduling. It seems that a HTW is not a good fit for this purpose. Perhaps you are offering a hammer, and it's not a good replacement for my screwdriver.

I suppose that nanosecond resolution suffices for a TCP stack, which is the use case I have been on the lookout for a timer library for. :-)

> The first (lowest-significance) few wheels will pretty much always be
> empty.
> 
> > We might also need to frequently update the HTW timers to prevent
> drifting away from the high-res timers. E.g. 1.2 + 1.2 is still 2 when
> rounded, but + 1.2 becomes 3 when it should have been 4 (3 * 1.2 = 3.6)
> rounded. This level of drifting would also make periodic timers in the
> HTW useless.
> >
> 
> Useless, for a certain class of applications. What application would
> that be?

Sorry about being unclear there. Yes, I only meant the specific application I was talking about, i.e. our application for high precision bandwidth management. For reference, 1 bit at 100 Gbit/s is 10 picoseconds.

> 
> > Please note: I haven't really considered merging the high-res timing
> in our application with this HTW, and I'm also not saying that PERIODIC
> timers in the HTW are required or even useful for our application. I'm
> only providing arguments for a unit-less time!
> >
> >>>
> >>> For reference (supporting my suggestion), the dynamic timestamp
> field in the
> >> rte_mbuf structure is also defined as being unit-less. (I think
> NVIDIA
> >> implements it as nanoseconds, but that's an implementation specific
> choice.)
> >>>
> >>>>
> >>>> A semantic improvement compared to the <rte_timer.h> API is that
> the
> >>>> htimer library can give a definite answer on the question if the
> timer
> >>>> expiry callback was called, after a timer has been canceled.
> >>>>
> >>>> Below is a performance data from DPDK's 'app/test' micro
> benchmarks,
> >>>> using 10k concurrent timers. The benchmarks (test_timer_perf.c and
> >>>> test_htimer_mgr_perf.c) aren't identical in their structure, but
> the
> >>>> numbers give some indication of the difference.
> >>>>
> >>>> Use case               htimer  timer
> >>>> ------------------------------------
> >>>> Add timer                 28    253
> >>>> Cancel timer              10    412
> >>>> Async add (source lcore)  64
> >>>> Async add (target lcore)  13
> >>>>
> >>>> (AMD 5900X CPU. Time in TSC.)
> >>>>
> >>>> Prototype integration of the htimer library into real, timer-heavy,
> >>>> applications indicates that htimer may result in significant
> >>>> application-level performance gains.
> >>>>
> >>>> The bitset implementation which the HWT implementation depends upon
> >>>> seemed generic-enough and potentially useful outside the world of
> >>>> HWTs, to justify being located in the EAL.
> >>>>
> >>>> This patchset is very much an RFC, and the author is yet to form an
> >>>> opinion on many important issues.
> >>>>
> >>>> * If deemed a suitable replacement, should the htimer replace the
> >>>>     current DPDK timer library in some particular (ABI-breaking)
> >>>>     release, or should it live side-by-side with the then-legacy
> >>>>     <rte_timer.h> API? A lot of things in and outside DPDK depend
> on
> >>>>     <rte_timer.h>, so coexistence may be required to facilitate a
> smooth
> >>>>     transition.
> >>>
> >>> It's my immediate impression that they are totally different in both
> design
> >> philosophy and API.
> >>>
> >>> Personal opinion: I would call it an entirely different library.
> >>>
> >>>>
> >>>> * Should the htimer and htw-related files be colocated with
> rte_timer.c
> >>>>     in the timer library?
> >>>
> >>> Personal opinion: No. This is an entirely different library, and
> should live
> >> for itself in a directory of its own.
> >>>
> >>>>
> >>>> * Would it be useful for applications using asynchronous cancel to
> >>>>     have the option of having the timer callback run not only in
> case of
> >>>>     timer expiration, but also cancellation (on the target lcore)?
> The
> >>>>     timer cb signature would need to include an additional
> parameter in
> >>>>     that case.
> >>>
> >>> If one thread cancels something in another thread, some
> synchronization
> >> between the threads is going to be required anyway. So we could
> reprase your
> >> question: Will the burden of the otherwise required synchronization
> between
> >> the two threads be significantly reduced if the library has the
> ability to run
> >> the callback on asynchronous cancel?
> >>>
> >>
> >> Yes.
> >>
> >> Intuitively, it seems convenient that if you hand off a timer to a
> >> different lcore, the timer callback will be called exactly once,
> >> regardless if the timer was canceled or expired.
> >>
> >> But, as you indicate, you may still need synchronization to solve the
> >> resource reclamation issue.
> >>
> >>> Is such a feature mostly "Must have" or "Nice to have"?
> >>>
> >>> More thoughts in this area...
> >>>
> >>> If adding and additional callback parameter, it could be an enum, so
> the
> >> callback could be expanded to support "timeout (a.k.a. timer fired)",
> "cancel"
> >> and more events we have not yet come up with, e.g. "early kick".
> >>>
> >>
> >> Yes, or an int.
> >>
> >>> Here's an idea off the top of my head: An additional callback
> parameter has
> >> a (small) performance cost incurred with every timer fired (which is
> a very
> >> large multiplier). It might not be required. As an alternative to an
> "what
> >> happened" parameter to the callback, the callback could investigate
> the state
> >> of the object for which the timer fired, and draw its own conclusion
> on how to
> >> proceed. Obviously, this also has a performance cost, but perhaps the
> callback
> >> works on the object's state anyway, making this cost insignificant.
> >>>
> >>
> >> It's not obvious to me that you, in the timer callback, can determine
> >> what happened, if the same callback is called both in the cancel and
> the
> >> expired case.
> >>
> >> The cost of an extra integer passed in a register (or checking a
> flag,
> >> if the timer callback should be called at all at cancellation) that
> is
> >> the concern for me; it's extra bit of API complexity.
> >
> > Then introduce the library without this feature. More features can be
> added later.
> >
> > The library will be introduced as "experimental", so we are free to
> improve it and modify the ABI along the way.
> >
> >>
> >>> Here's another alternative to adding a "what happened" parameter to
> the
> >> callback:
> >>>
> >>> The rte_htimer could have one more callback pointer, which (if set)
> will be
> >> called on cancellation of the timer.
> >>>
> >>
> >> This will grow the timer struct with 16 bytes.
> >
> > If the rte_htimer struct stays within one cache line, it should be
> acceptable.
> >
> 
> Timer structs are often embedded in other structures, and need not
> themselves be cache line aligned (although the "parent" struct may need
> to be, e.g. if it's dynamically allocated).
> 
> So smaller is better. Just consider if you want your attosecond-level
> time stamp in a struct:
> 
> struct my_timer {
>      uint64_t high_precision_time_high_bits;
>      uint64_t high_precision_time_low_bits;
>      struct rte_htimer timer;
> };
> 
> ...and you allocate those structs from a mempool. If rte_htimer is small
> enough, you will fit on one cache line.

Ahh... I somehow assumed they only existed as stand-alone elements inside the HTW.

Then I obviously agree that shorter is better.

> 
> > On the other hand, this approach is less generic than passing an
> additional parameter. (E.g. add yet another callback pointer for "early
> kick"?)
> >
> > BTW, async cancel is a form of inter-thread communication. Does this
> library really need to provide any inter-thread communication
> mechanisms? Doesn't an inter-thread communication mechanism belong in a
> separate library?
> >
> 
> Yes, <rte_htimer_mgr.h> needs this because:
> 1) Being able to schedule timers on a remote lcore is a useful feature
> (especially since we don't have much else in terms of deferred work
> mechanisms in DPDK).

Although remote procedures are a useful feature, providing them doesn't necessarily belong in a library that merely uses them.

> 2) htimer aspires to be a plug-in replacement for <rte_timer.h> (albeit
> an ABI-breaking one).

This is a good argument.

But I would much rather have a highly tuned stand-alone HTW library than a plug-in replacement of the old <rte_timer.h>.

> 
> The pure HTW is in rte_htw.[ch].
> 
> Plus, with the current design, async operations basically come for free
> (if you don't use them), from a performance perspective. The extra
> overhead boils down to occasionally polling an empty ring, which is an
> inexpensive operation.

OK. Then no worries.

> 
> >>
> >>>>
> >>>> * Should the rte_htimer be a nested struct, so the htw parts be
> separated
> >>>>     from the htimer parts?
> >>>>
> >>>> * <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
> >>>>     <rte_htw.h> may avoid a depedency to <rte_htimer_mgr.h>. Should
> it
> >>>>     be so?
> >>>>
> >>>> * rte_htimer struct is only supposed to be used by the application
> to
> >>>>     give an indication of how much memory it needs to allocate, and
> is
> >>>>     its member are not supposed to be directly accessed (w/ the
> possible
> >>>>     exception of the owner_lcore_id field). Should there be a dummy
> >>>>     struct, or a #define RTE_HTIMER_MEMSIZE or a
> rte_htimer_get_memsize()
> >>>>     function instead, serving the same purpose? Better
> encapsulation,
> >>>>     but more inconvenient for applications. Run-time dynamic sizing
> >>>>     would force application-level dynamic allocations.
> >>>>
> >>>> * Asynchronous cancellation is a little tricky to use for the
> >>>>     application (primarily due to timer memory reclamation/race
> >>>>     issues). Should this functionality be removed?
> >>>>
> >>>> * Should rte_htimer_mgr_init() also retrieve the current time? If
> so,
> >>>>     there should to be a variant which allows the user to specify
> the
> >>>>     time (to match rte_htimer_mgr_manage_time()). One pitfall with
> the
> >>>>     current proposed API is an application calling
> rte_htimer_mgr_init()
> >>>>     and then immediately adding a timer with a relative timeout, in
> >>>>     which case the current absolute time used is 0, which might be
> a
> >>>>     surprise.
> >>>>
> >>>> * Should libdivide (optionally) be used to avoid the div in the TSC
> ->
> >>>>     tick conversion? (Doesn't improve performance on Zen 3, but may
> >>>>     do on other CPUs.) Consider <rte_reciprocal.h> as well.
> >>>>
> >>>> * Should the TSC-per-tick be rounded up to a power of 2, so shifts
> can be
> >>>>     used for conversion? Very minor performance gains to be found
> there,
> >>>>     at least on Zen 3 cores.
> >>>>
> >>>> * Should it be possible to supply the time in rte_htimer_mgr_add()
> >>>>     and/or rte_htimer_mgr_manage_time() functions as ticks, rather
> than
> >>>>     as TSC? Should it be possible to also use nanoseconds?
> >>>>     rte_htimer_mgr_manage_time() would need a flags parameter in
> that
> >>>>     case.
> >>>
> >>> Do not use TSC anywhere in this library. Let the application decide
> the
> >> meaning of a tick.
> >>>
> >>>>
> >>>> * Would the event timer adapter be best off using <rte_htw.h>
> >>>>     directly, or <rte_htimer.h>? In the latter case, there needs to
> be a
> >>>>     way to instantiate more HWTs (similar to the "alt" functions of
> >>>>     <rte_timer.h>)?
> >>>>
> >>>> * Should the PERIODICAL flag (and the complexity it brings) be
> >>>>     removed? And leave the application with only single-shot
> timers, and
> >>>>     the option to re-add them in the timer callback.
> >>>
> >>> First thought: Yes, keep it lean and remove the periodical stuff.
> >>>
> >>> Second thought: This needs a more detailed analysis.
> >>>
> >>>   From one angle:
> >>>
> >>> How many PERIODICAL versus ONESHOT timers do we expect?
> >>>
> >>
> >> I suspect you should be prepared for the ratio being anything.
> >
> > In theory, anything is possible. But I'm asking that we consider
> realistic use cases.
> >
> >>
> >>> Intuitively, I would use this library for ONESHOT timers, and
> perhaps
> >> implement my periodical timers by other means.
> >>>
> >>> If the PERIODICAL:ONESHOT ratio is low, we can probably live with
> the extra
> >> cost of cancel+add for a few periodical timers.
> >>>
> >>>   From another angle:
> >>>
> >>> What is the performance gain with the PERIODICAL flag?
> >>>
> >>
> >> None, pretty much. It's just there for convenience.
> >
> > OK, then I suggest that you remove it, unless you get objections.
> >
> > The library can be expanded with useful features at any time later.
> Useless features are (nearly) impossible to remove, once they are in
> there - they are just "technical debt" with associated maintenance
> costs, added complexity weaving into other features, etc..
> >
> >>
> >>> Without a periodical timer, cancel+add costs 10+28 cycles. How many
> cycles
> >> would a "move" function, performing both cancel and add, use?
> >>>
> >>> And then compare that to the cost (in cycles) of repeating a timer
> with
> >> PERIODICAL?
> >>>
> >>> Furthermore, not having the PERIODICAL flag probably improves the
> >> performance for non-periodical timers. How many cycles could we gain
> here?
> >>>
> >>>
> >>> Another, vaguely related, idea:
> >>>
> >>> The callback pointer might not need to be stored per rte_htimer, but
> could
> >> instead be common for the rte_htw.
> >>>
> >>
> >> Do you mean rte_htw, or rte_htimer_mgr?
> >>
> >> If you make one common callback, all the different parts of the
> >> application needs to be coordinated (in a big switch-statement, or
> >> something of that sort), or have some convention for using an
> >> application-specific wrapper structure (accessed via container_of()).
> >>
> >> This is a problem if the timer service API consumer is a set of
> largely
> >> uncoordinated software modules.
> >>
> >> Btw, the eventdev API has the same issue, and the proposed event
> >> dispatcher is one way to help facilitate application-internal
> decoupling.
> >>
> >> For a module-private rte_htw instance your suggestion may work, but
> not
> >> for <rte_htimer_mgr.h>.
> >
> > I was speculating that a common callback pointer might provide a
> performance benefit for single-purpose HTW instances. (The same concept
> applies if there are multiple callbacks, e.g. a "Timer Fired", a "Timer
> Cancelled", and an "Early Kick" callback pointer - i.e. having the
> callback pointers per HTW instance, instead of per timer.)
> >
> >>
> >>> When a timer fires, the callback probably needs to check/update the
> state of
> >> the object for which the timer fired anyway, so why not just let the
> >> application use that state to determine the appropriate action. This
> might
> >> provide some performance benefit.
> >>>
> >>> It might complicate using one HTW for multiple different purposes,
> though.
> >> Probably a useless idea, but I wanted to share the idea anyway. It
> might
> >> trigger other, better ideas in the community.
> >>>
> >>>>
> >>>> * Should the async result codes and the sync cancel error codes be
> merged
> >>>>     into one set of result codes?
> >>>>
> >>>> * Should the rte_htimer_mgr_async_add() have a flag which allow
> >>>>     buffering add request messages until rte_htimer_mgr_process()
> is
> >>>>     called? Or any manage function. Would reduce ring signaling
> overhead
> >>>>     (i.e., burst enqueue operations instead of single-element
> >>>>     enqueue). Could also be a rte_htimer_mgr_async_add_burst()
> function,
> >>>>     solving the same "problem" a different way. (The signature of
> such
> >>>>     a function would not be pretty.)
> >>>>
> >>>> * Does the functionality provided by the rte_htimer_mgr_process()
> >>>>     function match its use cases? Should there be a clearer
> >>>>     separation between expiry processing and asynchronous operation
> >>>>     processing?
> >>>>
> >>>> * Should the patchset be split into more commits? If so, how?
> >>>>
> >>>> Thanks to Erik Carrillo for his assistance.
> >>>>
> >>>> Mattias Rönnblom (2):
> >>>>     eal: add bitset type
> >>>>     eal: add high-performance timer facility
> >


^ permalink raw reply	[relevance 0%]

* Re: [RFC 0/2] Add high-performance timer facility
  2023-03-01 13:31  3%     ` Morten Brørup
@ 2023-03-01 15:50  3%       ` Mattias Rönnblom
  2023-03-01 17:06  0%         ` Morten Brørup
  0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-03-01 15:50 UTC (permalink / raw)
  To: Morten Brørup, dev
  Cc: Erik Gabriel Carrillo, David Marchand, Maria Lingemark, Stefan Sundkvist

On 2023-03-01 14:31, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Wednesday, 1 March 2023 12.18
>>
>> On 2023-02-28 17:01, Morten Brørup wrote:
>>>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>>>> Sent: Tuesday, 28 February 2023 10.39
>>>
>>> I have been looking for a high performance timer library (for use in a fast
>> path TCP stack), and this looks very useful, Mattias.
>>>
>>> My initial feedback is based on quickly skimming the patch source code, and
>> reading this cover letter.
>>>
>>>>
>>>> This patchset is an attempt to introduce a high-performance, highly
>>>> scalable timer facility into DPDK.
>>>>
>>>> More specifically, the goals for the htimer library are:
>>>>
>>>> * Efficient handling of a handful up to hundreds of thousands of
>>>>     concurrent timers.
>>>> * Reduced overhead of adding and canceling timers.
>>>> * Provide a service functionally equivalent to that of
>>>>     <rte_timer.h>. API/ABI backward compatibility is secondary.
>>>>
>>>> In the author's opinion, there are two main shortcomings with the
>>>> current DPDK timer library (i.e., rte_timer.[ch]).
>>>>
>>>> One is the synchronization overhead, where heavy-weight full-barrier
>>>> type synchronization is used. rte_timer.c uses per-EAL/lcore skip
>>>> lists, but any thread may add or cancel (or otherwise access) timers
>>>> managed by another lcore (and thus resides in its timer skip list).
>>>>
>>>> The other is an algorithmic shortcoming, with rte_timer.c's reliance
>>>> on a skip list, which, seemingly, is less efficient than certain
>>>> alternatives.
>>>>
>>>> This patchset implements a hierarchical timer wheel (HWT, in
>>>
>>> Typo: HWT or HTW?
>>
>> Yes. I don't understand how I could managed to make so many such HTW ->
>> HWT typos. At least I got the filenames (rte_htw.[ch]) correct.
>>
>>>
>>>> rte_htw.c), as per the Varghese and Lauck paper "Hashed and
>>>> Hierarchical Timing Wheels: Data Structures for the Efficient
>>>> Implementation of a Timer Facility". A HWT is a data structure
>>>> purposely design for this task, and used by many operating system
>>>> kernel timer facilities.
>>>>
>>>> To further improve the solution described by Varghese and Lauck, a
>>>> bitset is placed in front of each of the timer wheel in the HWT,
>>>> reducing overhead of rte_htimer_mgr_manage() (i.e., progressing time
>>>> and expiry processing).
>>>>
>>>> Cycle-efficient scanning and manipulation of these bitsets are crucial
>>>> for the HWT's performance.
>>>>
>>>> The htimer module keeps a per-lcore (or per-registered EAL thread) HWT
>>>> instance, much like rte_timer.c keeps a per-lcore skip list.
>>>>
>>>> To avoid expensive synchronization overhead for thread-local timer
>>>> management, the HWTs are accessed only from the "owning" thread.  Any
>>>> interaction any other thread has with a particular lcore's timer
>>>> wheel goes over a set of DPDK rings. A side-effect of this design is
>>>> that all operations working toward a "remote" HWT must be
>>>> asynchronous.
>>>>
>>>> The <rte_htimer.h> API is available only to EAL threads and registered
>>>> non-EAL threads.
>>>>
>>>> The htimer API allows the application to supply the current time,
>>>> useful in case it already has retrieved this for other purposes,
>>>> saving the cost of a rdtsc instruction (or its equivalent).
>>>>
>>>> Relative htimer does not retrieve a new time, but reuse the current
>>>> time (as known via/at-the-time of the manage-call), again to shave off
>>>> some cycles of overhead.
>>>
>>> I have a comment to the two points above.
>>>
>>> I agree that the application should supply the current time.
>>>
>>> This should be the concept throughout the library. I don't understand why
>> TSC is used in the library at all?
>>>
>>> Please use a unit-less tick, and let the application decide what one tick
>> means.
>>>
>>
>> I suspect the design of rte_htimer_mgr.h (and rte_timer.h) makes more
>> sense if you think of the user of the API as not just a "monolithic"
>> application, but rather a set of different modules, developed by
>> different organizations, and reused across a set of applications. The
>> idea behind the API design is they should all be able to share one timer
>> service instance.
>>
>> The different parts of the application and any future DPDK platform
>> modules that use the htimer service needs to agree what a tick means in
>> terms of actual wall-time, if it's not mandated by the API.
> 
> I see. Then those non-monolithic applications can agree that the unit of time is nanoseconds, or whatever makes sense for those applications. And then they can instantiate one shared HTW for that purpose.
> 

<rte_htimer_mgr.h> contains nothing but shared HTWs.

> There is no need to impose such an API limit on other users of the library.
> 
>>
>> There might be room for module-specific timer wheels as well, with
>> different resolution or other characteristics. The event timer adapter's
>> use of a timer wheel could be one example (although I'm not sure it is).
> 
> We are not using the event device, and I have not looked into it, so I have no qualified comments to this.
> 
>>
>> If timer-wheel-as-a-private-lego-piece is also a valid use case, then
>> one could consider make the <rte_htw.h> API public as well. That is what
>> I think you as asking for here: a generic timer wheel that doesn't know
>> anything about time sources, time source time -> tick conversion, or
>> timer source time -> monotonic wall time conversion, and maybe is also
>> not bound to a particular thread.
> 
> Yes, that is what I had been searching the Internet for.
> 
> (I'm not sure what you mean by "not bound to a particular thread". Your per-thread design seems good to me.)
> 
> I don't want more stuff in the EAL. What I want is high-performance DPDK libraries we can use in our applications.
> 
>>
>> I picked TSC because it seemed like a good "universal time unit" for
>> DPDK. rdtsc (and its equivalent) is also a very precise (especially on
>> x86) and cheap-to-retrieve (especially on ARM, from what I understand).
> 
> The TSC does have excellent performance, but on all other parameters it is a horrible time keeper: The measurement unit depends on the underlying hardware, the TSC drifts depending on temperature, it cannot be PTP synchronized, the list is endless!
> 
>>
>> That said, at the moment, I'm leaning toward nanoseconds (uint64_t
>> format) should be the default for timer expiration time instead of TSC.
>> TSC could still be an option for passing the current time, since TSC
>> will be a common time source, and it shaves off one conversion.
> 
> There are many reasons why nanoseconds is a much better choice than TSC.
> 
>>
>>> A unit-less tick will also let the application instantiate a HTW with higher
>> resolution than the TSC. (E.g. think about oversampling in audio processing,
>> or Brezenham's line drawing algorithm for 2D visuals - oversampling can sound
>> and look better.)
> 
> Some of the timing data in our application have a resolution orders of magnitude higher than one nanosecond. If we combined that with a HTW library with nanosecond resolution, we would need to keep these timer values in two locations: The original high-res timer in our data structure, and the shadow low-res (nanosecond) timer in the HTW.
> 

There is no way you will meet timers with anything approaching 
pico-second-level precision. You will also get into a value range issue, 
since you will wrap around a 64-bit integer in a matter of days.
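(Back-of-envelope, for reference: 2^64 ticks last roughly 584 years at 1 ns 
per tick, roughly 213 days at 1 ps per tick, and only about 5 hours at 1 fs 
per tick.)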

The HTW only stores the timeout in ticks, not TSC, nanoseconds or 
picoseconds. Generally, you don't want pico-second-level tick 
granularity, since it increases the overhead of advancing the wheel(s). 
The first (lowest-significance) few wheels will pretty much always be empty.

> We might also need to frequently update the HTW timers to prevent drifting away from the high-res timers. E.g. 1.2 + 1.2 is still 2 when rounded, but + 1.2 becomes 3 when it should have been 4 (3 * 1.2 = 3.6) rounded. This level of drifting would also make periodic timers in the HTW useless.
> 

Useless, for a certain class of applications. What application would 
that be?

> Please note: I haven't really considered merging the high-res timing in our application with this HTW, and I'm also not saying that PERIODIC timers in the HTW are required or even useful for our application. I'm only providing arguments for a unit-less time!
> 
>>>
>>> For reference (supporting my suggestion), the dynamic timestamp field in the
>> rte_mbuf structure is also defined as being unit-less. (I think NVIDIA
>> implements it as nanoseconds, but that's an implementation specific choice.)
>>>
>>>>
>>>> A semantic improvement compared to the <rte_timer.h> API is that the
>>>> htimer library can give a definite answer on the question if the timer
>>>> expiry callback was called, after a timer has been canceled.
>>>>
>>>> Below is a performance data from DPDK's 'app/test' micro benchmarks,
>>>> using 10k concurrent timers. The benchmarks (test_timer_perf.c and
>>>> test_htimer_mgr_perf.c) aren't identical in their structure, but the
>>>> numbers give some indication of the difference.
>>>>
>>>> Use case               htimer  timer
>>>> ------------------------------------
>>>> Add timer                 28    253
>>>> Cancel timer              10    412
>>>> Async add (source lcore)  64
>>>> Async add (target lcore)  13
>>>>
>>>> (AMD 5900X CPU. Time in TSC.)
>>>>
>>>> Prototype integration of the htimer library into real, timer-heavy,
>>>> applications indicates that htimer may result in significant
>>>> application-level performance gains.
>>>>
>>>> The bitset implementation which the HWT implementation depends upon
>>>> seemed generic-enough and potentially useful outside the world of
>>>> HWTs, to justify being located in the EAL.
>>>>
>>>> This patchset is very much an RFC, and the author is yet to form an
>>>> opinion on many important issues.
>>>>
>>>> * If deemed a suitable replacement, should the htimer replace the
>>>>     current DPDK timer library in some particular (ABI-breaking)
>>>>     release, or should it live side-by-side with the then-legacy
>>>>     <rte_timer.h> API? A lot of things in and outside DPDK depend on
>>>>     <rte_timer.h>, so coexistence may be required to facilitate a smooth
>>>>     transition.
>>>
>>> It's my immediate impression that they are totally different in both design
>> philosophy and API.
>>>
>>> Personal opinion: I would call it an entirely different library.
>>>
>>>>
>>>> * Should the htimer and htw-related files be colocated with rte_timer.c
>>>>     in the timer library?
>>>
>>> Personal opinion: No. This is an entirely different library, and should live
>> for itself in a directory of its own.
>>>
>>>>
>>>> * Would it be useful for applications using asynchronous cancel to
>>>>     have the option of having the timer callback run not only in case of
>>>>     timer expiration, but also cancellation (on the target lcore)? The
>>>>     timer cb signature would need to include an additional parameter in
>>>>     that case.
>>>
>>> If one thread cancels something in another thread, some synchronization
>> between the threads is going to be required anyway. So we could reprase your
>> question: Will the burden of the otherwise required synchronization between
>> the two threads be significantly reduced if the library has the ability to run
>> the callback on asynchronous cancel?
>>>
>>
>> Yes.
>>
>> Intuitively, it seems convenient that if you hand off a timer to a
>> different lcore, the timer callback will be called exactly once,
>> regardless if the timer was canceled or expired.
>>
>> But, as you indicate, you may still need synchronization to solve the
>> resource reclamation issue.
>>
>>> Is such a feature mostly "Must have" or "Nice to have"?
>>>
>>> More thoughts in this area...
>>>
>>> If adding and additional callback parameter, it could be an enum, so the
>> callback could be expanded to support "timeout (a.k.a. timer fired)", "cancel"
>> and more events we have not yet come up with, e.g. "early kick".
>>>
>>
>> Yes, or an int.
>>
>>> Here's an idea off the top of my head: An additional callback parameter has
>> a (small) performance cost incurred with every timer fired (which is a very
>> large multiplier). It might not be required. As an alternative to an "what
>> happened" parameter to the callback, the callback could investigate the state
>> of the object for which the timer fired, and draw its own conclusion on how to
>> proceed. Obviously, this also has a performance cost, but perhaps the callback
>> works on the object's state anyway, making this cost insignificant.
>>>
>>
>> It's not obvious to me that you, in the timer callback, can determine
>> what happened, if the same callback is called both in the cancel and the
>> expired case.
>>
>> The cost of an extra integer passed in a register (or checking a flag,
>> if the timer callback should be called at all at cancellation) that is
>> the concern for me; it's extra bit of API complexity.
> 
> Then introduce the library without this feature. More features can be added later.
> 
> The library will be introduced as "experimental", so we are free to improve it and modify the ABI along the way.
> 
>>
>>> Here's another alternative to adding a "what happened" parameter to the
>> callback:
>>>
>>> The rte_htimer could have one more callback pointer, which (if set) will be
>> called on cancellation of the timer.
>>>
>>
>> This will grow the timer struct with 16 bytes.
> 
> If the rte_htimer struct stays within one cache line, it should be acceptable.
> 

Timer structs are often embedded in other structures, and need not 
themselves be cache line aligned (although the "parent" struct may need 
to be, e.g. if it's dynamically allocated).

So smaller is better. Just consider if you want your attosecond-level 
time stamp in a struct:

struct my_timer {
     uint64_t high_precision_time_high_bits;
     uint64_t high_precision_time_low_bits;
     struct rte_htimer timer;
};

...and you allocate those structs from a mempool. If rte_htimer is small 
enough, you will fit on one cache line.
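To keep that property from silently regressing, a build-time check is cheap. 
A minimal sketch, assuming C11 static_assert and the EAL's RTE_CACHE_LINE_SIZE 
constant; "struct my_timer" is the hypothetical struct above:

#include <assert.h>

/* Fail the build if the embedding struct outgrows a single cache line. */
static_assert(sizeof(struct my_timer) <= RTE_CACHE_LINE_SIZE,
              "struct my_timer no longer fits in one cache line");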

> On the other hand, this approach is less generic than passing an additional parameter. (E.g. add yet another callback pointer for "early kick"?)
> 
> BTW, async cancel is a form of inter-thread communication. Does this library really need to provide any inter-thread communication mechanisms? Doesn't an inter-thread communication mechanism belong in a separate library?
> 

Yes, <rte_htimer_mgr.h> needs this because:
1) Being able to schedule timers on a remote lcore is a useful feature 
(especially since we don't have much else in terms of deferred work 
mechanisms in DPDK).
2) htimer aspires to be a plug-in replacement for <rte_timer.h> (albeit 
an ABI-breaking one).

The pure HTW is in rte_htw.[ch].

Plus, with the current design, async operations basically come for free 
(if you don't use them), from a performance perspective. The extra 
overhead boils down to occasionally polling an empty ring, which is an 
inexpensive operation.
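To give an idea of what that polling amounts to, here is a rough sketch; the 
message struct and ring are made-up names, not the patchset's actual types:

#include <rte_common.h>
#include <rte_ring.h>

struct async_op_msg;	/* hypothetical request descriptor */

static void
poll_async_ops(struct rte_ring *async_ring)
{
	struct async_op_msg *msgs[16];
	unsigned int i, n;

	/* In the common case the ring is empty and this returns 0 after
	 * little more than a couple of loads. */
	n = rte_ring_dequeue_burst(async_ring, (void **)msgs,
				   RTE_DIM(msgs), NULL);

	for (i = 0; i < n; i++) {
		/* ...perform the add or cancel described by msgs[i]... */
	}
}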

>>
>>>>
>>>> * Should the rte_htimer be a nested struct, so the htw parts be separated
>>>>     from the htimer parts?
>>>>
>>>> * <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
>>>>     <rte_htw.h> may avoid a depedency to <rte_htimer_mgr.h>. Should it
>>>>     be so?
>>>>
>>>> * rte_htimer struct is only supposed to be used by the application to
>>>>     give an indication of how much memory it needs to allocate, and is
>>>>     its member are not supposed to be directly accessed (w/ the possible
>>>>     exception of the owner_lcore_id field). Should there be a dummy
>>>>     struct, or a #define RTE_HTIMER_MEMSIZE or a rte_htimer_get_memsize()
>>>>     function instead, serving the same purpose? Better encapsulation,
>>>>     but more inconvenient for applications. Run-time dynamic sizing
>>>>     would force application-level dynamic allocations.
>>>>
>>>> * Asynchronous cancellation is a little tricky to use for the
>>>>     application (primarily due to timer memory reclamation/race
>>>>     issues). Should this functionality be removed?
>>>>
>>>> * Should rte_htimer_mgr_init() also retrieve the current time? If so,
>>>>     there should to be a variant which allows the user to specify the
>>>>     time (to match rte_htimer_mgr_manage_time()). One pitfall with the
>>>>     current proposed API is an application calling rte_htimer_mgr_init()
>>>>     and then immediately adding a timer with a relative timeout, in
>>>>     which case the current absolute time used is 0, which might be a
>>>>     surprise.
>>>>
>>>> * Should libdivide (optionally) be used to avoid the div in the TSC ->
>>>>     tick conversion? (Doesn't improve performance on Zen 3, but may
>>>>     do on other CPUs.) Consider <rte_reciprocal.h> as well.
>>>>
>>>> * Should the TSC-per-tick be rounded up to a power of 2, so shifts can be
>>>>     used for conversion? Very minor performance gains to be found there,
>>>>     at least on Zen 3 cores.
>>>>
>>>> * Should it be possible to supply the time in rte_htimer_mgr_add()
>>>>     and/or rte_htimer_mgr_manage_time() functions as ticks, rather than
>>>>     as TSC? Should it be possible to also use nanoseconds?
>>>>     rte_htimer_mgr_manage_time() would need a flags parameter in that
>>>>     case.
>>>
>>> Do not use TSC anywhere in this library. Let the application decide the
>> meaning of a tick.
>>>
>>>>
>>>> * Would the event timer adapter be best off using <rte_htw.h>
>>>>     directly, or <rte_htimer.h>? In the latter case, there needs to be a
>>>>     way to instantiate more HWTs (similar to the "alt" functions of
>>>>     <rte_timer.h>)?
>>>>
>>>> * Should the PERIODICAL flag (and the complexity it brings) be
>>>>     removed? And leave the application with only single-shot timers, and
>>>>     the option to re-add them in the timer callback.
>>>
>>> First thought: Yes, keep it lean and remove the periodical stuff.
>>>
>>> Second thought: This needs a more detailed analysis.
>>>
>>>   From one angle:
>>>
>>> How many PERIODICAL versus ONESHOT timers do we expect?
>>>
>>
>> I suspect you should be prepared for the ratio being anything.
> 
> In theory, anything is possible. But I'm asking that we consider realistic use cases.
> 
>>
>>> Intuitively, I would use this library for ONESHOT timers, and perhaps
>> implement my periodical timers by other means.
>>>
>>> If the PERIODICAL:ONESHOT ratio is low, we can probably live with the extra
>> cost of cancel+add for a few periodical timers.
>>>
>>>   From another angle:
>>>
>>> What is the performance gain with the PERIODICAL flag?
>>>
>>
>> None, pretty much. It's just there for convenience.
> 
> OK, then I suggest that you remove it, unless you get objections.
> 
> The library can be expanded with useful features at any time later. Useless features are (nearly) impossible to remove, once they are in there - they are just "technical debt" with associated maintenance costs, added complexity weaving into other features, etc..
> 
>>
>>> Without a periodical timer, cancel+add costs 10+28 cycles. How many cycles
>> would a "move" function, performing both cancel and add, use?
>>>
>>> And then compare that to the cost (in cycles) of repeating a timer with
>> PERIODICAL?
>>>
>>> Furthermore, not having the PERIODICAL flag probably improves the
>> performance for non-periodical timers. How many cycles could we gain here?
>>>
>>>
>>> Another, vaguely related, idea:
>>>
>>> The callback pointer might not need to be stored per rte_htimer, but could
>> instead be common for the rte_htw.
>>>
>>
>> Do you mean rte_htw, or rte_htimer_mgr?
>>
>> If you make one common callback, all the different parts of the
>> application needs to be coordinated (in a big switch-statement, or
>> something of that sort), or have some convention for using an
>> application-specific wrapper structure (accessed via container_of()).
>>
>> This is a problem if the timer service API consumer is a set of largely
>> uncoordinated software modules.
>>
>> Btw, the eventdev API has the same issue, and the proposed event
>> dispatcher is one way to help facilitate application-internal decoupling.
>>
>> For a module-private rte_htw instance your suggestion may work, but not
>> for <rte_htimer_mgr.h>.
> 
> I was speculating that a common callback pointer might provide a performance benefit for single-purpose HTW instances. (The same concept applies if there are multiple callbacks, e.g. a "Timer Fired", a "Timer Cancelled", and an "Early Kick" callback pointer - i.e. having the callback pointers per HTW instance, instead of per timer.)
> 
>>
>>> When a timer fires, the callback probably needs to check/update the state of
>> the object for which the timer fired anyway, so why not just let the
>> application use that state to determine the appropriate action. This might
>> provide some performance benefit.
>>>
>>> It might complicate using one HTW for multiple different purposes, though.
>> Probably a useless idea, but I wanted to share the idea anyway. It might
>> trigger other, better ideas in the community.
>>>
>>>>
>>>> * Should the async result codes and the sync cancel error codes be merged
>>>>     into one set of result codes?
>>>>
>>>> * Should the rte_htimer_mgr_async_add() have a flag which allow
>>>>     buffering add request messages until rte_htimer_mgr_process() is
>>>>     called? Or any manage function. Would reduce ring signaling overhead
>>>>     (i.e., burst enqueue operations instead of single-element
>>>>     enqueue). Could also be a rte_htimer_mgr_async_add_burst() function,
>>>>     solving the same "problem" a different way. (The signature of such
>>>>     a function would not be pretty.)
>>>>
>>>> * Does the functionality provided by the rte_htimer_mgr_process()
>>>>     function match its the use cases? Should there me a more clear
>>>>     separation between expiry processing and asynchronous operation
>>>>     processing?
>>>>
>>>> * Should the patchset be split into more commits? If so, how?
>>>>
>>>> Thanks to Erik Carrillo for his assistance.
>>>>
>>>> Mattias Rönnblom (2):
>>>>     eal: add bitset type
>>>>     eal: add high-performance timer facility
> 


^ permalink raw reply	[relevance 3%]

* RE: [RFC 0/2] Add high-performance timer facility
  2023-03-01 11:18  0%   ` Mattias Rönnblom
@ 2023-03-01 13:31  3%     ` Morten Brørup
  2023-03-01 15:50  3%       ` Mattias Rönnblom
  0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-03-01 13:31 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: Erik Gabriel Carrillo, David Marchand, Maria Lingemark, Stefan Sundkvist

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Wednesday, 1 March 2023 12.18
> 
> On 2023-02-28 17:01, Morten Brørup wrote:
> >> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> >> Sent: Tuesday, 28 February 2023 10.39
> >
> > I have been looking for a high performance timer library (for use in a fast
> path TCP stack), and this looks very useful, Mattias.
> >
> > My initial feedback is based on quickly skimming the patch source code, and
> reading this cover letter.
> >
> >>
> >> This patchset is an attempt to introduce a high-performance, highly
> >> scalable timer facility into DPDK.
> >>
> >> More specifically, the goals for the htimer library are:
> >>
> >> * Efficient handling of a handful up to hundreds of thousands of
> >>    concurrent timers.
> >> * Reduced overhead of adding and canceling timers.
> >> * Provide a service functionally equivalent to that of
> >>    <rte_timer.h>. API/ABI backward compatibility is secondary.
> >>
> >> In the author's opinion, there are two main shortcomings with the
> >> current DPDK timer library (i.e., rte_timer.[ch]).
> >>
> >> One is the synchronization overhead, where heavy-weight full-barrier
> >> type synchronization is used. rte_timer.c uses per-EAL/lcore skip
> >> lists, but any thread may add or cancel (or otherwise access) timers
> >> managed by another lcore (and thus resides in its timer skip list).
> >>
> >> The other is an algorithmic shortcoming, with rte_timer.c's reliance
> >> on a skip list, which, seemingly, is less efficient than certain
> >> alternatives.
> >>
> >> This patchset implements a hierarchical timer wheel (HWT, in
> >
> > Typo: HWT or HTW?
> 
> Yes. I don't understand how I could have managed to make so many such HTW ->
> HWT typos. At least I got the filenames (rte_htw.[ch]) correct.
> 
> >
> >> rte_htw.c), as per the Varghese and Lauck paper "Hashed and
> >> Hierarchical Timing Wheels: Data Structures for the Efficient
> >> Implementation of a Timer Facility". A HWT is a data structure
> >> purposely designed for this task, and used by many operating system
> >> kernel timer facilities.
> >>
> >> To further improve the solution described by Varghese and Lauck, a
> >> bitset is placed in front of each of the timer wheels in the HWT,
> >> reducing overhead of rte_htimer_mgr_manage() (i.e., progressing time
> >> and expiry processing).
> >>
> >> Cycle-efficient scanning and manipulation of these bitsets are crucial
> >> for the HWT's performance.
> >>
> >> The htimer module keeps a per-lcore (or per-registered EAL thread) HWT
> >> instance, much like rte_timer.c keeps a per-lcore skip list.
> >>
> >> To avoid expensive synchronization overhead for thread-local timer
> >> management, the HWTs are accessed only from the "owning" thread.  Any
> >> interaction any other thread has with a particular lcore's timer
> >> wheel goes over a set of DPDK rings. A side-effect of this design is
> >> that all operations working toward a "remote" HWT must be
> >> asynchronous.
> >>
> >> The <rte_htimer.h> API is available only to EAL threads and registered
> >> non-EAL threads.
> >>
> >> The htimer API allows the application to supply the current time,
> >> useful in case it already has retrieved this for other purposes,
> >> saving the cost of a rdtsc instruction (or its equivalent).
> >>
> >> Relative htimer does not retrieve a new time, but reuse the current
> >> time (as known via/at-the-time of the manage-call), again to shave off
> >> some cycles of overhead.
> >
> > I have a comment to the two points above.
> >
> > I agree that the application should supply the current time.
> >
> > This should be the concept throughout the library. I don't understand why
> TSC is used in the library at all?
> >
> > Please use a unit-less tick, and let the application decide what one tick
> means.
> >
> 
> I suspect the design of rte_htimer_mgr.h (and rte_timer.h) makes more
> sense if you think of the user of the API as not just a "monolithic"
> application, but rather a set of different modules, developed by
> different organizations, and reused across a set of applications. The
> idea behind the API design is they should all be able to share one timer
> service instance.
> 
> The different parts of the application and any future DPDK platform
> modules that use the htimer service needs to agree what a tick means in
> terms of actual wall-time, if it's not mandated by the API.

I see. Then those non-monolithic applications can agree that the unit of time is nanoseconds, or whatever makes sense for those applications. And then they can instantiate one shared HTW for that purpose.

There is no need to impose such an API limit on other users of the library.

> 
> There might be room for module-specific timer wheels as well, with
> different resolution or other characteristics. The event timer adapter's
> use of a timer wheel could be one example (although I'm not sure it is).

We are not using the event device, and I have not looked into it, so I have no qualified comments to this.

> 
> If timer-wheel-as-a-private-lego-piece is also a valid use case, then
> one could consider making the <rte_htw.h> API public as well. That is what
> I think you are asking for here: a generic timer wheel that doesn't know
> anything about time sources, time source time -> tick conversion, or
> timer source time -> monotonic wall time conversion, and maybe is also
> not bound to a particular thread.

Yes, that is what I had been searching the Internet for.

(I'm not sure what you mean by "not bound to a particular thread". Your per-thread design seems good to me.)

I don't want more stuff in the EAL. What I want is high-performance DPDK libraries we can use in our applications.

> 
> I picked TSC because it seemed like a good "universal time unit" for
> DPDK. rdtsc (and its equivalent) is also a very precise (especially on
> x86) and cheap-to-retrieve (especially on ARM, from what I understand).

The TSC does have excellent performance, but on all other parameters it is a horrible time keeper: The measurement unit depends on the underlying hardware, the TSC drifts depending on temperature, it cannot be PTP synchronized, the list is endless!

> 
> That said, at the moment, I'm leaning toward nanoseconds (uint64_t
> format) should be the default for timer expiration time instead of TSC.
> TSC could still be an option for passing the current time, since TSC
> will be a common time source, and it shaves off one conversion.

There are many reasons why nanoseconds is a much better choice than TSC.

> 
> > A unit-less tick will also let the application instantiate a HTW with higher
> resolution than the TSC. (E.g. think about oversampling in audio processing,
> or Bresenham's line drawing algorithm for 2D visuals - oversampling can sound
> and look better.)

Some of the timing data in our application have a resolution orders of magnitude higher than one nanosecond. If we combined that with a HTW library with nanosecond resolution, we would need to keep these timer values in two locations: The original high-res timer in our data structure, and the shadow low-res (nanosecond) timer in the HTW.

We might also need to frequently update the HTW timers to prevent drifting away from the high-res timers. E.g. 1.2 + 1.2 is still 2 when rounded, but adding another 1.2 to the rounded value gives 3, when it should have been 4 (3 * 1.2 = 3.6, which rounds to 4). This level of drifting would also make periodic timers in the HTW useless.
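A tiny sketch of that drift, using a made-up 1.2-tick period (not tied to any 
particular API):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double period = 1.2;	/* high-res period, in HTW ticks */
	double exact = 0.0;	/* true accumulated time */
	long rearmed = 0;	/* re-armed from the rounded value each time */
	int i;

	for (i = 0; i < 3; i++) {
		exact += period;
		rearmed = lround(rearmed + period);
	}
	/* Prints "3 vs 4": after three periods the rounded-and-re-armed
	 * timer has already drifted a full tick behind the true value. */
	printf("%ld vs %ld\n", rearmed, lround(exact));
	return 0;
}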

Please note: I haven't really considered merging the high-res timing in our application with this HTW, and I'm also not saying that PERIODIC timers in the HTW are required or even useful for our application. I'm only providing arguments for a unit-less time!

> >
> > For reference (supporting my suggestion), the dynamic timestamp field in the
> rte_mbuf structure is also defined as being unit-less. (I think NVIDIA
> implements it as nanoseconds, but that's an implementation specific choice.)
> >
> >>
> >> A semantic improvement compared to the <rte_timer.h> API is that the
> >> htimer library can give a definite answer on the question if the timer
> >> expiry callback was called, after a timer has been canceled.
> >>
> >> Below is performance data from DPDK's 'app/test' micro benchmarks,
> >> using 10k concurrent timers. The benchmarks (test_timer_perf.c and
> >> test_htimer_mgr_perf.c) aren't identical in their structure, but the
> >> numbers give some indication of the difference.
> >>
> >> Use case               htimer  timer
> >> ------------------------------------
> >> Add timer                 28    253
> >> Cancel timer              10    412
> >> Async add (source lcore)  64
> >> Async add (target lcore)  13
> >>
> >> (AMD 5900X CPU. Time in TSC.)
> >>
> >> Prototype integration of the htimer library into real, timer-heavy,
> >> applications indicates that htimer may result in significant
> >> application-level performance gains.
> >>
> >> The bitset implementation which the HWT implementation depends upon
> >> seemed generic-enough and potentially useful outside the world of
> >> HWTs, to justify being located in the EAL.
> >>
> >> This patchset is very much an RFC, and the author is yet to form an
> >> opinion on many important issues.
> >>
> >> * If deemed a suitable replacement, should the htimer replace the
> >>    current DPDK timer library in some particular (ABI-breaking)
> >>    release, or should it live side-by-side with the then-legacy
> >>    <rte_timer.h> API? A lot of things in and outside DPDK depend on
> >>    <rte_timer.h>, so coexistence may be required to facilitate a smooth
> >>    transition.
> >
> > It's my immediate impression that they are totally different in both design
> philosophy and API.
> >
> > Personal opinion: I would call it an entirely different library.
> >
> >>
> >> * Should the htimer and htw-related files be colocated with rte_timer.c
> >>    in the timer library?
> >
> > Personal opinion: No. This is an entirely different library, and should live
> for itself in a directory of its own.
> >
> >>
> >> * Would it be useful for applications using asynchronous cancel to
> >>    have the option of having the timer callback run not only in case of
> >>    timer expiration, but also cancellation (on the target lcore)? The
> >>    timer cb signature would need to include an additional parameter in
> >>    that case.
> >
> > If one thread cancels something in another thread, some synchronization
> between the threads is going to be required anyway. So we could rephrase your
> question: Will the burden of the otherwise required synchronization between
> the two threads be significantly reduced if the library has the ability to run
> the callback on asynchronous cancel?
> >
> 
> Yes.
> 
> Intuitively, it seems convenient that if you hand off a timer to a
> different lcore, the timer callback will be called exactly once,
> regardless if the timer was canceled or expired.
> 
> But, as you indicate, you may still need synchronization to solve the
> resource reclamation issue.
> 
> > Is such a feature mostly "Must have" or "Nice to have"?
> >
> > More thoughts in this area...
> >
> > If adding an additional callback parameter, it could be an enum, so the
> callback could be expanded to support "timeout (a.k.a. timer fired)", "cancel"
> and more events we have not yet come up with, e.g. "early kick".
> >
> 
> Yes, or an int.
> 
> > Here's an idea off the top of my head: An additional callback parameter has
> a (small) performance cost incurred with every timer fired (which is a very
> large multiplier). It might not be required. As an alternative to a "what
> happened" parameter to the callback, the callback could investigate the state
> of the object for which the timer fired, and draw its own conclusion on how to
> proceed. Obviously, this also has a performance cost, but perhaps the callback
> works on the object's state anyway, making this cost insignificant.
> >
> 
> It's not obvious to me that you, in the timer callback, can determine
> what happened, if the same callback is called both in the cancel and the
> expired case.
> 
> It's not the cost of an extra integer passed in a register (or of checking a
> flag for whether the timer callback should be called at all on cancellation)
> that is the concern for me; it's the extra bit of API complexity.

Then introduce the library without this feature. More features can be added later.

The library will be introduced as "experimental", so we are free to improve it and modify the ABI along the way.

> 
> > Here's another alternative to adding a "what happened" parameter to the
> callback:
> >
> > The rte_htimer could have one more callback pointer, which (if set) will be
> called on cancellation of the timer.
> >
> 
> This will grow the timer struct with 16 bytes.

If the rte_htimer struct stays within one cache line, it should be acceptable.

On the other hand, this approach is less generic than passing an additional parameter. (E.g. add yet another callback pointer for "early kick"?)

BTW, async cancel is a form of inter-thread communication. Does this library really need to provide any inter-thread communication mechanisms? Doesn't an inter-thread communication mechanism belong in a separate library?

> 
> >>
> >> * Should the rte_htimer be a nested struct, so the htw parts be separated
> >>    from the htimer parts?
> >>
> >> * <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
> >>    <rte_htw.h> may avoid a dependency on <rte_htimer_mgr.h>. Should it
> >>    be so?
> >>
> >> * rte_htimer struct is only supposed to be used by the application to
> >>    give an indication of how much memory it needs to allocate, and
> >>    its members are not supposed to be directly accessed (w/ the possible
> >>    exception of the owner_lcore_id field). Should there be a dummy
> >>    struct, or a #define RTE_HTIMER_MEMSIZE or a rte_htimer_get_memsize()
> >>    function instead, serving the same purpose? Better encapsulation,
> >>    but more inconvenient for applications. Run-time dynamic sizing
> >>    would force application-level dynamic allocations.
> >>
> >> * Asynchronous cancellation is a little tricky to use for the
> >>    application (primarily due to timer memory reclamation/race
> >>    issues). Should this functionality be removed?
> >>
> >> * Should rte_htimer_mgr_init() also retrieve the current time? If so,
> >>    there should be a variant which allows the user to specify the
> >>    time (to match rte_htimer_mgr_manage_time()). One pitfall with the
> >>    current proposed API is an application calling rte_htimer_mgr_init()
> >>    and then immediately adding a timer with a relative timeout, in
> >>    which case the current absolute time used is 0, which might be a
> >>    surprise.
> >>
> >> * Should libdivide (optionally) be used to avoid the div in the TSC ->
> >>    tick conversion? (Doesn't improve performance on Zen 3, but may
> >>    do on other CPUs.) Consider <rte_reciprocal.h> as well.
> >>
> >> * Should the TSC-per-tick be rounded up to a power of 2, so shifts can be
> >>    used for conversion? Very minor performance gains to be found there,
> >>    at least on Zen 3 cores.
> >>
> >> * Should it be possible to supply the time in rte_htimer_mgr_add()
> >>    and/or rte_htimer_mgr_manage_time() functions as ticks, rather than
> >>    as TSC? Should it be possible to also use nanoseconds?
> >>    rte_htimer_mgr_manage_time() would need a flags parameter in that
> >>    case.
> >
> > Do not use TSC anywhere in this library. Let the application decide the
> meaning of a tick.
> >
> >>
> >> * Would the event timer adapter be best off using <rte_htw.h>
> >>    directly, or <rte_htimer.h>? In the latter case, there needs to be a
> >>    way to instantiate more HWTs (similar to the "alt" functions of
> >>    <rte_timer.h>)?
> >>
> >> * Should the PERIODICAL flag (and the complexity it brings) be
> >>    removed? And leave the application with only single-shot timers, and
> >>    the option to re-add them in the timer callback.
> >
> > First thought: Yes, keep it lean and remove the periodical stuff.
> >
> > Second thought: This needs a more detailed analysis.
> >
> >  From one angle:
> >
> > How many PERIODICAL versus ONESHOT timers do we expect?
> >
> 
> I suspect you should be prepared for the ratio being anything.

In theory, anything is possible. But I'm asking that we consider realistic use cases.

> 
> > Intuitively, I would use this library for ONESHOT timers, and perhaps
> implement my periodical timers by other means.
> >
> > If the PERIODICAL:ONESHOT ratio is low, we can probably live with the extra
> cost of cancel+add for a few periodical timers.
> >
> >  From another angle:
> >
> > What is the performance gain with the PERIODICAL flag?
> >
> 
> None, pretty much. It's just there for convenience.

OK, then I suggest that you remove it, unless you get objections.

The library can be expanded with useful features at any time later. Useless features are (nearly) impossible to remove, once they are in there - they are just "technical debt" with associated maintenance costs, added complexity weaving into other features, etc..

> 
> > Without a periodical timer, cancel+add costs 10+28 cycles. How many cycles
> would a "move" function, performing both cancel and add, use?
> >
> > And then compare that to the cost (in cycles) of repeating a timer with
> PERIODICAL?
> >
> > Furthermore, not having the PERIODICAL flag probably improves the
> performance for non-periodical timers. How many cycles could we gain here?
> >
> >
> > Another, vaguely related, idea:
> >
> > The callback pointer might not need to be stored per rte_htimer, but could
> instead be common for the rte_htw.
> >
> 
> Do you mean rte_htw, or rte_htimer_mgr?
> 
> If you make one common callback, all the different parts of the
> application needs to be coordinated (in a big switch-statement, or
> something of that sort), or have some convention for using an
> application-specific wrapper structure (accessed via container_of()).
> 
> This is a problem if the timer service API consumer is a set of largely
> uncoordinated software modules.
> 
> Btw, the eventdev API has the same issue, and the proposed event
> dispatcher is one way to help facilitate application-internal decoupling.
> 
> For a module-private rte_htw instance your suggestion may work, but not
> for <rte_htimer_mgr.h>.

I was speculating that a common callback pointer might provide a performance benefit for single-purpose HTW instances. (The same concept applies if there are multiple callbacks, e.g. a "Timer Fired", a "Timer Cancelled", and an "Early Kick" callback pointer - i.e. having the callback pointers per HTW instance, instead of per timer.)
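Roughly what I am picturing, with invented names (just to illustrate where the pointers would live, not a proposed API):

/* Hypothetical single-purpose wheel: callbacks stored once per instance... */
struct single_purpose_htw {
	struct rte_htw *htw;
	void (*fired_cb)(void *timer_owner);
	void (*cancelled_cb)(void *timer_owner);
};
/* ...so each timer only needs to carry an owner pointer, not its own
 * callback pointer. */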

> 
> > When a timer fires, the callback probably needs to check/update the state of
> the object for which the timer fired anyway, so why not just let the
> application use that state to determine the appropriate action. This might
> provide some performance benefit.
> >
> > It might complicate using one HTW for multiple different purposes, though.
> Probably a useless idea, but I wanted to share the idea anyway. It might
> trigger other, better ideas in the community.
> >
> >>
> >> * Should the async result codes and the sync cancel error codes be merged
> >>    into one set of result codes?
> >>
> >> * Should the rte_htimer_mgr_async_add() have a flag which allow
> >>    buffering add request messages until rte_htimer_mgr_process() is
> >>    called? Or any manage function. Would reduce ring signaling overhead
> >>    (i.e., burst enqueue operations instead of single-element
> >>    enqueue). Could also be a rte_htimer_mgr_async_add_burst() function,
> >>    solving the same "problem" a different way. (The signature of such
> >>    a function would not be pretty.)
> >>
> >> * Does the functionality provided by the rte_htimer_mgr_process()
> >>    function match its use cases? Should there be a clearer
> >>    separation between expiry processing and asynchronous operation
> >>    processing?
> >>
> >> * Should the patchset be split into more commits? If so, how?
> >>
> >> Thanks to Erik Carrillo for his assistance.
> >>
> >> Mattias Rönnblom (2):
> >>    eal: add bitset type
> >>    eal: add high-performance timer facility


^ permalink raw reply	[relevance 3%]

* Re: [RFC 0/2] Add high-performance timer facility
  2023-02-28 16:01  0% ` Morten Brørup
@ 2023-03-01 11:18  0%   ` Mattias Rönnblom
  2023-03-01 13:31  3%     ` Morten Brørup
  0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-03-01 11:18 UTC (permalink / raw)
  To: Morten Brørup, dev
  Cc: Erik Gabriel Carrillo, David Marchand, Maria Lingemark, Stefan Sundkvist

On 2023-02-28 17:01, Morten Brørup wrote:
>> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
>> Sent: Tuesday, 28 February 2023 10.39
> 
> I have been looking for a high performance timer library (for use in a fast path TCP stack), and this looks very useful, Mattias.
> 
> My initial feedback is based on quickly skimming the patch source code, and reading this cover letter.
> 
>>
>> This patchset is an attempt to introduce a high-performance, highly
>> scalable timer facility into DPDK.
>>
>> More specifically, the goals for the htimer library are:
>>
>> * Efficient handling of a handful up to hundreds of thousands of
>>    concurrent timers.
>> * Reduced overhead of adding and canceling timers.
>> * Provide a service functionally equivalent to that of
>>    <rte_timer.h>. API/ABI backward compatibility is secondary.
>>
>> In the author's opinion, there are two main shortcomings with the
>> current DPDK timer library (i.e., rte_timer.[ch]).
>>
>> One is the synchronization overhead, where heavy-weight full-barrier
>> type synchronization is used. rte_timer.c uses per-EAL/lcore skip
>> lists, but any thread may add or cancel (or otherwise access) timers
>> managed by another lcore (and thus resides in its timer skip list).
>>
>> The other is an algorithmic shortcoming, with rte_timer.c's reliance
>> on a skip list, which, seemingly, is less efficient than certain
>> alternatives.
>>
>> This patchset implements a hierarchical timer wheel (HWT, in
> 
> Typo: HWT or HTW?

Yes. I don't understand how I could have managed to make so many such HTW -> 
HWT typos. At least I got the filenames (rte_htw.[ch]) correct.

> 
>> rte_htw.c), as per the Varghese and Lauck paper "Hashed and
>> Hierarchical Timing Wheels: Data Structures for the Efficient
>> Implementation of a Timer Facility". A HWT is a data structure
>> purposely designed for this task, and used by many operating system
>> kernel timer facilities.
>>
>> To further improve the solution described by Varghese and Lauck, a
>> bitset is placed in front of each of the timer wheels in the HWT,
>> reducing overhead of rte_htimer_mgr_manage() (i.e., progressing time
>> and expiry processing).
>>
>> Cycle-efficient scanning and manipulation of these bitsets are crucial
>> for the HWT's performance.
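As an illustration of the kind of operation this boils down to (not the 
patch's actual code): with one 64-bit word covering 64 slots, finding the next 
occupied slot is a single find-first-set (here via the GCC/Clang builtin).

#include <stdint.h>

/* Index of the next occupied slot at or after 'start' (start < 64), or -1
 * if this (single-word) wheel level is empty from 'start' onward. */
static inline int
next_used_slot(uint64_t occupied, unsigned int start)
{
	uint64_t masked = occupied & ~((UINT64_C(1) << start) - 1);

	return masked != 0 ? __builtin_ctzll(masked) : -1;
}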
>>
>> The htimer module keeps a per-lcore (or per-registered EAL thread) HWT
>> instance, much like rte_timer.c keeps a per-lcore skip list.
>>
>> To avoid expensive synchronization overhead for thread-local timer
>> management, the HWTs are accessed only from the "owning" thread.  Any
>> interaction any other thread has with a particular lcore's timer
>> wheel goes over a set of DPDK rings. A side-effect of this design is
>> that all operations working toward a "remote" HWT must be
>> asynchronous.
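In other words, a remote add is just a small request message enqueued on the 
target lcore's ring. A sketch with invented names, not the patchset's actual 
types:

#include <stdint.h>
#include <rte_ring.h>

struct add_req {
	struct rte_htimer *timer;
	uint64_t expiration_time;
};

/* Ask the owning lcore, via its request ring, to arm 'req->timer'; the
 * actual installation happens the next time that lcore processes its
 * ring. Returns 0 on success, -ENOBUFS if the ring is full. */
static int
request_remote_add(struct rte_ring *target_ring, struct add_req *req)
{
	return rte_ring_enqueue(target_ring, req);
}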
>>
>> The <rte_htimer.h> API is available only to EAL threads and registered
>> non-EAL threads.
>>
>> The htimer API allows the application to supply the current time,
>> useful in case it already has retrieved this for other purposes,
>> saving the cost of a rdtsc instruction (or its equivalent).
>>
>> Relative htimer does not retrieve a new time, but reuse the current
>> time (as known via/at-the-time of the manage-call), again to shave off
>> some cycles of overhead.
> 
> I have a comment to the two points above.
> 
> I agree that the application should supply the current time.
> 
> This should be the concept throughout the library. I don't understand why TSC is used in the library at all?
> 
> Please use a unit-less tick, and let the application decide what one tick means.
> 

I suspect the design of rte_htimer_mgr.h (and rte_timer.h) makes more 
sense if you think of the user of the API as not just a "monolithic" 
application, but rather a set of different modules, developed by 
different organizations, and reused across a set of applications. The 
idea behind the API design is they should all be able to share one timer 
service instance.

The different parts of the application and any future DPDK platform 
modules that use the htimer service needs to agree what a tick means in 
terms of actual wall-time, if it's not mandated by the API.

There might be room for module-specific timer wheels as well, with 
different resolution or other characteristics. The event timer adapter's 
use of a timer wheel could be one example (although I'm not sure it is).

If timer-wheel-as-a-private-lego-piece is also a valid use case, then 
one could consider making the <rte_htw.h> API public as well. That is what 
I think you are asking for here: a generic timer wheel that doesn't know 
anything about time sources, time source time -> tick conversion, or 
timer source time -> monotonic wall time conversion, and maybe is also 
not bound to a particular thread.
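Something along these lines, I mean (illustrative declarations only, not the 
actual <rte_htw.h> signatures):

#include <stdint.h>

struct rte_htimer;
struct hypothetical_htw;

/* A wheel keyed on an opaque 64-bit tick; the caller owns both the
 * meaning of a tick and the current-time source. */
struct hypothetical_htw *hypothetical_htw_create(unsigned int num_levels,
						 unsigned int slots_per_level);
void hypothetical_htw_add(struct hypothetical_htw *htw,
			  struct rte_htimer *timer, uint64_t expiry_tick);
void hypothetical_htw_advance(struct hypothetical_htw *htw,
			      uint64_t current_tick);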

I picked TSC because it seemed like a good "universal time unit" for 
DPDK. rdtsc (and its equivalent) is also a very precise (especially on 
x86) and cheap-to-retrieve (especially on ARM, from what I understand).

That said, at the moment, I'm leaning toward nanoseconds (uint64_t 
format) should be the default for timer expiration time instead of TSC. 
TSC could still be an option for passing the current time, since TSC 
will be a common time source, and it shaves off one conversion.
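For the TSC-input option, the conversion would be a single helper at the 
manage-call boundary, something like this sketch (rte_get_tsc_hz() is the 
existing EAL call):

#include <stdint.h>
#include <rte_cycles.h>

#define NSECS_PER_SEC UINT64_C(1000000000)

/* Convert a raw TSC value to nanoseconds, avoiding 64-bit overflow for
 * TSC frequencies below ~18 GHz. */
static uint64_t
tsc_to_ns(uint64_t tsc)
{
	uint64_t hz = rte_get_tsc_hz();

	return (tsc / hz) * NSECS_PER_SEC + (tsc % hz) * NSECS_PER_SEC / hz;
}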

> A unit-less tick will also let the application instantiate a HTW with higher resolution than the TSC. (E.g. think about oversampling in audio processing, or Bresenham's line drawing algorithm for 2D visuals - oversampling can sound and look better.)
> 
> For reference (supporting my suggestion), the dynamic timestamp field in the rte_mbuf structure is also defined as being unit-less. (I think NVIDIA implements it as nanoseconds, but that's an implementation specific choice.)
> 
>>
>> A semantic improvement compared to the <rte_timer.h> API is that the
>> htimer library can give a definite answer on the question if the timer
>> expiry callback was called, after a timer has been canceled.
>>
>> Below is performance data from DPDK's 'app/test' micro benchmarks,
>> using 10k concurrent timers. The benchmarks (test_timer_perf.c and
>> test_htimer_mgr_perf.c) aren't identical in their structure, but the
>> numbers give some indication of the difference.
>>
>> Use case               htimer  timer
>> ------------------------------------
>> Add timer                 28    253
>> Cancel timer              10    412
>> Async add (source lcore)  64
>> Async add (target lcore)  13
>>
>> (AMD 5900X CPU. Time in TSC.)
>>
>> Prototype integration of the htimer library into real, timer-heavy,
>> applications indicates that htimer may result in significant
>> application-level performance gains.
>>
>> The bitset implementation which the HWT implementation depends upon
>> seemed generic-enough and potentially useful outside the world of
>> HWTs, to justify being located in the EAL.
>>
>> This patchset is very much an RFC, and the author is yet to form an
>> opinion on many important issues.
>>
>> * If deemed a suitable replacement, should the htimer replace the
>>    current DPDK timer library in some particular (ABI-breaking)
>>    release, or should it live side-by-side with the then-legacy
>>    <rte_timer.h> API? A lot of things in and outside DPDK depend on
>>    <rte_timer.h>, so coexistence may be required to facilitate a smooth
>>    transition.
> 
> It's my immediate impression that they are totally different in both design philosophy and API.
> 
> Personal opinion: I would call it an entirely different library.
> 
>>
>> * Should the htimer and htw-related files be colocated with rte_timer.c
>>    in the timer library?
> 
> Personal opinion: No. This is an entirely different library, and should live for itself in a directory of its own.
> 
>>
>> * Would it be useful for applications using asynchronous cancel to
>>    have the option of having the timer callback run not only in case of
>>    timer expiration, but also cancellation (on the target lcore)? The
>>    timer cb signature would need to include an additional parameter in
>>    that case.
> 
> If one thread cancels something in another thread, some synchronization between the threads is going to be required anyway. So we could rephrase your question: Will the burden of the otherwise required synchronization between the two threads be significantly reduced if the library has the ability to run the callback on asynchronous cancel?
> 

Yes.

Intuitively, it seems convenient that if you hand off a timer to a 
different lcore, the timer callback will be called exactly once, 
regardless if the timer was canceled or expired.

But, as you indicate, you may still need synchronization to solve the 
resource reclamation issue.

> Is such a feature mostly "Must have" or "Nice to have"?
> 
> More thoughts in this area...
> 
> If adding an additional callback parameter, it could be an enum, so the callback could be expanded to support "timeout (a.k.a. timer fired)", "cancel" and more events we have not yet come up with, e.g. "early kick".
> 

Yes, or an int.

> Here's an idea off the top of my head: An additional callback parameter has a (small) performance cost incurred with every timer fired (which is a very large multiplier). It might not be required. As an alternative to a "what happened" parameter to the callback, the callback could investigate the state of the object for which the timer fired, and draw its own conclusion on how to proceed. Obviously, this also has a performance cost, but perhaps the callback works on the object's state anyway, making this cost insignificant.
> 

It's not obvious to me that you, in the timer callback, can determine 
what happened, if the same callback is called both in the cancel and the 
expired case.

It's not the cost of an extra integer passed in a register (or of checking a 
flag for whether the timer callback should be called at all on cancellation) 
that is the concern for me; it's the extra bit of API complexity.

> Here's another alternative to adding a "what happened" parameter to the callback:
> 
> The rte_htimer could have one more callback pointer, which (if set) will be called on cancellation of the timer.
> 

This will grow the timer struct with 16 bytes.

>>
>> * Should the rte_htimer be a nested struct, so the htw parts be separated
>>    from the htimer parts?
>>
>> * <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
>>    <rte_htw.h> may avoid a dependency on <rte_htimer_mgr.h>. Should it
>>    be so?
>>
>> * rte_htimer struct is only supposed to be used by the application to
>>    give an indication of how much memory it needs to allocate, and
>>    its members are not supposed to be directly accessed (w/ the possible
>>    exception of the owner_lcore_id field). Should there be a dummy
>>    struct, or a #define RTE_HTIMER_MEMSIZE or a rte_htimer_get_memsize()
>>    function instead, serving the same purpose? Better encapsulation,
>>    but more inconvenient for applications. Run-time dynamic sizing
>>    would force application-level dynamic allocations.
>>
>> * Asynchronous cancellation is a little tricky to use for the
>>    application (primarily due to timer memory reclamation/race
>>    issues). Should this functionality be removed?
>>
>> * Should rte_htimer_mgr_init() also retrieve the current time? If so,
>>    there should be a variant which allows the user to specify the
>>    time (to match rte_htimer_mgr_manage_time()). One pitfall with the
>>    current proposed API is an application calling rte_htimer_mgr_init()
>>    and then immediately adding a timer with a relative timeout, in
>>    which case the current absolute time used is 0, which might be a
>>    surprise.
>>
>> * Should libdivide (optionally) be used to avoid the div in the TSC ->
>>    tick conversion? (Doesn't improve performance on Zen 3, but may
>>    do on other CPUs.) Consider <rte_reciprocal.h> as well.
>>
>> * Should the TSC-per-tick be rounded up to a power of 2, so shifts can be
>>    used for conversion? Very minor performance gains to be found there,
>>    at least on Zen 3 cores.
>>
>> * Should it be possible to supply the time in rte_htimer_mgr_add()
>>    and/or rte_htimer_mgr_manage_time() functions as ticks, rather than
>>    as TSC? Should it be possible to also use nanoseconds?
>>    rte_htimer_mgr_manage_time() would need a flags parameter in that
>>    case.
> 
> Do not use TSC anywhere in this library. Let the application decide the meaning of a tick.
> 
>>
>> * Would the event timer adapter be best off using <rte_htw.h>
>>    directly, or <rte_htimer.h>? In the latter case, there needs to be a
>>    way to instantiate more HWTs (similar to the "alt" functions of
>>    <rte_timer.h>)?
>>
>> * Should the PERIODICAL flag (and the complexity it brings) be
>>    removed? And leave the application with only single-shot timers, and
>>    the option to re-add them in the timer callback.
> 
> First thought: Yes, keep it lean and remove the periodical stuff.
> 
> Second thought: This needs a more detailed analysis.
> 
>  From one angle:
> 
> How many PERIODICAL versus ONESHOT timers do we expect?
> 

I suspect you should be prepared for the ratio being anything.

> Intuitively, I would use this library for ONESHOT timers, and perhaps implement my periodical timers by other means.
> 
> If the PERIODICAL:ONESHOT ratio is low, we can probably live with the extra cost of cancel+add for a few periodical timers.
> 
>  From another angle:
> 
> What is the performance gain with the PERIODICAL flag?
> 

None, pretty much. It's just there for convenience.

> Without a periodical timer, cancel+add costs 10+28 cycles. How many cycles would a "move" function, performing both cancel and add, use?
> 
> And then compare that to the cost (in cycles) of repeating a timer with PERIODICAL?
> 
> Furthermore, not having the PERIODICAL flag probably improves the performance for non-periodical timers. How many cycles could we gain here?
> 
> 
> Another, vaguely related, idea:
> 
> The callback pointer might not need to be stored per rte_htimer, but could instead be common for the rte_htw.
> 

Do you mean rte_htw, or rte_htimer_mgr?

If you make one common callback, all the different parts of the 
application need to be coordinated (in a big switch-statement, or 
something of that sort), or have some convention for using an 
application-specific wrapper structure (accessed via container_of()).

This is a problem if the timer service API consumer is a set of largely 
uncoordinated software modules.

Btw, the eventdev API has the same issue, and the proposed event 
dispatcher is one way to help facilitate application-internal decoupling.

For a module-private rte_htw instance your suggestion may work, but not 
for <rte_htimer_mgr.h>.
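
For illustration, a minimal sketch of that wrapper-struct convention; the 
callback signature below is assumed, not taken from the actual patchset:

#include <rte_common.h>		/* container_of(), __rte_unused */
#include <rte_htimer.h>

struct app_timer {
	struct rte_htimer htimer;		/* embedded library timer */
	void (*handler)(struct app_timer *at);	/* module-specific handler */
};

/* The single callback registered for all timers of the wheel. */
static void
common_timer_cb(struct rte_htimer *timer, void *arg __rte_unused)
{
	struct app_timer *at =
		container_of(timer, struct app_timer, htimer);

	at->handler(at);	/* dispatch to the owning software module */
}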

> When a timer fires, the callback probably needs to check/update the state of the object for which the timer fired anyway, so why not just let the application use that state to determine the appropriate action. This might provide some performance benefit.
> 
> It might complicate using one HTW for multiple different purposes, though. Probably a useless idea, but I wanted to share the idea anyway. It might trigger other, better ideas in the community.
> 
>>
>> * Should the async result codes and the sync cancel error codes be merged
>>    into one set of result codes?
>>
>> * Should the rte_htimer_mgr_async_add() have a flag which allow
>>    buffering add request messages until rte_htimer_mgr_process() is
>>    called? Or any manage function. Would reduce ring signaling overhead
>>    (i.e., burst enqueue operations instead of single-element
>>    enqueue). Could also be a rte_htimer_mgr_async_add_burst() function,
>>    solving the same "problem" a different way. (The signature of such
>>    a function would not be pretty.)
>>
>> * Does the functionality provided by the rte_htimer_mgr_process()
>>    function match its use cases? Should there be a more clear
>>    separation between expiry processing and asynchronous operation
>>    processing?
>>
>> * Should the patchset be split into more commits? If so, how?
>>
>> Thanks to Erik Carrillo for his assistance.
>>
>> Mattias Rönnblom (2):
>>    eal: add bitset type
>>    eal: add high-performance timer facility


^ permalink raw reply	[relevance 0%]

* RE: [RFC 0/2] Add high-performance timer facility
  2023-02-28  9:39  3% [RFC 0/2] Add high-performance timer facility Mattias Rönnblom
@ 2023-02-28 16:01  0% ` Morten Brørup
  2023-03-01 11:18  0%   ` Mattias Rönnblom
  2023-03-15 17:03  3% ` [RFC v2 " Mattias Rönnblom
  1 sibling, 1 reply; 200+ results
From: Morten Brørup @ 2023-02-28 16:01 UTC (permalink / raw)
  To: Mattias Rönnblom, dev
  Cc: Erik Gabriel Carrillo, David Marchand, maria.lingemark, Stefan Sundkvist

> From: Mattias Rönnblom [mailto:mattias.ronnblom@ericsson.com]
> Sent: Tuesday, 28 February 2023 10.39

I have been looking for a high performance timer library (for use in a fast path TCP stack), and this looks very useful, Mattias.

My initial feedback is based on quickly skimming the patch source code, and reading this cover letter.

> 
> This patchset is an attempt to introduce a high-performance, highly
> scalable timer facility into DPDK.
> 
> More specifically, the goals for the htimer library are:
> 
> * Efficient handling of a handful up to hundreds of thousands of
>   concurrent timers.
> * Reduced overhead of adding and canceling timers.
> * Provide a service functionally equivalent to that of
>   <rte_timer.h>. API/ABI backward compatibility is secondary.
> 
> In the author's opinion, there are two main shortcomings with the
> current DPDK timer library (i.e., rte_timer.[ch]).
> 
> One is the synchronization overhead, where heavy-weight full-barrier
> type synchronization is used. rte_timer.c uses per-EAL/lcore skip
> lists, but any thread may add or cancel (or otherwise access) timers
> managed by another lcore (and thus resides in its timer skip list).
> 
> The other is an algorithmic shortcoming, with rte_timer.c's reliance
> on a skip list, which, seemingly, is less efficient than certain
> alternatives.
> 
> This patchset implements a hierarchical timer wheel (HWT, in

Typo: HWT or HTW?

> rte_htw.c), as per the Varghese and Lauck paper "Hashed and
> Hierarchical Timing Wheels: Data Structures for the Efficient
> Implementation of a Timer Facility". A HWT is a data structure
> purposely designed for this task, and used by many operating system
> kernel timer facilities.
> 
> To further improve the solution described by Varghese and Lauck, a
> bitset is placed in front of each of the timer wheels in the HWT,
> reducing overhead of rte_htimer_mgr_manage() (i.e., progressing time
> and expiry processing).
> 
> Cycle-efficient scanning and manipulation of these bitsets are crucial
> for the HWT's performance.
> 
> The htimer module keeps a per-lcore (or per-registered EAL thread) HWT
> instance, much like rte_timer.c keeps a per-lcore skip list.
> 
> To avoid expensive synchronization overhead for thread-local timer
> management, the HWTs are accessed only from the "owning" thread.  Any
> interaction any other thread has with a particular lcore's timer
> wheel goes over a set of DPDK rings. A side-effect of this design is
> that all operations working toward a "remote" HWT must be
> asynchronous.
> 
> The <rte_htimer.h> API is available only to EAL threads and registered
> non-EAL threads.
> 
> The htimer API allows the application to supply the current time,
> useful in case it already has retrieved this for other purposes,
> saving the cost of a rdtsc instruction (or its equivalent).
> 
> Relative htimer does not retrieve a new time, but reuses the current
> time (as known via/at-the-time of the manage-call), again to shave off
> some cycles of overhead.

I have a comment to the two points above.

I agree that the application should supply the current time.

This should be the concept throughout the library. I don't understand why TSC is used in the library at all?

Please use a unit-less tick, and let the application decide what one tick means.

A unit-less tick will also let the application instantiate a HTW with higher resolution than the TSC. (E.g. think about oversampling in audio processing, or Bresenham's line drawing algorithm for 2D visuals - oversampling can sound and look better.)

For reference (supporting my suggestion), the dynamic timestamp field in the rte_mbuf structure is also defined as being unit-less. (I think NVIDIA implements it as nanoseconds, but that's an implementation specific choice.)
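
To illustrate, application-side conversion could be as simple as the 
sketch below; the tick length and the helper are made up, and I am 
assuming a manage call that accepts a unit-less tick count:

#include <stdint.h>
#include <rte_cycles.h>

#define APP_TSC_PER_TICK 1024	/* application-chosen tick length */

static inline uint64_t
app_current_tick(void)
{
	return rte_rdtsc() / APP_TSC_PER_TICK;
}

/* e.g.: rte_htimer_mgr_manage_time(app_current_tick()); */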

> 
> A semantic improvement compared to the <rte_timer.h> API is that the
> htimer library can give a definite answer on the question if the timer
> expiry callback was called, after a timer has been canceled.
> 
> Below is a performance data from DPDK's 'app/test' micro benchmarks,
> using 10k concurrent timers. The benchmarks (test_timer_perf.c and
> test_htimer_mgr_perf.c) aren't identical in their structure, but the
> numbers give some indication of the difference.
> 
> Use case               htimer  timer
> ------------------------------------
> Add timer                 28    253
> Cancel timer              10    412
> Async add (source lcore)  64
> Async add (target lcore)  13
> 
> (AMD 5900X CPU. Time in TSC.)
> 
> Prototype integration of the htimer library into real, timer-heavy,
> applications indicates that htimer may result in significant
> application-level performance gains.
> 
> The bitset implementation which the HWT implementation depends upon
> seemed generic-enough and potentially useful outside the world of
> HWTs, to justify being located in the EAL.
> 
> This patchset is very much an RFC, and the author is yet to form an
> opinion on many important issues.
> 
> * If deemed a suitable replacement, should the htimer replace the
>   current DPDK timer library in some particular (ABI-breaking)
>   release, or should it live side-by-side with the then-legacy
>   <rte_timer.h> API? A lot of things in and outside DPDK depend on
>   <rte_timer.h>, so coexistence may be required to facilitate a smooth
>   transition.

It's my immediate impression that they are totally different in both design philosophy and API.

Personal opinion: I would call it an entirely different library.

> 
> * Should the htimer and htw-related files be colocated with rte_timer.c
>   in the timer library?

Personal opinion: No. This is an entirely different library, and should live for itself in a directory of its own.

> 
> * Would it be useful for applications using asynchronous cancel to
>   have the option of having the timer callback run not only in case of
>   timer expiration, but also cancellation (on the target lcore)? The
>   timer cb signature would need to include an additional parameter in
>   that case.

If one thread cancels something in another thread, some synchronization between the threads is going to be required anyway. So we could rephrase your question: Will the burden of the otherwise required synchronization between the two threads be significantly reduced if the library has the ability to run the callback on asynchronous cancel?

Is such a feature mostly "Must have" or "Nice to have"?

More thoughts in this area...

If adding an additional callback parameter, it could be an enum, so the callback could be expanded to support "timeout (a.k.a. timer fired)", "cancel" and more events we have not yet come up with, e.g. "early kick".

Here's an idea off the top of my head: An additional callback parameter has a (small) performance cost incurred with every timer fired (which is a very large multiplier). It might not be required. As an alternative to a "what happened" parameter to the callback, the callback could investigate the state of the object for which the timer fired, and draw its own conclusion on how to proceed. Obviously, this also has a performance cost, but perhaps the callback works on the object's state anyway, making this cost insignificant.

Here's another alternative to adding a "what happened" parameter to the callback:

The rte_htimer could have one more callback pointer, which (if set) will be called on cancellation of the timer.

> 
> * Should the rte_htimer be a nested struct, so the htw parts be separated
>   from the htimer parts?
> 
> * <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
>   <rte_htw.h> may avoid a dependency on <rte_htimer_mgr.h>. Should it
>   be so?
> 
> * rte_htimer struct is only supposed to be used by the application to
>   give an indication of how much memory it needs to allocate, and
>   its members are not supposed to be directly accessed (w/ the possible
>   exception of the owner_lcore_id field). Should there be a dummy
>   struct, or a #define RTE_HTIMER_MEMSIZE or a rte_htimer_get_memsize()
>   function instead, serving the same purpose? Better encapsulation,
>   but more inconvenient for applications. Run-time dynamic sizing
>   would force application-level dynamic allocations.
> 
> * Asynchronous cancellation is a little tricky to use for the
>   application (primarily due to timer memory reclamation/race
>   issues). Should this functionality be removed?
> 
> * Should rte_htimer_mgr_init() also retrieve the current time? If so,
>   there should be a variant which allows the user to specify the
>   time (to match rte_htimer_mgr_manage_time()). One pitfall with the
>   current proposed API is an application calling rte_htimer_mgr_init()
>   and then immediately adding a timer with a relative timeout, in
>   which case the current absolute time used is 0, which might be a
>   surprise.
> 
> * Should libdivide (optionally) be used to avoid the div in the TSC ->
>   tick conversion? (Doesn't improve performance on Zen 3, but may
>   do on other CPUs.) Consider <rte_reciprocal.h> as well.
> 
> * Should the TSC-per-tick be rounded up to a power of 2, so shifts can be
>   used for conversion? Very minor performance gains to be found there,
>   at least on Zen 3 cores.
> 
> * Should it be possible to supply the time in rte_htimer_mgr_add()
>   and/or rte_htimer_mgr_manage_time() functions as ticks, rather than
>   as TSC? Should it be possible to also use nanoseconds?
>   rte_htimer_mgr_manage_time() would need a flags parameter in that
>   case.

Do not use TSC anywhere in this library. Let the application decide the meaning of a tick.

> 
> * Would the event timer adapter be best off using <rte_htw.h>
>   directly, or <rte_htimer.h>? In the latter case, there needs to be a
>   way to instantiate more HWTs (similar to the "alt" functions of
>   <rte_timer.h>)?
> 
> * Should the PERIODICAL flag (and the complexity it brings) be
>   removed? And leave the application with only single-shot timers, and
>   the option to re-add them in the timer callback.

First thought: Yes, keep it lean and remove the periodical stuff.

Second thought: This needs a more detailed analysis.

From one angle:

How many PERIODICAL versus ONESHOT timers do we expect?

Intuitively, I would use this library for ONESHOT timers, and perhaps implement my periodical timers by other means.

If the PERIODICAL:ONESHOT ratio is low, we can probably live with the extra cost of cancel+add for a few periodical timers.

From another angle:

What is the performance gain with the PERIODICAL flag?

Without a periodical timer, cancel+add costs 10+28 cycles. How many cycles would a "move" function, performing both cancel and add, use?

And then compare that to the cost (in cycles) of repeating a timer with PERIODICAL?

Furthermore, not having the PERIODICAL flag probably improves the performance for non-periodical timers. How many cycles could we gain here?
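
For reference, emulating a periodical timer with ONESHOT timers could 
look roughly like the sketch below; the add function's signature, the 
period constant and do_periodic_work() are all assumptions on my part:

#define APP_PERIOD_TICKS 100	/* application-chosen period */

static void do_periodic_work(void *arg);	/* application code */

static void
periodic_cb(struct rte_htimer *timer, void *arg)
{
	do_periodic_work(arg);

	/* Re-arm the same timer with a relative timeout. */
	rte_htimer_mgr_add(timer, APP_PERIOD_TICKS, periodic_cb, arg, 0);
}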


Another, vaguely related, idea:

The callback pointer might not need to be stored per rte_htimer, but could instead be common for the rte_htw.

When a timer fires, the callback probably needs to check/update the state of the object for which the timer fired anyway, so why not just let the application use that state to determine the appropriate action. This might provide some performance benefit.

It might complicate using one HTW for multiple different purposes, though. Probably a useless idea, but I wanted to share the idea anyway. It might trigger other, better ideas in the community.

> 
> * Should the async result codes and the sync cancel error codes be merged
>   into one set of result codes?
> 
> * Should the rte_htimer_mgr_async_add() have a flag which allow
>   buffering add request messages until rte_htimer_mgr_process() is
>   called? Or any manage function. Would reduce ring signaling overhead
>   (i.e., burst enqueue operations instead of single-element
>   enqueue). Could also be a rte_htimer_mgr_async_add_burst() function,
>   solving the same "problem" a different way. (The signature of such
>   a function would not be pretty.)
> 
> * Does the functionality provided by the rte_htimer_mgr_process()
>   function match its use cases? Should there be a more clear
>   separation between expiry processing and asynchronous operation
>   processing?
> 
> * Should the patchset be split into more commits? If so, how?
> 
> Thanks to Erik Carrillo for his assistance.
> 
> Mattias Rönnblom (2):
>   eal: add bitset type
>   eal: add high-performance timer facility

^ permalink raw reply	[relevance 0%]

* [RFC 0/2] Add high-performance timer facility
@ 2023-02-28  9:39  3% Mattias Rönnblom
  2023-02-28 16:01  0% ` Morten Brørup
  2023-03-15 17:03  3% ` [RFC v2 " Mattias Rönnblom
  0 siblings, 2 replies; 200+ results
From: Mattias Rönnblom @ 2023-02-28  9:39 UTC (permalink / raw)
  To: dev
  Cc: Erik Gabriel Carrillo, David Marchand, maria.lingemark,
	Stefan Sundkvist, Mattias Rönnblom

This patchset is an attempt to introduce a high-performance, highly
scalable timer facility into DPDK.

More specifically, the goals for the htimer library are:

* Efficient handling of a handful up to hundreds of thousands of
  concurrent timers.
* Reduced overhead of adding and canceling timers.
* Provide a service functionally equivalent to that of
  <rte_timer.h>. API/ABI backward compatibility is secondary.

In the author's opinion, there are two main shortcomings with the
current DPDK timer library (i.e., rte_timer.[ch]).

One is the synchronization overhead, where heavy-weight full-barrier
type synchronization is used. rte_timer.c uses per-EAL/lcore skip
lists, but any thread may add or cancel (or otherwise access) timers
managed by another lcore (and thus reside in its timer skip list).

The other is an algorithmic shortcoming, with rte_timer.c's reliance
on a skip list, which, seemingly, is less efficient than certain
alternatives.

This patchset implements a hierarchical timer wheel (HWT, in
rte_htw.c), as per the Varghese and Lauck paper "Hashed and
Hierarchical Timing Wheels: Data Structures for the Efficient
Implementation of a Timer Facility". A HWT is a data structure
purposely designed for this task, and used by many operating system
kernel timer facilities.

To further improve the solution described by Varghese and Lauck, a
bitset is placed in front of each of the timer wheels in the HWT,
reducing overhead of rte_htimer_mgr_manage() (i.e., progressing time
and expiry processing).

Cycle-efficient scanning and manipulation of these bitsets are crucial
for the HWT's performance.
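
As a sketch of the kind of operation involved (this is not the actual
rte_bitset/rte_htw code), visiting only the occupied slots of a 64-slot
wheel level boils down to scanning set bits:

#include <stdint.h>

static inline void
scan_level(uint64_t occupied, void (*visit_slot)(unsigned int slot))
{
	while (occupied != 0) {
		unsigned int slot = (unsigned int)__builtin_ctzll(occupied);

		visit_slot(slot);
		occupied &= occupied - 1;	/* clear the lowest set bit */
	}
}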

The htimer module keeps a per-lcore (or per-registered EAL thread) HWT
instance, much like rte_timer.c keeps a per-lcore skip list.

To avoid expensive synchronization overhead for thread-local timer
management, the HWTs are accessed only from the "owning" thread.  Any
interaction any other thread has with a particular lcore's timer
wheel goes over a set of DPDK rings. A side-effect of this design is
that all operations working toward a "remote" HWT must be
asynchronous.

The <rte_htimer.h> API is available only to EAL threads and registered
non-EAL threads.

The htimer API allows the application to supply the current time,
useful in case it already has retrieved this for other purposes,
saving the cost of a rdtsc instruction (or its equivalent).

Relative htimer does not retrieve a new time, but reuses the current
time (as known via/at-the-time of the manage-call), again to shave off
some cycles of overhead.
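
A sketch of the intended usage pattern (assuming rte_htimer_mgr_manage_time()
takes the current TSC value; the exact parameter list is not spelled out
here):

#include <stdint.h>
#include <rte_cycles.h>

static void
lcore_poll_iteration(void)
{
	uint64_t now = rte_rdtsc();	/* already needed by the application */

	/* ... RX/TX burst processing using 'now' ... */

	rte_htimer_mgr_manage_time(now);	/* no second rdtsc needed */
}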

A semantic improvement compared to the <rte_timer.h> API is that the
htimer library can give a definite answer on the question if the timer
expiry callback was called, after a timer has been canceled.

Below is a performance data from DPDK's 'app/test' micro benchmarks,
using 10k concurrent timers. The benchmarks (test_timer_perf.c and
test_htimer_mgr_perf.c) aren't identical in their structure, but the
numbers give some indication of the difference.

Use case               htimer  timer
------------------------------------
Add timer                 28    253
Cancel timer              10    412
Async add (source lcore)  64
Async add (target lcore)  13

(AMD 5900X CPU. Time in TSC.)

Prototype integration of the htimer library into real, timer-heavy,
applications indicates that htimer may result in significant
application-level performance gains.

The bitset implementation which the HWT implementation depends upon
seemed generic-enough and potentially useful outside the world of
HWTs, to justify being located in the EAL.

This patchset is very much an RFC, and the author is yet to form an
opinion on many important issues.

* If deemed a suitable replacement, should the htimer replace the
  current DPDK timer library in some particular (ABI-breaking)
  release, or should it live side-by-side with the then-legacy
  <rte_timer.h> API? A lot of things in and outside DPDK depend on
  <rte_timer.h>, so coexistence may be required to facilitate a smooth
  transition.

* Should the htimer and htw-related files be colocated with rte_timer.c
  in the timer library?

* Would it be useful for applications using asynchronous cancel to
  have the option of having the timer callback run not only in case of
  timer expiration, but also cancellation (on the target lcore)? The
  timer cb signature would need to include an additional parameter in
  that case.

* Should the rte_htimer be a nested struct, so the htw parts be separated
  from the htimer parts?

* <rte_htimer.h> is kept separate from <rte_htimer_mgr.h>, so that
  <rte_htw.h> may avoid a dependency on <rte_htimer_mgr.h>. Should it
  be so?

* rte_htimer struct is only supposed to be used by the application to
  give an indication of how much memory it needs to allocate, and
  its members are not supposed to be directly accessed (w/ the possible
  exception of the owner_lcore_id field). Should there be a dummy
  struct, or a #define RTE_HTIMER_MEMSIZE or a rte_htimer_get_memsize()
  function instead, serving the same purpose? Better encapsulation,
  but more inconvenient for applications. Run-time dynamic sizing
  would force application-level dynamic allocations.

* Asynchronous cancellation is a little tricky to use for the
  application (primarily due to timer memory reclamation/race
  issues). Should this functionality be removed?
  
* Should rte_htimer_mgr_init() also retrieve the current time? If so,
  there should be a variant which allows the user to specify the
  time (to match rte_htimer_mgr_manage_time()). One pitfall with the
  current proposed API is an application calling rte_htimer_mgr_init()
  and then immediately adding a timer with a relative timeout, in
  which case the current absolute time used is 0, which might be a
  surprise.

* Should libdivide (optionally) be used to avoid the div in the TSC ->
  tick conversion? (Doesn't improve performance on Zen 3, but may
  do on other CPUs.) Consider <rte_reciprocal.h> as well.

* Should the TSC-per-tick be rounded up to a power of 2, so shifts can be
  used for conversion? Very minor performance gains to be found there,
  at least on Zen 3 cores.

* Should it be possible to supply the time in rte_htimer_mgr_add()
  and/or rte_htimer_mgr_manage_time() functions as ticks, rather than
  as TSC? Should it be possible to also use nanoseconds?
  rte_htimer_mgr_manage_time() would need a flags parameter in that
  case.

* Would the event timer adapter be best off using <rte_htw.h>
  directly, or <rte_htimer.h>? In the latter case, there needs to be a
  way to instantiate more HWTs (similar to the "alt" functions of
  <rte_timer.h>)?

* Should the PERIODICAL flag (and the complexity it brings) be
  removed? And leave the application with only single-shot timers, and
  the option to re-add them in the timer callback.

* Should the async result codes and the sync cancel error codes be merged
  into one set of result codes?

* Should the rte_htimer_mgr_async_add() have a flag which allow
  buffering add request messages until rte_htimer_mgr_process() is
  called? Or any manage function. Would reduce ring signaling overhead
  (i.e., burst enqueue operations instead of single-element
  enqueue). Could also be a rte_htimer_mgr_async_add_burst() function,
  solving the same "problem" a different way. (The signature of such
  a function would not be pretty.)

* Does the functionality provided by the rte_htimer_mgr_process()
  function match its use cases? Should there be a more clear
  separation between expiry processing and asynchronous operation
  processing?

* Should the patchset be split into more commits? If so, how?

Thanks to Erik Carrillo for his assistance.

Mattias Rönnblom (2):
  eal: add bitset type
  eal: add high-performance timer facility

 app/test/meson.build             |  10 +-
 app/test/test_bitset.c           | 646 +++++++++++++++++++++++
 app/test/test_htimer_mgr.c       | 674 ++++++++++++++++++++++++
 app/test/test_htimer_mgr_perf.c  | 324 ++++++++++++
 app/test/test_htw.c              | 478 +++++++++++++++++
 app/test/test_htw_perf.c         | 181 +++++++
 doc/api/doxy-api-index.md        |   5 +-
 doc/api/doxy-api.conf.in         |   1 +
 lib/eal/common/meson.build       |   1 +
 lib/eal/common/rte_bitset.c      |  29 +
 lib/eal/include/meson.build      |   1 +
 lib/eal/include/rte_bitset.h     | 878 +++++++++++++++++++++++++++++++
 lib/eal/version.map              |   3 +
 lib/htimer/meson.build           |   7 +
 lib/htimer/rte_htimer.h          |  65 +++
 lib/htimer/rte_htimer_mgr.c      | 488 +++++++++++++++++
 lib/htimer/rte_htimer_mgr.h      | 497 +++++++++++++++++
 lib/htimer/rte_htimer_msg.h      |  44 ++
 lib/htimer/rte_htimer_msg_ring.c |  18 +
 lib/htimer/rte_htimer_msg_ring.h |  49 ++
 lib/htimer/rte_htw.c             | 437 +++++++++++++++
 lib/htimer/rte_htw.h             |  49 ++
 lib/htimer/version.map           |  17 +
 lib/meson.build                  |   1 +
 24 files changed, 4901 insertions(+), 2 deletions(-)
 create mode 100644 app/test/test_bitset.c
 create mode 100644 app/test/test_htimer_mgr.c
 create mode 100644 app/test/test_htimer_mgr_perf.c
 create mode 100644 app/test/test_htw.c
 create mode 100644 app/test/test_htw_perf.c
 create mode 100644 lib/eal/common/rte_bitset.c
 create mode 100644 lib/eal/include/rte_bitset.h
 create mode 100644 lib/htimer/meson.build
 create mode 100644 lib/htimer/rte_htimer.h
 create mode 100644 lib/htimer/rte_htimer_mgr.c
 create mode 100644 lib/htimer/rte_htimer_mgr.h
 create mode 100644 lib/htimer/rte_htimer_msg.h
 create mode 100644 lib/htimer/rte_htimer_msg_ring.c
 create mode 100644 lib/htimer/rte_htimer_msg_ring.h
 create mode 100644 lib/htimer/rte_htw.c
 create mode 100644 lib/htimer/rte_htw.h
 create mode 100644 lib/htimer/version.map

-- 
2.34.1


^ permalink raw reply	[relevance 3%]

* Re: [RFC PATCH] drivers/net: fix RSS multi-queue mode check
  @ 2023-02-28  8:23  3%       ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-02-28  8:23 UTC (permalink / raw)
  To: lihuisong (C),
	Ajit Khaparde, Somnath Kotur, Rahul Lakkireddy, Simei Su,
	Wenjun Wu, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
	Evgeny Schemeilin, Igor Chauskin, John Daley, Hyong Youb Kim,
	Qi Zhang, Xiao Wang, Junfeng Guo, Ziyang Xuan, Xiaoyun Wang,
	Guoyang Zhou, Dongdong Liu, Yisen Zhuang, Yuying Zhang,
	Beilei Xing, Jingjing Wu, Qiming Yang, Shijith Thotton,
	Srisivasubramanian Srinivasan, Long Li, Chaoyong He,
	Niklas Söderlund, Jiawen Wu, Rasesh Mody,
	Devendra Singh Rawat, Jerin Jacob, Maciej Czekaj, Jian Wang,
	Jochen Behrens, Andrew Rybchenko
  Cc: Thomas Monjalon, dev, stable

On 2/28/2023 1:24 AM, lihuisong (C) wrote:
> 
> 在 2023/2/27 17:57, Ferruh Yigit 写道:
>> On 2/27/2023 1:34 AM, lihuisong (C) wrote:
>>> 在 2023/2/24 0:04, Ferruh Yigit 写道:
>>>> 'rxmode.mq_mode' is an enum which should be an abstraction over values,
>>>> instead of mask it with 'RTE_ETH_MQ_RX_RSS_FLAG' to detect if RSS is
>>>> supported, directly compare with 'RTE_ETH_MQ_RX_RSS' enum element.
>>>>
>>>> Most of the time only 'RTE_ETH_MQ_RX_RSS' is requested by user, that is
>>>> why output is almost same, but there may be cases driver doesn't
>>>> support
>>>> RSS combinations, like 'RTE_ETH_MQ_RX_VMDQ_DCB_RSS' but that is hidden
>>>> by masking with 'RTE_ETH_MQ_RX_RSS_FLAG'.
>>> Hi Ferruh,
>>>
>>> It seems that this fully changes the usage of the mq_mode.
>>> It will cause RSS, DCB and VMDQ function cannot work well.
>>>
>>> For example,
>>> Both user and driver enable RSS and DCB functions based on xxx_DCB_FLAG
>>> and xxx_RSS_FLAG in rxmode.mq_mode.
>>> If we directly compare with 'RTE_ETH_MQ_RX_RSS' enum element now, how do
>>> we enable RSS+DCB mode?
>>>
>> Hi Huisong,
>>
>> Technically 'RSS+DCB' mode can be set by user setting 'rxmode.mq_mode'
>> to 'RTE_ETH_MQ_RX_DCB_RSS' and PMD checking the same.
> This is not a good way to use.
> Because this has a great impact on users and PMDs and will add
> cyclomatic complexity of PMD.
>>
>> Overall I think it is not good idea to use enum items as masked values,
> I agree what you do.
> It is better to change rxmode.mq_mode and txmode.mq_mode type from
> 'enum' to 'u32'.
> In this way, PMD code logic don't need to be modified and the impact on
> PMDs and user is minimal.
> What do you think?

If the bitmask feature of mq_mode is used and needed, I agree that changing
the underlying data type causes less disturbance in the logic.

But changing the underlying data type has ABI implications, so for now I
will drop this patch, thanks for the feedback.
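
For context, a small sketch of why the masked check and the plain enum
comparison differ for the combined modes (using the mode/flag definitions
from rte_ethdev.h):

#include <stdio.h>
#include <rte_ethdev.h>

static void
check(enum rte_eth_rx_mq_mode mq_mode)
{
	printf("mq_mode %d: flag check %d, enum check %d\n", (int)mq_mode,
	       (mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0,
	       mq_mode == RTE_ETH_MQ_RX_RSS);
}

int
main(void)
{
	check(RTE_ETH_MQ_RX_RSS);	/* both checks match */
	check(RTE_ETH_MQ_RX_DCB_RSS);	/* only the flag check matches */
	return 0;
}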

>> but that seems done intentionally in the past:
>> Commit 4bdefaade6d1 ("ethdev: VMDQ enhancements")
> Seems it was.
>>
>> Since this can be in use already, following patch only changes where
>> 'RTE_ETH_RX_OFFLOAD_RSS_HASH' is set, rest of the usage remaining same.
>>
>> And even for 'RTE_ETH_RX_OFFLOAD_RSS_HASH', I think intention was to
>> override this offload config in PMD when explicitly RSS mode is enabled,
>> but I made the set as RFC to get feedback on this. We may keep as it is
>> if some other modes with 'RTE_ETH_MQ_RX_RSS_FLAG' uses this offload.
>>
>>>> Fixes: 73fb89dd6a00 ("drivers/net: fix RSS hash offload flag if no
>>>> RSS")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@amd.com>
>>>>
>>>> ---
>>>>
>>>> There are more usage like "rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG" in
>>>> drivers, not sure to fix all in this commit or not, feedback welcomed.
>>>> ---
>>>>    drivers/net/bnxt/bnxt_ethdev.c       | 2 +-
>>>>    drivers/net/cxgbe/cxgbe_ethdev.c     | 2 +-
>>>>    drivers/net/e1000/igb_ethdev.c       | 4 ++--
>>>>    drivers/net/ena/ena_ethdev.c         | 2 +-
>>>>    drivers/net/enic/enic_ethdev.c       | 2 +-
>>>>    drivers/net/fm10k/fm10k_ethdev.c     | 2 +-
>>>>    drivers/net/gve/gve_ethdev.c         | 2 +-
>>>>    drivers/net/hinic/hinic_pmd_ethdev.c | 2 +-
>>>>    drivers/net/hns3/hns3_ethdev.c       | 2 +-
>>>>    drivers/net/hns3/hns3_ethdev_vf.c    | 2 +-
>>>>    drivers/net/i40e/i40e_ethdev.c       | 2 +-
>>>>    drivers/net/iavf/iavf_ethdev.c       | 2 +-
>>>>    drivers/net/ice/ice_dcf_ethdev.c     | 2 +-
>>>>    drivers/net/ice/ice_ethdev.c         | 2 +-
>>>>    drivers/net/igc/igc_ethdev.c         | 2 +-
>>>>    drivers/net/ixgbe/ixgbe_ethdev.c     | 4 ++--
>>>>    drivers/net/liquidio/lio_ethdev.c    | 2 +-
>>>>    drivers/net/mana/mana.c              | 2 +-
>>>>    drivers/net/netvsc/hn_ethdev.c       | 2 +-
>>>>    drivers/net/nfp/nfp_common.c         | 2 +-
>>>>    drivers/net/ngbe/ngbe_ethdev.c       | 2 +-
>>>>    drivers/net/qede/qede_ethdev.c       | 2 +-
>>>>    drivers/net/thunderx/nicvf_ethdev.c  | 2 +-
>>>>    drivers/net/txgbe/txgbe_ethdev.c     | 2 +-
>>>>    drivers/net/txgbe/txgbe_ethdev_vf.c  | 2 +-
>>>>    drivers/net/vmxnet3/vmxnet3_ethdev.c | 2 +-
>>>>    26 files changed, 28 insertions(+), 28 deletions(-)
>>>>
>>>> diff --git a/drivers/net/bnxt/bnxt_ethdev.c
>>>> b/drivers/net/bnxt/bnxt_ethdev.c
>>>> index 753e86b4b2af..14c0d5f8c72b 100644
>>>> --- a/drivers/net/bnxt/bnxt_ethdev.c
>>>> +++ b/drivers/net/bnxt/bnxt_ethdev.c
>>>> @@ -1143,7 +1143,7 @@ static int bnxt_dev_configure_op(struct
>>>> rte_eth_dev *eth_dev)
>>>>        bp->rx_cp_nr_rings = bp->rx_nr_rings;
>>>>        bp->tx_cp_nr_rings = bp->tx_nr_rings;
>>>>    -    if (eth_dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>        eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
>>>>    diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
>>>> b/drivers/net/cxgbe/cxgbe_ethdev.c
>>>> index 45bbeaef0ceb..0e9ccc0587ba 100644
>>>> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
>>>> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
>>>> @@ -440,7 +440,7 @@ int cxgbe_dev_configure(struct rte_eth_dev
>>>> *eth_dev)
>>>>          CXGBE_FUNC_TRACE();
>>>>    -    if (eth_dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            eth_dev->data->dev_conf.rxmode.offloads |=
>>>>                RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>    diff --git a/drivers/net/e1000/igb_ethdev.c
>>>> b/drivers/net/e1000/igb_ethdev.c
>>>> index 8858f975f8cc..8e6b43c2ff2d 100644
>>>> --- a/drivers/net/e1000/igb_ethdev.c
>>>> +++ b/drivers/net/e1000/igb_ethdev.c
>>>> @@ -1146,7 +1146,7 @@ eth_igb_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* multiple queue mode checking */
>>>> @@ -3255,7 +3255,7 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
>>>>        PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
>>>>                 dev->data->port_id);
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /*
>>>> diff --git a/drivers/net/ena/ena_ethdev.c
>>>> b/drivers/net/ena/ena_ethdev.c
>>>> index efcb163027c8..6929d7066fbd 100644
>>>> --- a/drivers/net/ena/ena_ethdev.c
>>>> +++ b/drivers/net/ena/ena_ethdev.c
>>>> @@ -2307,7 +2307,7 @@ static int ena_dev_configure(struct rte_eth_dev
>>>> *dev)
>>>>          adapter->state = ENA_ADAPTER_STATE_CONFIG;
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>        dev->data->dev_conf.txmode.offloads |=
>>>> RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
>>>>    diff --git a/drivers/net/enic/enic_ethdev.c
>>>> b/drivers/net/enic/enic_ethdev.c
>>>> index cdf091559196..f3a7bc161408 100644
>>>> --- a/drivers/net/enic/enic_ethdev.c
>>>> +++ b/drivers/net/enic/enic_ethdev.c
>>>> @@ -323,7 +323,7 @@ static int enicpmd_dev_configure(struct
>>>> rte_eth_dev *eth_dev)
>>>>            return ret;
>>>>        }
>>>>    -    if (eth_dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            eth_dev->data->dev_conf.rxmode.offloads |=
>>>>                RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>    diff --git a/drivers/net/fm10k/fm10k_ethdev.c
>>>> b/drivers/net/fm10k/fm10k_ethdev.c
>>>> index 8b83063f0a2d..49d7849ba5ea 100644
>>>> --- a/drivers/net/fm10k/fm10k_ethdev.c
>>>> +++ b/drivers/net/fm10k/fm10k_ethdev.c
>>>> @@ -450,7 +450,7 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* multiple queue mode checking */
>>>> diff --git a/drivers/net/gve/gve_ethdev.c
>>>> b/drivers/net/gve/gve_ethdev.c
>>>> index cf28a4a3b710..f34755a369fb 100644
>>>> --- a/drivers/net/gve/gve_ethdev.c
>>>> +++ b/drivers/net/gve/gve_ethdev.c
>>>> @@ -92,7 +92,7 @@ gve_dev_configure(struct rte_eth_dev *dev)
>>>>    {
>>>>        struct gve_priv *priv = dev->data->dev_private;
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          if (dev->data->dev_conf.rxmode.offloads &
>>>> RTE_ETH_RX_OFFLOAD_TCP_LRO)
>>>> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
>>>> b/drivers/net/hinic/hinic_pmd_ethdev.c
>>>> index 7aa5e7d8e929..872ee97b1e97 100644
>>>> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
>>>> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
>>>> @@ -311,7 +311,7 @@ static int hinic_dev_configure(struct rte_eth_dev
>>>> *dev)
>>>>            return -EINVAL;
>>>>        }
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* mtu size is 256~9600 */
>>>> diff --git a/drivers/net/hns3/hns3_ethdev.c
>>>> b/drivers/net/hns3/hns3_ethdev.c
>>>> index 6babf67fcec2..fd3e499a3d38 100644
>>>> --- a/drivers/net/hns3/hns3_ethdev.c
>>>> +++ b/drivers/net/hns3/hns3_ethdev.c
>>>> @@ -2016,7 +2016,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
>>>>                goto cfg_err;
>>>>        }
>>>>    -    if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
>>>> +    if (mq_mode == RTE_ETH_MQ_RX_RSS) {
>>>>            conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>            rss_conf = conf->rx_adv_conf.rss_conf;
>>>>            ret = hns3_dev_rss_hash_update(dev, &rss_conf);
>>>> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
>>>> b/drivers/net/hns3/hns3_ethdev_vf.c
>>>> index d051a1357b9f..00eb22d05558 100644
>>>> --- a/drivers/net/hns3/hns3_ethdev_vf.c
>>>> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
>>>> @@ -494,7 +494,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>>>>        }
>>>>          /* When RSS is not configured, redirect the packet queue 0 */
>>>> -    if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
>>>> +    if (mq_mode == RTE_ETH_MQ_RX_RSS) {
>>>>            conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>            rss_conf = conf->rx_adv_conf.rss_conf;
>>>>            ret = hns3_dev_rss_hash_update(dev, &rss_conf);
>>>> diff --git a/drivers/net/i40e/i40e_ethdev.c
>>>> b/drivers/net/i40e/i40e_ethdev.c
>>>> index 7726a89d99fb..3c3dbc285c96 100644
>>>> --- a/drivers/net/i40e/i40e_ethdev.c
>>>> +++ b/drivers/net/i40e/i40e_ethdev.c
>>>> @@ -1884,7 +1884,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
>>>>        ad->tx_simple_allowed = true;
>>>>        ad->tx_vec_allowed = true;
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          ret = i40e_dev_init_vlan(dev);
>>>> diff --git a/drivers/net/iavf/iavf_ethdev.c
>>>> b/drivers/net/iavf/iavf_ethdev.c
>>>> index 3196210f2c1d..39860c08b606 100644
>>>> --- a/drivers/net/iavf/iavf_ethdev.c
>>>> +++ b/drivers/net/iavf/iavf_ethdev.c
>>>> @@ -638,7 +638,7 @@ iavf_dev_configure(struct rte_eth_dev *dev)
>>>>        ad->rx_vec_allowed = true;
>>>>        ad->tx_vec_allowed = true;
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* Large VF setting */
>>>> diff --git a/drivers/net/ice/ice_dcf_ethdev.c
>>>> b/drivers/net/ice/ice_dcf_ethdev.c
>>>> index dcbf2af5b039..f61a30716e5e 100644
>>>> --- a/drivers/net/ice/ice_dcf_ethdev.c
>>>> +++ b/drivers/net/ice/ice_dcf_ethdev.c
>>>> @@ -711,7 +711,7 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
>>>>        ad->rx_bulk_alloc_allowed = true;
>>>>        ad->tx_simple_allowed = true;
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          return 0;
>>>> diff --git a/drivers/net/ice/ice_ethdev.c
>>>> b/drivers/net/ice/ice_ethdev.c
>>>> index 0d011bbffa77..96595fd7afaf 100644
>>>> --- a/drivers/net/ice/ice_ethdev.c
>>>> +++ b/drivers/net/ice/ice_ethdev.c
>>>> @@ -3403,7 +3403,7 @@ ice_dev_configure(struct rte_eth_dev *dev)
>>>>        ad->rx_bulk_alloc_allowed = true;
>>>>        ad->tx_simple_allowed = true;
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          if (dev->data->nb_rx_queues) {
>>>> diff --git a/drivers/net/igc/igc_ethdev.c
>>>> b/drivers/net/igc/igc_ethdev.c
>>>> index fab2ab6d1ce7..49f2b3738b84 100644
>>>> --- a/drivers/net/igc/igc_ethdev.c
>>>> +++ b/drivers/net/igc/igc_ethdev.c
>>>> @@ -375,7 +375,7 @@ eth_igc_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          ret  = igc_check_mq_mode(dev);
>>>> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
>>>> b/drivers/net/ixgbe/ixgbe_ethdev.c
>>>> index 88118bc30560..328ccf918e86 100644
>>>> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
>>>> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
>>>> @@ -2431,7 +2431,7 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* multiple queue mode checking */
>>>> @@ -5321,7 +5321,7 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
>>>>        PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
>>>>                 dev->data->port_id);
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /*
>>>> diff --git a/drivers/net/liquidio/lio_ethdev.c
>>>> b/drivers/net/liquidio/lio_ethdev.c
>>>> index ebcfbb1a5c0f..07fbaeda1ee6 100644
>>>> --- a/drivers/net/liquidio/lio_ethdev.c
>>>> +++ b/drivers/net/liquidio/lio_ethdev.c
>>>> @@ -1722,7 +1722,7 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (eth_dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            eth_dev->data->dev_conf.rxmode.offloads |=
>>>>                RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>    diff --git a/drivers/net/mana/mana.c b/drivers/net/mana/mana.c
>>>> index 43221e743e87..76de691a8252 100644
>>>> --- a/drivers/net/mana/mana.c
>>>> +++ b/drivers/net/mana/mana.c
>>>> @@ -78,7 +78,7 @@ mana_dev_configure(struct rte_eth_dev *dev)
>>>>        struct mana_priv *priv = dev->data->dev_private;
>>>>        struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
>>>>    -    if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
>>>> diff --git a/drivers/net/netvsc/hn_ethdev.c
>>>> b/drivers/net/netvsc/hn_ethdev.c
>>>> index d0bbc0a4c0c0..4950b061799c 100644
>>>> --- a/drivers/net/netvsc/hn_ethdev.c
>>>> +++ b/drivers/net/netvsc/hn_ethdev.c
>>>> @@ -721,7 +721,7 @@ static int hn_dev_configure(struct rte_eth_dev
>>>> *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
>>>> diff --git a/drivers/net/nfp/nfp_common.c
>>>> b/drivers/net/nfp/nfp_common.c
>>>> index 907777a9e44d..a774fad3fba2 100644
>>>> --- a/drivers/net/nfp/nfp_common.c
>>>> +++ b/drivers/net/nfp/nfp_common.c
>>>> @@ -161,7 +161,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
>>>>        rxmode = &dev_conf->rxmode;
>>>>        txmode = &dev_conf->txmode;
>>>>    -    if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* Checking TX mode */
>>>> diff --git a/drivers/net/ngbe/ngbe_ethdev.c
>>>> b/drivers/net/ngbe/ngbe_ethdev.c
>>>> index c32d954769b0..5b53781c4aaf 100644
>>>> --- a/drivers/net/ngbe/ngbe_ethdev.c
>>>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>>>> @@ -918,7 +918,7 @@ ngbe_dev_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* set flag to update link status after init */
>>>> diff --git a/drivers/net/qede/qede_ethdev.c
>>>> b/drivers/net/qede/qede_ethdev.c
>>>> index a4923670d6ba..11ddd8abf16a 100644
>>>> --- a/drivers/net/qede/qede_ethdev.c
>>>> +++ b/drivers/net/qede/qede_ethdev.c
>>>> @@ -1272,7 +1272,7 @@ static int qede_dev_configure(struct rte_eth_dev
>>>> *eth_dev)
>>>>          PMD_INIT_FUNC_TRACE(edev);
>>>>    -    if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* We need to have min 1 RX queue.There is no min check in
>>>> diff --git a/drivers/net/thunderx/nicvf_ethdev.c
>>>> b/drivers/net/thunderx/nicvf_ethdev.c
>>>> index ab1e714d9767..b9cd09332510 100644
>>>> --- a/drivers/net/thunderx/nicvf_ethdev.c
>>>> +++ b/drivers/net/thunderx/nicvf_ethdev.c
>>>> @@ -1984,7 +1984,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          if (!rte_eal_has_hugepages()) {
>>>> diff --git a/drivers/net/txgbe/txgbe_ethdev.c
>>>> b/drivers/net/txgbe/txgbe_ethdev.c
>>>> index a502618bc5a2..08ad5a087e23 100644
>>>> --- a/drivers/net/txgbe/txgbe_ethdev.c
>>>> +++ b/drivers/net/txgbe/txgbe_ethdev.c
>>>> @@ -1508,7 +1508,7 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /* multiple queue mode checking */
>>>> diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c
>>>> b/drivers/net/txgbe/txgbe_ethdev_vf.c
>>>> index 3b1f7c913b7b..02a59fc696e5 100644
>>>> --- a/drivers/net/txgbe/txgbe_ethdev_vf.c
>>>> +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
>>>> @@ -577,7 +577,7 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
>>>>        PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
>>>>                 dev->data->port_id);
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          /*
>>>> diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c
>>>> b/drivers/net/vmxnet3/vmxnet3_ethdev.c
>>>> index fd946dec5c80..8efde46ae0ad 100644
>>>> --- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
>>>> +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
>>>> @@ -531,7 +531,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
>>>>          PMD_INIT_FUNC_TRACE();
>>>>    -    if (dev->data->dev_conf.rxmode.mq_mode &
>>>> RTE_ETH_MQ_RX_RSS_FLAG)
>>>> +    if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
>>>>            dev->data->dev_conf.rxmode.offloads |=
>>>> RTE_ETH_RX_OFFLOAD_RSS_HASH;
>>>>          if (!VMXNET3_VERSION_GE_6(hw)) {
>> .


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v7 01/21] net/cpfl: support device initialization
  @ 2023-02-27 23:38  3%       ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-02-27 23:38 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko, Jerin Jacob Kollanukkaran,
	Qi Z Zhang, David Marchand
  Cc: dev, Mingxia Liu, yuying.zhang, beilei.xing, techboard

On 2/27/2023 3:45 PM, Thomas Monjalon wrote:
> 27/02/2023 14:46, Ferruh Yigit:
>> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
>>> +static int
>>> +cpfl_dev_configure(struct rte_eth_dev *dev)
>>> +{
>>> +	struct rte_eth_conf *conf = &dev->data->dev_conf;
>>> +
>>> +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
>>> +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
>>> +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
>>> +			     conf->txmode.mq_mode);
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->lpbk_mode != 0) {
>>> +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
>>> +			     conf->lpbk_mode);
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->dcb_capability_en != 0) {
>>> +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->intr_conf.lsc != 0) {
>>> +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->intr_conf.rxq != 0) {
>>> +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->intr_conf.rmv != 0) {
>>> +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	return 0;
>>
>> This is '.dev_configure()' dev ops of a driver, there is nothing wrong
>> with the function but it is a good example to highlight a point.
>>
>>
>> 'rte_eth_dev_configure()' can fail for various reasons, what can an
>> application do in this case?
>> It is not clear why configuration failed, there is no way to figure out
>> failed config option dynamically.
> 
> There are some capabilities to read before calling "configure".
> 

Yes, but there are some PMD-specific cases as well; for example, above,
SPEED_FIXED is not supported. How can an app manage this?

Mainly "struct rte_eth_dev_info" is used for capabilities (although it
is a mixed bag), that is not symmetric with config/setup functions, I
mean for a config/setup function there is no clear matching capability
struct/function.
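
For reference, what an application can check today looks roughly like
below; note there is no equivalent capability field for several of the
options that configure() may reject (lpbk_mode, intr_conf.rxq, ...):

#include <rte_ethdev.h>

static int
rss_hash_offload_supported(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	return (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH) != 0;
}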

>> Application developer can read the log and find out what caused the
>> failure, but what can they do next? Put a conditional check for the
>> particular device, assuming application supports multiple devices,
>> before configuration?
> 
> Which failures cannot be guessed with capability flags?
> 

At least for the above sample, as far as I can see, some capabilities are missing:
- txmode.mq_mode
- rxmode.mq_mode
- lpbk_mode
- intr_conf.rxq

We can go through the whole list to detect gaps if we plan to take action.

>> I think we need better error value, to help application detect what went
>> wrong and adapt dynamically, perhaps a bitmask of errors one per each
>> config option, what do you think?
> 
> I am not sure we can change such an old API.
> 

Yes, that is hard, but if we keep the return value negative, it can
still be backward compatible.

Or the API can keep the same interface but set a global 'reason' variable,
similar to 'errno', so that new application code can optionally retrieve it
with a new API and investigate it.

>> And I think this is another reason why we should not make a single API
>> too overloaded and complex.
> 
> Right, and I would support a work to have some of those "configure" features
> available as small functions.
> 

If there is enough appetite, we can put something in the deprecation notice
for the next ABI release.


^ permalink raw reply	[relevance 3%]

* RE: [EXT] Re: [PATCH v11 1/4] lib: add generic support for reading PMU events
  2023-02-21  0:48  3%                     ` Konstantin Ananyev
@ 2023-02-27  8:12  0%                       ` Tomasz Duszynski
  0 siblings, 0 replies; 200+ results
From: Tomasz Duszynski @ 2023-02-27  8:12 UTC (permalink / raw)
  To: Konstantin Ananyev, Konstantin Ananyev, dev



>-----Original Message-----
>From: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
>Sent: Tuesday, February 21, 2023 1:48 AM
>To: Tomasz Duszynski <tduszynski@marvell.com>; Konstantin Ananyev <konstantin.ananyev@huawei.com>;
>dev@dpdk.org
>Subject: Re: [EXT] Re: [PATCH v11 1/4] lib: add generic support for reading PMU events
>
>
>>>>>>>>>> diff --git a/lib/pmu/rte_pmu.h b/lib/pmu/rte_pmu.h new file
>>>>>>>>>> mode
>>>>>>>>>> 100644 index 0000000000..6b664c3336
>>>>>>>>>> --- /dev/null
>>>>>>>>>> +++ b/lib/pmu/rte_pmu.h
>>>>>>>>>> @@ -0,0 +1,212 @@
>>>>>>>>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>>>>>>>>> + * Copyright(c) 2023 Marvell  */
>>>>>>>>>> +
>>>>>>>>>> +#ifndef _RTE_PMU_H_
>>>>>>>>>> +#define _RTE_PMU_H_
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * @file
>>>>>>>>>> + *
>>>>>>>>>> + * PMU event tracing operations
>>>>>>>>>> + *
>>>>>>>>>> + * This file defines generic API and types necessary to setup
>>>>>>>>>> +PMU and
>>>>>>>>>> + * read selected counters in runtime.
>>>>>>>>>> + */
>>>>>>>>>> +
>>>>>>>>>> +#ifdef __cplusplus
>>>>>>>>>> +extern "C" {
>>>>>>>>>> +#endif
>>>>>>>>>> +
>>>>>>>>>> +#include <linux/perf_event.h>
>>>>>>>>>> +
>>>>>>>>>> +#include <rte_atomic.h>
>>>>>>>>>> +#include <rte_branch_prediction.h> #include <rte_common.h>
>>>>>>>>>> +#include <rte_compat.h> #include <rte_spinlock.h>
>>>>>>>>>> +
>>>>>>>>>> +/** Maximum number of events in a group */ #define
>>>>>>>>>> +MAX_NUM_GROUP_EVENTS 8
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * A structure describing a group of events.
>>>>>>>>>> + */
>>>>>>>>>> +struct rte_pmu_event_group {
>>>>>>>>>> +	struct perf_event_mmap_page
>>>>>>>>>> +*mmap_pages[MAX_NUM_GROUP_EVENTS];
>>>>>>>>>> +/**< array of user pages
>>>>>>> */
>>>>>>>>>> +	int fds[MAX_NUM_GROUP_EVENTS]; /**< array of event descriptors */
>>>>>>>>>> +	bool enabled; /**< true if group was enabled on particular lcore */
>>>>>>>>>> +	TAILQ_ENTRY(rte_pmu_event_group) next; /**< list entry */ }
>>>>>>>>>> +__rte_cache_aligned;
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * A structure describing an event.
>>>>>>>>>> + */
>>>>>>>>>> +struct rte_pmu_event {
>>>>>>>>>> +	char *name; /**< name of an event */
>>>>>>>>>> +	unsigned int index; /**< event index into fds/mmap_pages */
>>>>>>>>>> +	TAILQ_ENTRY(rte_pmu_event) next; /**< list entry */ };
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * A PMU state container.
>>>>>>>>>> + */
>>>>>>>>>> +struct rte_pmu {
>>>>>>>>>> +	char *name; /**< name of core PMU listed under /sys/bus/event_source/devices */
>>>>>>>>>> +	rte_spinlock_t lock; /**< serialize access to event group list */
>>>>>>>>>> +	TAILQ_HEAD(, rte_pmu_event_group) event_group_list; /**< list of event groups */
>>>>>>>>>> +	unsigned int num_group_events; /**< number of events in a group */
>>>>>>>>>> +	TAILQ_HEAD(, rte_pmu_event) event_list; /**< list of matching events */
>>>>>>>>>> +	unsigned int initialized; /**< initialization counter */ };
>>>>>>>>>> +
>>>>>>>>>> +/** lcore event group */
>>>>>>>>>> +RTE_DECLARE_PER_LCORE(struct rte_pmu_event_group,
>>>>>>>>>> +_event_group);
>>>>>>>>>> +
>>>>>>>>>> +/** PMU state container */
>>>>>>>>>> +extern struct rte_pmu rte_pmu;
>>>>>>>>>> +
>>>>>>>>>> +/** Each architecture supporting PMU needs to provide its own
>>>>>>>>>> +version */ #ifndef rte_pmu_pmc_read #define
>>>>>>>>>> +rte_pmu_pmc_read(index) ({ 0; }) #endif
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * @warning
>>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>>> + *
>>>>>>>>>> + * Read PMU counter.
>>>>>>>>>> + *
>>>>>>>>>> + * @warning This should be not called directly.
>>>>>>>>>> + *
>>>>>>>>>> + * @param pc
>>>>>>>>>> + *   Pointer to the mmapped user page.
>>>>>>>>>> + * @return
>>>>>>>>>> + *   Counter value read from hardware.
>>>>>>>>>> + */
>>>>>>>>>> +static __rte_always_inline uint64_t
>>>>>>>>>> +__rte_pmu_read_userpage(struct perf_event_mmap_page *pc) {
>>>>>>>>>> +	uint64_t width, offset;
>>>>>>>>>> +	uint32_t seq, index;
>>>>>>>>>> +	int64_t pmc;
>>>>>>>>>> +
>>>>>>>>>> +	for (;;) {
>>>>>>>>>> +		seq = pc->lock;
>>>>>>>>>> +		rte_compiler_barrier();
>>>>>>>>>
>>>>>>>>> Are you sure that compiler_barrier() is enough here?
>>>>>>>>> On some archs CPU itself has freedom to re-order reads.
>>>>>>>>> Or I am missing something obvious here?
>>>>>>>>>
>>>>>>>>
>>>>>>>> It's a matter of not keeping old stuff cached in registers and
>>>>>>>> making sure that we have two reads of lock. CPU reordering won't
>>>>>>>> do any harm here.
>>>>>>>
>>>>>>> Sorry, I didn't get you here:
>>>>>>> Suppose CPU will re-order reads and will read lock *after* index or offset value.
>>>>>>> Wouldn't it mean that in that case index and/or offset can contain old/invalid values?
>>>>>>>
>>>>>>
>>>>>> This number is just an indicator whether kernel did change something or not.
>>>>>
>>>>> You are talking about pc->lock, right?
>>>>> Yes, I do understand that it is sort of seqlock.
>>>>> That's why I am puzzled why we do not care about possible cpu read-reordering.
>>>>> Manual for perf_event_open() also has a code snippet with compiler barrier only...
>>>>>
>>>>>> If cpu reordering will come into play then this will not change
>>>>>> anything from pov of this
>>> loop.
>>>>>> All we want is fresh data when needed and no involvement of
>>>>>> compiler when it comes to reordering code.
>>>>>
>>>>> Ok, can you probably explain to me why the following could not happen:
>>>>> T0:
>>>>> pc->seqlock==0; pc->index==I1; pc->offset==O1;
>>>>> T1:
>>>>>       cpu #0 read pmu (due to cpu read reorder, we get index value before seqlock):
>>>>>        index=pc->index;  //index==I1;
>>>>> T2:
>>>>>       cpu #1 kernel vent_update_userpage:
>>>>>       pc->lock++; // pc->lock==1
>>>>>       pc->index=I2;
>>>>>       pc->offset=O2;
>>>>>       ...
>>>>>       pc->lock++; //pc->lock==2
>>>>> T3:
>>>>>       cpu #0 continue with read pmu:
>>>>>       seq=pc->lock; //seq == 2
>>>>>        offset=pc->offset; // offset == O2
>>>>>        ....
>>>>>        pmc = rte_pmu_pmc_read(index - 1);  // Note that we read at I1, not I2
>>>>>        offset += pmc; //offset == O2 + pmcread(I1-1);
>>>>>        if (pc->lock == seq) // they are equal, return
>>>>>              return offset;
>>>>>
>>>>> Or, it can happen, but by some reason we don't care much?
>>>>>
>>>>
>>>> This code does self-monitoring and user page (whole group actually)
>>>> is per thread running on current cpu. Hence I am not sure what are
>>>> you trying to prove with that
>>> example.
>>>
>>> I am not trying to prove anything so far.
>>> I am asking is such situation possible or not, and if not, why?
>>> My current understanding (possibly wrong) is that after you mmaped
>>> these pages, kernel still can asynchronously update them.
>>> So, when reading the data from these pages you have to check 'lock'
>>> value before and after accessing other data.
>>> If so, why possible cpu read-reordering doesn't matter?
>>>
>>
>> Look. I'll reiterate that.
>>
>> 1. That user page/group/PMU config is per process. Other processes do not access that.
>
>Ok, that's clear.
>
>
>>     All this happens on the very same CPU where current thread is running.
>
>Ok... but can't this page be updated by kernel thread running simultaneously on different CPU?
>

I already pointed out that the event/counter configuration is bound to the current CPU.
How could another CPU possibly update that configuration? This cannot work.


If you think there is some problem with the code (or that it is simply broken on your setup),
that the logic has an obvious flaw, and you can provide meaningful evidence of that, then I'd
be more than happy to apply a fix. Otherwise this discussion will get us nowhere.
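
For reference, the loop in question is just a seqcount-style retry. A sketch with the read
ordering spelled out explicitly (illustrative only; the submitted patch relies on
rte_compiler_barrier(), and the function name below is a stand-in) looks like this:

/* Illustrative sketch only, not the submitted code. Width masking of the
 * raw counter value is omitted for brevity.
 */
static uint64_t
pmu_userpage_read_sketch(struct perf_event_mmap_page *pc)
{
	uint64_t offset, pmc = 0;
	uint32_t seq, index;

	do {
		seq = pc->lock;		/* read the sequence counter first */
		rte_smp_rmb();		/* order lock read before index/offset reads */

		index = pc->index;
		offset = pc->offset;
		if (pc->cap_user_rdpmc && index)
			pmc = rte_pmu_pmc_read(index - 1);

		rte_smp_rmb();		/* order data reads before re-checking lock */
	} while (pc->lock != seq);	/* kernel touched the page meanwhile: retry */

	return offset + pmc;
}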

>
>> 2. Suppose you've already read seq. Now for some reason kernel updates data in page seq was read
>from.
>> 3. Kernel will enter critical section during update. seq changes along with other data without
>app knowing about it.
>>     If you want nitty gritty details consult kernel sources.
>
>Look, I don't have to beg you to answer these questions.
>In fact, I expect library author to document all such narrow things
>clearly either in in PG, or in source code comments (ideally in both).
>If not, then from my perspective the patch is not ready stage and
>shouldn't be accepted.
>I don't know is compiler-barrier is enough here or not, but I think it
>is definitely worth a clear explanation in the docs.
>I suppose it wouldn't be only me who will get confused here.
>So please take an effort and document it clearly why you believe there
>is no race-condition.
>
>> 4. app resumes and has some stale data but *WILL* read new seq. Code loops again because values
>do not match.
>
>If the kernel will always execute update for this page in the same
>thread context, then yes, - user code will always note the difference
>after resume.
>But why it can't happen that your user-thread reads this page on one
>CPU, while some kernel code on other CPU updates it simultaneously?
>
>
>> 5. Otherwise seq values match and data is valid.
>>
>>> Also there was another question below, which you probably  missed, so I copied it here:
>>> Another question - do we really need  to have __rte_pmu_read_userpage() and rte_pmu_read() as
>>> static inline functions in public header?
>>> As I understand, because of that we also have to make 'struct rte_pmu_*'
>>> definitions also public.
>>>
>>
>> These functions need to be inlined otherwise performance takes a hit.
>
>I understand that perfomance might be affected, but how big is hit?
>I expect actual PMU read will not be free anyway, right?
>If the diff is small, might be it is worth to go for such change,
>removing unneeded structures from public headers would help a lot in
>future in terms of ABI/API stability.
>
>
>
>>>>
>>>>>>>>
>>>>>>>>>> +		index = pc->index;
>>>>>>>>>> +		offset = pc->offset;
>>>>>>>>>> +		width = pc->pmc_width;
>>>>>>>>>> +
>>>>>>>>>> +		/* index set to 0 means that particular counter cannot be used */
>>>>>>>>>> +		if (likely(pc->cap_user_rdpmc && index)) {
>>>>>>>>>> +			pmc = rte_pmu_pmc_read(index - 1);
>>>>>>>>>> +			pmc <<= 64 - width;
>>>>>>>>>> +			pmc >>= 64 - width;
>>>>>>>>>> +			offset += pmc;
>>>>>>>>>> +		}
>>>>>>>>>> +
>>>>>>>>>> +		rte_compiler_barrier();
>>>>>>>>>> +
>>>>>>>>>> +		if (likely(pc->lock == seq))
>>>>>>>>>> +			return offset;
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	return 0;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * @warning
>>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>>> + *
>>>>>>>>>> + * Enable group of events on the calling lcore.
>>>>>>>>>> + *
>>>>>>>>>> + * @warning This should be not called directly.
>>>>>>>>>> + *
>>>>>>>>>> + * @return
>>>>>>>>>> + *   0 in case of success, negative value otherwise.
>>>>>>>>>> + */
>>>>>>>>>> +__rte_experimental
>>>>>>>>>> +int
>>>>>>>>>> +__rte_pmu_enable_group(void);
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * @warning
>>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>>> + *
>>>>>>>>>> + * Initialize PMU library.
>>>>>>>>>> + *
>>>>>>>>>> + * @warning This should be not called directly.
>>>>>>>>>> + *
>>>>>>>>>> + * @return
>>>>>>>>>> + *   0 in case of success, negative value otherwise.
>>>>>>>>>> + */
>>>>>>>>>> +__rte_experimental
>>>>>>>>>> +int
>>>>>>>>>> +rte_pmu_init(void);
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * @warning
>>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>>> + *
>>>>>>>>>> + * Finalize PMU library. This should be called after PMU
>>>>>>>>>> +counters are no longer being
>>>>> read.
>>>>>>>>>> + */
>>>>>>>>>> +__rte_experimental
>>>>>>>>>> +void
>>>>>>>>>> +rte_pmu_fini(void);
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * @warning
>>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>>> + *
>>>>>>>>>> + * Add event to the group of enabled events.
>>>>>>>>>> + *
>>>>>>>>>> + * @param name
>>>>>>>>>> + *   Name of an event listed under /sys/bus/event_source/devices/pmu/events.
>>>>>>>>>> + * @return
>>>>>>>>>> + *   Event index in case of success, negative value otherwise.
>>>>>>>>>> + */
>>>>>>>>>> +__rte_experimental
>>>>>>>>>> +int
>>>>>>>>>> +rte_pmu_add_event(const char *name);
>>>>>>>>>> +
>>>>>>>>>> +/**
>>>>>>>>>> + * @warning
>>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>>> + *
>>>>>>>>>> + * Read hardware counter configured to count occurrences of an event.
>>>>>>>>>> + *
>>>>>>>>>> + * @param index
>>>>>>>>>> + *   Index of an event to be read.
>>>>>>>>>> + * @return
>>>>>>>>>> + *   Event value read from register. In case of errors or lack of support
>>>>>>>>>> + *   0 is returned. In other words, stream of zeros in a trace file
>>>>>>>>>> + *   indicates problem with reading particular PMU event register.
>>>>>>>>>> + */
>>>>>
>>>>> Another question - do we really need  to have
>>>>> __rte_pmu_read_userpage() and rte_pmu_read() as static inline functions in public header?
>>>>> As I understand, because of that we also have to make 'struct rte_pmu_*'
>>>>> definitions also public.
>>>>>
>>>>>>>>>> +__rte_experimental
>>>>>>>>>> +static __rte_always_inline uint64_t rte_pmu_read(unsigned
>>>>>>>>>> +int
>>>>>>>>>> +index) {
>>>>>>>>>> +	struct rte_pmu_event_group *group = &RTE_PER_LCORE(_event_group);
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	if (unlikely(!rte_pmu.initialized))
>>>>>>>>>> +		return 0;
>>>>>>>>>> +
>>>>>>>>>> +	if (unlikely(!group->enabled)) {
>>>>>>>>>> +		ret = __rte_pmu_enable_group();
>>>>>>>>>> +		if (ret)
>>>>>>>>>> +			return 0;
>>>>>>>>>> +	}
>>>>>>>>>> +
>>>>>>>>>> +	if (unlikely(index >= rte_pmu.num_group_events))
>>>>>>>>>> +		return 0;
>>>>>>>>>> +
>>>>>>>>>> +	return __rte_pmu_read_userpage(group->mmap_pages[index]);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>> +#ifdef __cplusplus
>>>>>>>>>> +}
>>>>>>>>>> +#endif
>>>>>>>>>> +
>>


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
  2023-02-24  6:31  0%     ` Yan, Zhirun
@ 2023-02-26 22:23  0%       ` Jerin Jacob
  2023-03-02  8:38  0%         ` Yan, Zhirun
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-02-26 22:23 UTC (permalink / raw)
  To: Yan, Zhirun
  Cc: dev, jerinj, kirankumark, ndabilpuram, Liang, Cunming, Wang, Haiyue

On Fri, Feb 24, 2023 at 12:01 PM Yan, Zhirun <zhirun.yan@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Monday, February 20, 2023 9:51 PM
> > To: Yan, Zhirun <zhirun.yan@intel.com>
> > Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> > ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>; Wang,
> > Haiyue <haiyue.wang@intel.com>
> > Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
> >
> > On Thu, Nov 17, 2022 at 10:40 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
> > >
> > > Add new get/set APIs to configure graph worker model which is used to
> > > determine which model will be chosen.
> > >
> > > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > > Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> > > ---
> > >  lib/graph/rte_graph_worker.h        | 51 +++++++++++++++++++++++++++++
> > >  lib/graph/rte_graph_worker_common.h | 13 ++++++++
> > >  lib/graph/version.map               |  3 ++
> > >  3 files changed, 67 insertions(+)
> > >
> > > diff --git a/lib/graph/rte_graph_worker.h
> > > b/lib/graph/rte_graph_worker.h index 54d1390786..a0ea0df153 100644
> > > --- a/lib/graph/rte_graph_worker.h
> > > +++ b/lib/graph/rte_graph_worker.h
> > > @@ -1,5 +1,56 @@
> > >  #include "rte_graph_model_rtc.h"
> > >
> > > +static enum rte_graph_worker_model worker_model =
> > > +RTE_GRAPH_MODEL_DEFAULT;
> >
> > This will break the multiprocess.
>
> Thanks. I will use TLS for per-thread local storage.

If it needs to be used from a secondary process, then it needs to come from a memzone.
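
Something along these lines (rough sketch; the memzone name and helper are made up for
illustration):

#include <rte_eal.h>
#include <rte_memzone.h>

#define GRAPH_MODEL_MZ "rte_graph_worker_model"

/* Rough sketch: keep the model in shared memory so primary and secondary
 * processes observe the same value.
 */
static enum rte_graph_worker_model *
graph_model_shared_get(void)
{
	const struct rte_memzone *mz;

	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
		mz = rte_memzone_reserve(GRAPH_MODEL_MZ,
					 sizeof(enum rte_graph_worker_model),
					 SOCKET_ID_ANY, 0);
	else
		mz = rte_memzone_lookup(GRAPH_MODEL_MZ);

	return mz ? mz->addr : NULL;
}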



>
> >
> > > +
> > > +/** Graph worker models */
> > > +enum rte_graph_worker_model {
> > > +#define WORKER_MODEL_DEFAULT "default"
> >
> > Why need strings?
> > Also, every symbol in a public header file should start with RTE_ to avoid
> > namespace conflict.
>
> It was used to config the model in app. I can put the string into example.

OK

>
> >
> > > +       RTE_GRAPH_MODEL_DEFAULT = 0,
> > > +#define WORKER_MODEL_RTC "rtc"
> > > +       RTE_GRAPH_MODEL_RTC,
> >
> > Why not RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT in enum
> > itself.
> Yes, will do in next version.
>
> >
> > > +#define WORKER_MODEL_GENERIC "generic"
> >
> > Generic is a very overloaded term. Use pipeline here i.e
> > RTE_GRAPH_MODEL_PIPELINE
>
> Actually, it's not a purely pipeline mode. I prefer to change to hybrid.

Hybrid is a very overloaded term, and it will be confusing (considering
there will be new models in the future).
Please pick a word that really expresses how the model works.

> >
> >
> > > +       RTE_GRAPH_MODEL_GENERIC,
> > > +       RTE_GRAPH_MODEL_MAX,
> >
> > No need for MAX, it will break the ABI for future. See other subsystem such as
> > cryptodev.
>
> Thanks, I will change it.
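
i.e. roughly (sketch only; the second enumerator name is just a placeholder):

/* Sketch only: no *_MAX sentinel, so new models can be appended in future
 * releases without breaking the ABI.
 */
enum rte_graph_worker_model {
	RTE_GRAPH_MODEL_RTC = 0,	/* run-to-completion */
	RTE_GRAPH_MODEL_DISPATCH,	/* placeholder name for the new model */
	RTE_GRAPH_MODEL_DEFAULT = RTE_GRAPH_MODEL_RTC,
};
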
> >
> > > +};
> >
> > >

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2] vhost: fix madvise arguments alignment
  2023-02-23 16:57  0%     ` Mike Pattrick
@ 2023-02-24 15:05  4%       ` Patrick Robb
  0 siblings, 0 replies; 200+ results
From: Patrick Robb @ 2023-02-24 15:05 UTC (permalink / raw)
  To: Mike Pattrick; +Cc: Maxime Coquelin, dev, david.marchand, chenbo.xia

[-- Attachment #1: Type: text/plain, Size: 16088 bytes --]

UNH CI detected an ABI failure for this patch, but it was not reported due to a
bug on our end, so I'm manually reporting it now. I see you already
predicted the issue though, Maxime!

07:58:32  1 function with some indirect sub-type change:
07:58:32
07:58:32    [C] 'function int rte_vhost_get_mem_table(int, rte_vhost_memory**)' at vhost.c:922:1 has some indirect sub-type changes:
07:58:32      parameter 2 of type 'rte_vhost_memory**' has sub-type changes:
07:58:32        in pointed to type 'rte_vhost_memory*':
07:58:32          in pointed to type 'struct rte_vhost_memory' at rte_vhost.h:145:1:
07:58:32            type size hasn't changed
07:58:32            1 data member change:
07:58:32              type of 'rte_vhost_mem_region regions[]' changed:
07:58:32                array element type 'struct rte_vhost_mem_region' changed:
07:58:32                  type size changed from 448 to 512 (in bits)
07:58:32                  1 data member insertion:
07:58:32                    'uint64_t alignment', at offset 448 (in bits) at rte_vhost.h:139:1
07:58:32                  type size hasn't changed
07:58:32
07:58:32  Error: ABI issue reported for abidiff --suppr dpdk/devtools/libabigail.abignore --no-added-syms --headers-dir1 reference/include --headers-dir2 build_install/include reference/dump/librte_vhost.dump build_install/dump/librte_vhost.dump
07:58:32  ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue).


On Thu, Feb 23, 2023 at 11:57 AM Mike Pattrick <mkp@redhat.com> wrote:

> On Thu, Feb 23, 2023 at 11:12 AM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
> >
> > Hi Mike,
> >
> > Thanks for  looking into this issue.
> >
> > On 2/23/23 05:35, Mike Pattrick wrote:
> > > The arguments passed to madvise should be aligned to the alignment of
> > > the backing memory. Now we keep track of each regions alignment and use
> > > then when setting coredump preferences. To facilitate this, a new
> member
> > > was added to rte_vhost_mem_region. A new function was added to easily
> > > translate memory address back to region alignment. Unneeded calls to
> > > madvise were reduced, as the cache removal case should already be
> > > covered by the cache insertion case. The previously inline function
> > > mem_set_dump was removed from a header file and made not inline.
> > >
> > > Fixes: 338ad77c9ed3 ("vhost: exclude VM hugepages from coredumps")
> > >
> > > Signed-off-by: Mike Pattrick <mkp@redhat.com>
> > > ---
> > > Since v1:
> > >   - Corrected a cast for 32bit compiles
> > > ---
> > >   lib/vhost/iotlb.c      |  9 +++---
> > >   lib/vhost/rte_vhost.h  |  1 +
> > >   lib/vhost/vhost.h      | 12 ++------
> > >   lib/vhost/vhost_user.c | 63
> +++++++++++++++++++++++++++++++++++-------
> > >   4 files changed, 60 insertions(+), 25 deletions(-)
> > >
> > > diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
> > > index a0b8fd7302..5293507b63 100644
> > > --- a/lib/vhost/iotlb.c
> > > +++ b/lib/vhost/iotlb.c
> > > @@ -149,7 +149,6 @@ vhost_user_iotlb_cache_remove_all(struct
> vhost_virtqueue *vq)
> > >       rte_rwlock_write_lock(&vq->iotlb_lock);
> > >
> > >       RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
> > > -             mem_set_dump((void *)(uintptr_t)node->uaddr, node->size,
> true);
> >
> > Hmm, it should have been called with enable=false here since we are
> > removing the entry from the IOTLB cache. It should be kept in order to
> > "DONTDUMP" pages evicted from the cache.
>
> Here I was thinking that if we add an entry and then remove a
> different entry, they could be in the same page. But on I should have
> kept an enable=false in remove_all().
>
> And now that I think about it again, I could just check if there are
> any active cache entries in the page on every evict/remove, they're
> sorted so that should be an easy check. Unless there are any
> objections I'll go forward with that.
>
> >
> > >               TAILQ_REMOVE(&vq->iotlb_list, node, next);
> > >               vhost_user_iotlb_pool_put(vq, node);
> > >       }
> > > @@ -171,7 +170,6 @@ vhost_user_iotlb_cache_random_evict(struct
> vhost_virtqueue *vq)
> > >
> > >       RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
> > >               if (!entry_idx) {
> > > -                     mem_set_dump((void *)(uintptr_t)node->uaddr,
> node->size, true);
> >
> > Same here.
> >
> > >                       TAILQ_REMOVE(&vq->iotlb_list, node, next);
> > >                       vhost_user_iotlb_pool_put(vq, node);
> > >                       vq->iotlb_cache_nr--;
> > > @@ -224,14 +222,16 @@ vhost_user_iotlb_cache_insert(struct virtio_net
> *dev, struct vhost_virtqueue *vq
> > >                       vhost_user_iotlb_pool_put(vq, new_node);
> > >                       goto unlock;
> > >               } else if (node->iova > new_node->iova) {
> > > -                     mem_set_dump((void *)(uintptr_t)node->uaddr,
> node->size, true);
> > > +                     mem_set_dump((void *)(uintptr_t)new_node->uaddr,
> new_node->size, true,
> > > +                             hua_to_alignment(dev->mem, (void
> *)(uintptr_t)node->uaddr));
> > >                       TAILQ_INSERT_BEFORE(node, new_node, next);
> > >                       vq->iotlb_cache_nr++;
> > >                       goto unlock;
> > >               }
> > >       }
> > >
> > > -     mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
> > > +     mem_set_dump((void *)(uintptr_t)new_node->uaddr, new_node->size,
> true,
> > > +             hua_to_alignment(dev->mem, (void
> *)(uintptr_t)new_node->uaddr));
> > >       TAILQ_INSERT_TAIL(&vq->iotlb_list, new_node, next);
> > >       vq->iotlb_cache_nr++;
> > >
> > > @@ -259,7 +259,6 @@ vhost_user_iotlb_cache_remove(struct
> vhost_virtqueue *vq,
> > >                       break;
> > >
> > >               if (iova < node->iova + node->size) {
> > > -                     mem_set_dump((void *)(uintptr_t)node->uaddr,
> node->size, true);
> > >                       TAILQ_REMOVE(&vq->iotlb_list, node, next);
> > >                       vhost_user_iotlb_pool_put(vq, node);
> > >                       vq->iotlb_cache_nr--;
> > > diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> > > index a395843fe9..c5c97ea67e 100644
> > > --- a/lib/vhost/rte_vhost.h
> > > +++ b/lib/vhost/rte_vhost.h
> > > @@ -136,6 +136,7 @@ struct rte_vhost_mem_region {
> > >       void     *mmap_addr;
> > >       uint64_t mmap_size;
> > >       int fd;
> > > +     uint64_t alignment;
> >
> > This is not possible to do this as it breaks the ABI.
> > You have to store the information somewhere else, or simply call
> > get_blk_size() in hua_to_alignment() since the fd is not closed.
> >
>
> Sorry about that! You're right, checking the fd per operation should
> be easy enough.
>
> Thanks for the review,
>
> M
>
> > >   };
> > >
> > >   /**
> > > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> > > index 5750f0c005..a2467ba509 100644
> > > --- a/lib/vhost/vhost.h
> > > +++ b/lib/vhost/vhost.h
> > > @@ -1009,14 +1009,6 @@ mbuf_is_consumed(struct rte_mbuf *m)
> > >       return true;
> > >   }
> > >
> > > -static __rte_always_inline void
> > > -mem_set_dump(__rte_unused void *ptr, __rte_unused size_t size,
> __rte_unused bool enable)
> > > -{
> > > -#ifdef MADV_DONTDUMP
> > > -     if (madvise(ptr, size, enable ? MADV_DODUMP : MADV_DONTDUMP) ==
> -1) {
> > > -             rte_log(RTE_LOG_INFO, vhost_config_log_level,
> > > -                     "VHOST_CONFIG: could not set coredump preference
> (%s).\n", strerror(errno));
> > > -     }
> > > -#endif
> > > -}
> > > +uint64_t hua_to_alignment(struct rte_vhost_memory *mem, void *ptr);
> > > +void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t
> alignment);
> > >   #endif /* _VHOST_NET_CDEV_H_ */
> > > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> > > index d702d082dd..6d09597fbe 100644
> > > --- a/lib/vhost/vhost_user.c
> > > +++ b/lib/vhost/vhost_user.c
> > > @@ -737,6 +737,40 @@ log_addr_to_gpa(struct virtio_net *dev, struct
> vhost_virtqueue *vq)
> > >       return log_gpa;
> > >   }
> > >
> > > +uint64_t
> > > +hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
> > > +{
> > > +     struct rte_vhost_mem_region *r;
> > > +     uint32_t i;
> > > +     uintptr_t hua = (uintptr_t)ptr;
> > > +
> > > +     for (i = 0; i < mem->nregions; i++) {
> > > +             r = &mem->regions[i];
> > > +             if (hua >= r->host_user_addr &&
> > > +                     hua < r->host_user_addr + r->size) {
> > > +                     return r->alignment;
> > > +             }
> > > +     }
> > > +
> > > +     /* If region isn't found, don't align at all */
> > > +     return 1;
> > > +}
> > > +
> > > +void
> > > +mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz)
> > > +{
> > > +#ifdef MADV_DONTDUMP
> > > +     void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz);
> > > +     uintptr_t end = RTE_ALIGN_CEIL((uintptr_t)ptr + size, pagesz);
> > > +     size_t len = end - (uintptr_t)start;
> > > +
> > > +     if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) ==
> -1) {
> > > +             rte_log(RTE_LOG_INFO, vhost_config_log_level,
> > > +                     "VHOST_CONFIG: could not set coredump preference
> (%s).\n", strerror(errno));
> > > +     }
> > > +#endif
> > > +}
> > > +
> > >   static void
> > >   translate_ring_addresses(struct virtio_net **pdev, struct
> vhost_virtqueue **pvq)
> > >   {
> > > @@ -767,6 +801,8 @@ translate_ring_addresses(struct virtio_net **pdev,
> struct vhost_virtqueue **pvq)
> > >                       return;
> > >               }
> > >
> > > +             mem_set_dump(vq->desc_packed, len, true,
> > > +                     hua_to_alignment(dev->mem, vq->desc_packed));
> > >               numa_realloc(&dev, &vq);
> > >               *pdev = dev;
> > >               *pvq = vq;
> > > @@ -782,6 +818,8 @@ translate_ring_addresses(struct virtio_net **pdev,
> struct vhost_virtqueue **pvq)
> > >                       return;
> > >               }
> > >
> > > +             mem_set_dump(vq->driver_event, len, true,
> > > +                     hua_to_alignment(dev->mem, vq->driver_event));
> > >               len = sizeof(struct vring_packed_desc_event);
> > >               vq->device_event = (struct vring_packed_desc_event *)
> > >                                       (uintptr_t)ring_addr_to_vva(dev,
> > > @@ -793,9 +831,8 @@ translate_ring_addresses(struct virtio_net **pdev,
> struct vhost_virtqueue **pvq)
> > >                       return;
> > >               }
> > >
> > > -             mem_set_dump(vq->desc_packed, len, true);
> > > -             mem_set_dump(vq->driver_event, len, true);
> > > -             mem_set_dump(vq->device_event, len, true);
> > > +             mem_set_dump(vq->device_event, len, true,
> > > +                     hua_to_alignment(dev->mem, vq->device_event));
> > >               vq->access_ok = true;
> > >               return;
> > >       }
> > > @@ -812,6 +849,7 @@ translate_ring_addresses(struct virtio_net **pdev,
> struct vhost_virtqueue **pvq)
> > >               return;
> > >       }
> > >
> > > +     mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem,
> vq->desc));
> > >       numa_realloc(&dev, &vq);
> > >       *pdev = dev;
> > >       *pvq = vq;
> > > @@ -827,6 +865,7 @@ translate_ring_addresses(struct virtio_net **pdev,
> struct vhost_virtqueue **pvq)
> > >               return;
> > >       }
> > >
> > > +     mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem,
> vq->avail));
> > >       len = sizeof(struct vring_used) +
> > >               sizeof(struct vring_used_elem) * vq->size;
> > >       if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
> > > @@ -839,6 +878,8 @@ translate_ring_addresses(struct virtio_net **pdev,
> struct vhost_virtqueue **pvq)
> > >               return;
> > >       }
> > >
> > > +     mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem,
> vq->used));
> > > +
> > >       if (vq->last_used_idx != vq->used->idx) {
> > >               VHOST_LOG_CONFIG(dev->ifname, WARNING,
> > >                       "last_used_idx (%u) and vq->used->idx (%u)
> mismatches;\n",
> > > @@ -849,9 +890,6 @@ translate_ring_addresses(struct virtio_net **pdev,
> struct vhost_virtqueue **pvq)
> > >                       "some packets maybe resent for Tx and dropped
> for Rx\n");
> > >       }
> > >
> > > -     mem_set_dump(vq->desc, len, true);
> > > -     mem_set_dump(vq->avail, len, true);
> > > -     mem_set_dump(vq->used, len, true);
> > >       vq->access_ok = true;
> > >
> > >       VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc:
> %p\n", vq->desc);
> > > @@ -1230,7 +1268,8 @@ vhost_user_mmap_region(struct virtio_net *dev,
> > >       region->mmap_addr = mmap_addr;
> > >       region->mmap_size = mmap_size;
> > >       region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr +
> mmap_offset;
> > > -     mem_set_dump(mmap_addr, mmap_size, false);
> > > +     region->alignment = alignment;
> > > +     mem_set_dump(mmap_addr, mmap_size, false, alignment);
> > >
> > >       if (dev->async_copy) {
> > >               if (add_guest_pages(dev, region, alignment) < 0) {
> > > @@ -1535,7 +1574,6 @@ inflight_mem_alloc(struct virtio_net *dev, const
> char *name, size_t size, int *f
> > >               return NULL;
> > >       }
> > >
> > > -     mem_set_dump(ptr, size, false);
> > >       *fd = mfd;
> > >       return ptr;
> > >   }
> > > @@ -1566,6 +1604,7 @@ vhost_user_get_inflight_fd(struct virtio_net
> **pdev,
> > >       uint64_t pervq_inflight_size, mmap_size;
> > >       uint16_t num_queues, queue_size;
> > >       struct virtio_net *dev = *pdev;
> > > +     uint64_t alignment;
> > >       int fd, i, j;
> > >       int numa_node = SOCKET_ID_ANY;
> > >       void *addr;
> > > @@ -1628,6 +1667,8 @@ vhost_user_get_inflight_fd(struct virtio_net
> **pdev,
> > >               dev->inflight_info->fd = -1;
> > >       }
> > >
> > > +     alignment = get_blk_size(fd);
> > > +     mem_set_dump(addr, mmap_size, false, alignment);
> > >       dev->inflight_info->addr = addr;
> > >       dev->inflight_info->size = ctx->msg.payload.inflight.mmap_size =
> mmap_size;
> > >       dev->inflight_info->fd = ctx->fds[0] = fd;
> > > @@ -1744,10 +1785,10 @@ vhost_user_set_inflight_fd(struct virtio_net
> **pdev,
> > >               dev->inflight_info->fd = -1;
> > >       }
> > >
> > > -     mem_set_dump(addr, mmap_size, false);
> > >       dev->inflight_info->fd = fd;
> > >       dev->inflight_info->addr = addr;
> > >       dev->inflight_info->size = mmap_size;
> > > +     mem_set_dump(addr, mmap_size, false, get_blk_size(fd));
> > >
> > >       for (i = 0; i < num_queues; i++) {
> > >               vq = dev->virtqueue[i];
> > > @@ -2242,6 +2283,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
> > >       struct virtio_net *dev = *pdev;
> > >       int fd = ctx->fds[0];
> > >       uint64_t size, off;
> > > +     uint64_t alignment;
> > >       void *addr;
> > >       uint32_t i;
> > >
> > > @@ -2280,6 +2322,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
> > >        * fail when offset is not page size aligned.
> > >        */
> > >       addr = mmap(0, size + off, PROT_READ | PROT_WRITE, MAP_SHARED,
> fd, 0);
> > > +     alignment = get_blk_size(fd);
> > >       close(fd);
> > >       if (addr == MAP_FAILED) {
> > >               VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap log base
> failed!\n");
> > > @@ -2296,7 +2339,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
> > >       dev->log_addr = (uint64_t)(uintptr_t)addr;
> > >       dev->log_base = dev->log_addr + off;
> > >       dev->log_size = size;
> > > -     mem_set_dump(addr, size, false);
> > > +     mem_set_dump(addr, size + off, false, alignment);
> > >
> > >       for (i = 0; i < dev->nr_vring; i++) {
> > >               struct vhost_virtqueue *vq = dev->virtqueue[i];
> >
>
>

-- 

Patrick Robb

Technical Service Manager

UNH InterOperability Laboratory

21 Madbury Rd, Suite 100, Durham, NH 03824

www.iol.unh.edu

[-- Attachment #2: Type: text/html, Size: 24631 bytes --]

^ permalink raw reply	[relevance 4%]

* RE: [PATCH v11 21/22] hash: move rte_hash_set_alg out header
  2023-02-22 21:55  2%   ` [PATCH v11 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-23  7:11  0%     ` Ruifeng Wang
@ 2023-02-24  9:45  0%     ` Ruifeng Wang
  1 sibling, 0 replies; 200+ results
From: Ruifeng Wang @ 2023-02-24  9:45 UTC (permalink / raw)
  To: Stephen Hemminger, dev
  Cc: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin, nd

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Thursday, February 23, 2023 5:56 AM
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Yipeng Wang <yipeng1.wang@intel.com>;
> Sameh Gobriel <sameh.gobriel@intel.com>; Bruce Richardson <bruce.richardson@intel.com>;
> Vladimir Medvedkin <vladimir.medvedkin@intel.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> Subject: [PATCH v11 21/22] hash: move rte_hash_set_alg out header
> 
> The code for setting algorithm for hash is not at all perf sensitive, and doing it inline
> has a couple of problems. First, it means that if multiple files include the header, then
> the initialization gets done multiple times. But also, it makes it harder to fix usage of
> RTE_LOG().
> 
> Despite what the checking script say. This is not an ABI change, the previous version
> inlined the same code; therefore both old and new code will work the same.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  lib/hash/meson.build     |  1 +
>  lib/hash/rte_crc_arm64.h |  8 ++---
>  lib/hash/rte_crc_x86.h   | 10 +++---
>  lib/hash/rte_hash_crc.c  | 68 ++++++++++++++++++++++++++++++++++++++++
>  lib/hash/rte_hash_crc.h  | 48 ++--------------------------
>  lib/hash/version.map     |  7 +++++
>  6 files changed, 88 insertions(+), 54 deletions(-)  create mode 100644
> lib/hash/rte_hash_crc.c
> 
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>


^ permalink raw reply	[relevance 0%]

* RE: [PATCH v3 1/3] ethdev: enable direct rearm with separate API
  @ 2023-02-24  8:55  0%           ` Feifei Wang
  0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2023-02-24  8:55 UTC (permalink / raw)
  To: Morten Brørup, thomas, Ferruh Yigit, Andrew Rybchenko
  Cc: dev, konstantin.v.ananyev, nd, Honnappa Nagarahalli,
	Ruifeng Wang, nd, nd

Sorry for my delayed reply.

> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Wednesday, January 4, 2023 6:11 PM
> To: Feifei Wang <Feifei.Wang2@arm.com>; thomas@monjalon.net;
> Ferruh Yigit <ferruh.yigit@amd.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; konstantin.v.ananyev@yandex.ru; nd <nd@arm.com>;
> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>
> Subject: RE: [PATCH v3 1/3] ethdev: enable direct rearm with separate API
> 
> > From: Feifei Wang [mailto:Feifei.Wang2@arm.com]
> > Sent: Wednesday, 4 January 2023 09.51
> >
> > Hi, Morten
> >
> > > From: Morten Brørup <mb@smartsharesystems.com>
> > > Sent: Wednesday, January 4, 2023 4:22 PM
> > >
> > > > From: Feifei Wang [mailto:feifei.wang2@arm.com]
> > > > Sent: Wednesday, 4 January 2023 08.31
> > > >
> > > > Add 'tx_fill_sw_ring' and 'rx_flush_descriptor' API into direct
> > rearm
> > > > mode for separate Rx and Tx Operation. And this can support
> > different
> > > > multiple sources in direct rearm mode. For examples, Rx driver is
> > > > ixgbe, and Tx driver is i40e.
> > > >
> > > > Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > ---
> > >
> > > This feature looks very promising for performance. I am pleased to
> > see
> > > progress on it.
> > >
> > Thanks very much for your reviewing.
> >
> > > Please confirm that the fast path functions are still thread safe,
> > i.e. one EAL
> > > thread may be calling rte_eth_rx_burst() while another EAL thread is
> > calling
> > > rte_eth_tx_burst().
> > >
> > For the multiple threads safe, like we say in cover letter, current
> > direct-rearm support Rx and Tx in the same thread. If we consider
> > multiple threads like 'pipeline model', there need to add 'lock' in
> > the data path which can decrease the performance.
> > Thus, the first step we do is try to enable direct-rearm in the single
> > thread, and then we will consider to enable direct rearm in multiple
> > threads and improve the performance.
> 
> OK, doing it in steps is a good idea for a feature like this - makes it easier to
> understand and review.
> 
> When proceeding to add support for the "pipeline model", perhaps the
> lockless principles from the rte_ring can be used in this feature too.
> 
> From a high level perspective, I'm somewhat worried that releasing a "work-
> in-progress" version of this feature in some DPDK version will cause API/ABI
> breakage discussions when progressing to the next steps of the
> implementation to make the feature more complete. Not only support for
> thread safety across simultaneous RX and TX, but also support for multiple
> mbuf pools per RX queue [1]. Marking the functions experimental should
> alleviate such discussions, but there is a risk of pushback to not break the
> API/ABI anyway.
> 
> [1]:
> https://elixir.bootlin.com/dpdk/v22.11.1/source/lib/ethdev/rte_ethdev.h#L1
> 105
> 

[Feifei] I think the subsequent upgrades will not significantly affect the stability
of the API we currently define.

For thread safety across simultaneous RX and TX, the future lockless changes will
happen in the PMD layer, such as CAS load/store for the PMD's rxq queue index.
Thus, this cannot affect the stability of the upper API.

For multiple mbuf pools per RX queue, direct-rearm just puts Tx buffers into Rx buffers
and does not care which mempool a buffer comes from.
Buffers from different mempools are eventually freed back into their respective sources
in the non-FAST_FREE path.
I think this is a mistake in the cover letter. The previous direct-rearm could only support
FAST_FREE, so it constrained the buffers to come from the same mempool. Now the latest
version can also support the non-FAST_FREE path, but we forgot to update the cover letter.
> [...]
> 
> > > > --- a/lib/ethdev/ethdev_driver.h
> > > > +++ b/lib/ethdev/ethdev_driver.h
> > > > @@ -59,6 +59,10 @@ struct rte_eth_dev {
> > > >  	eth_rx_descriptor_status_t rx_descriptor_status;
> > > >  	/** Check the status of a Tx descriptor */
> > > >  	eth_tx_descriptor_status_t tx_descriptor_status;
> > > > +	/** Fill Rx sw-ring with Tx buffers in direct rearm mode */
> > > > +	eth_tx_fill_sw_ring_t tx_fill_sw_ring;
> > >
> > > What is "Rx sw-ring"? Please confirm that this is not an Intel PMD
> > specific
> > > term and/or implementation detail, e.g. by providing a conceptual
> > > implementation for a non-Intel PMD, e.g. mlx5.
> > Rx sw_ring is used  to store mbufs in intel PMD. This is the same as
> > 'rxq->elts'
> > in mlx5.
> 
> Sounds good.
> 
> Then all we need is consensus on a generic name for this, unless "Rx sw-ring"
> already is the generic name. (I'm not a PMD developer, so I might be
> completely off track here.) Naming is often debatable, so I'll stop talking
> about it now - I only wanted to highlight that we should avoid vendor-
> specific terms in public APIs intended to be implemented by multiple vendors.
> On the other hand... if no other vendors raise their voices before merging
> into the DPDK main repository, they forfeit their right to complain about it. ;-)
> 
> > Agree with that we need to providing a conceptual implementation for
> > all PMDs.
> 
> My main point is that we should ensure that the feature is not too tightly
> coupled with the way Intel PMDs implement mbuf handling. Providing a
> conceptual implementation for a non-Intel PMD is one way of checking this.
> 
> The actual implementation in other PMDs could be left up to the various NIC
> vendors.

Yes. And we will rename our API to make it suitable for all vendors:
rte_eth_direct_rearm  ->  rte_eth_buf_cycle   (upper API for direct rearm)
rte_eth_tx_fill_sw_ring  -> rte_eth_tx_buf_stash   (Tx queue stashes buffers into the Rx ring)
rte_eth_rx_flush_descriptor -> rte_eth_rx_descriptors_refill (Rx queue refills its descriptors)

struct rte_eth_rxq_rearm_data {
	void *rx_sw_ring;
	uint16_t *rearm_start;
	uint16_t *rearm_nb;
}

->

struct rxq_recycle_info {
	struct rte_mbuf **buf_ring;
	uint16_t *offset;	/* e.g. (uint16_t *)(&rxq->ci) */
	uint16_t *end;
	uint16_t ring_size;
}
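
Roughly, the upper API would just chain the two steps. A sketch only (the exact prototypes
and parameters are still under discussion, so the signatures below are assumptions):

/* Sketch: the upper API stashes recyclable Tx buffers straight into the Rx
 * ring, then refills that many Rx descriptors from them.
 */
uint16_t
rte_eth_buf_cycle(uint16_t rx_port, uint16_t rx_queue,
		  uint16_t tx_port, uint16_t tx_queue,
		  struct rxq_recycle_info *info)
{
	uint16_t nb;

	/* Tx queue hands its recyclable mbufs to the Rx buffer ring. */
	nb = rte_eth_tx_buf_stash(tx_port, tx_queue, info);
	if (nb == 0)
		return 0;

	/* Rx queue rewrites/refills the corresponding descriptors. */
	rte_eth_rx_descriptors_refill(rx_port, rx_queue, nb);

	return nb;
}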

^ permalink raw reply	[relevance 0%]

* RE: [PATCH v1 04/13] graph: add get/set graph worker model APIs
  2023-02-20 13:50  3%   ` Jerin Jacob
@ 2023-02-24  6:31  0%     ` Yan, Zhirun
  2023-02-26 22:23  0%       ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Yan, Zhirun @ 2023-02-24  6:31 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, jerinj, kirankumark, ndabilpuram, Liang, Cunming, Wang, Haiyue



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Monday, February 20, 2023 9:51 PM
> To: Yan, Zhirun <zhirun.yan@intel.com>
> Cc: dev@dpdk.org; jerinj@marvell.com; kirankumark@marvell.com;
> ndabilpuram@marvell.com; Liang, Cunming <cunming.liang@intel.com>; Wang,
> Haiyue <haiyue.wang@intel.com>
> Subject: Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
> 
> On Thu, Nov 17, 2022 at 10:40 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
> >
> > Add new get/set APIs to configure graph worker model which is used to
> > determine which model will be chosen.
> >
> > Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> > Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> > Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> > ---
> >  lib/graph/rte_graph_worker.h        | 51 +++++++++++++++++++++++++++++
> >  lib/graph/rte_graph_worker_common.h | 13 ++++++++
> >  lib/graph/version.map               |  3 ++
> >  3 files changed, 67 insertions(+)
> >
> > diff --git a/lib/graph/rte_graph_worker.h
> > b/lib/graph/rte_graph_worker.h index 54d1390786..a0ea0df153 100644
> > --- a/lib/graph/rte_graph_worker.h
> > +++ b/lib/graph/rte_graph_worker.h
> > @@ -1,5 +1,56 @@
> >  #include "rte_graph_model_rtc.h"
> >
> > +static enum rte_graph_worker_model worker_model =
> > +RTE_GRAPH_MODEL_DEFAULT;
> 
> This will break the multiprocess.

Thanks. I will use TLS for per-thread local storage.

> 
> > +
> > +/** Graph worker models */
> > +enum rte_graph_worker_model {
> > +#define WORKER_MODEL_DEFAULT "default"
> 
> Why need strings?
> Also, every symbol in a public header file should start with RTE_ to avoid
> namespace conflict.

It was used to config the model in app. I can put the string into example.

> 
> > +       RTE_GRAPH_MODEL_DEFAULT = 0,
> > +#define WORKER_MODEL_RTC "rtc"
> > +       RTE_GRAPH_MODEL_RTC,
> 
> Why not RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT in enum
> itself.
Yes, will do in next version.

> 
> > +#define WORKER_MODEL_GENERIC "generic"
> 
> Generic is a very overloaded term. Use pipeline here i.e
> RTE_GRAPH_MODEL_PIPELINE

Actually, it's not a purely pipeline mode. I prefer to change to hybrid. 
> 
> 
> > +       RTE_GRAPH_MODEL_GENERIC,
> > +       RTE_GRAPH_MODEL_MAX,
> 
> No need for MAX, it will break the ABI for future. See other subsystem such as
> cryptodev.

Thanks, I will change it.
> 
> > +};
> 
> >

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2] vhost: fix madvise arguments alignment
  2023-02-23 16:12  3%   ` Maxime Coquelin
@ 2023-02-23 16:57  0%     ` Mike Pattrick
  2023-02-24 15:05  4%       ` Patrick Robb
  0 siblings, 1 reply; 200+ results
From: Mike Pattrick @ 2023-02-23 16:57 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: dev, david.marchand, chenbo.xia

On Thu, Feb 23, 2023 at 11:12 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> Hi Mike,
>
> Thanks for  looking into this issue.
>
> On 2/23/23 05:35, Mike Pattrick wrote:
> > The arguments passed to madvise should be aligned to the alignment of
> > the backing memory. Now we keep track of each regions alignment and use
> > then when setting coredump preferences. To facilitate this, a new member
> > was added to rte_vhost_mem_region. A new function was added to easily
> > translate memory address back to region alignment. Unneeded calls to
> > madvise were reduced, as the cache removal case should already be
> > covered by the cache insertion case. The previously inline function
> > mem_set_dump was removed from a header file and made not inline.
> >
> > Fixes: 338ad77c9ed3 ("vhost: exclude VM hugepages from coredumps")
> >
> > Signed-off-by: Mike Pattrick <mkp@redhat.com>
> > ---
> > Since v1:
> >   - Corrected a cast for 32bit compiles
> > ---
> >   lib/vhost/iotlb.c      |  9 +++---
> >   lib/vhost/rte_vhost.h  |  1 +
> >   lib/vhost/vhost.h      | 12 ++------
> >   lib/vhost/vhost_user.c | 63 +++++++++++++++++++++++++++++++++++-------
> >   4 files changed, 60 insertions(+), 25 deletions(-)
> >
> > diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
> > index a0b8fd7302..5293507b63 100644
> > --- a/lib/vhost/iotlb.c
> > +++ b/lib/vhost/iotlb.c
> > @@ -149,7 +149,6 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
> >       rte_rwlock_write_lock(&vq->iotlb_lock);
> >
> >       RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
> > -             mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
>
> Hmm, it should have been called with enable=false here since we are
> removing the entry from the IOTLB cache. It should be kept in order to
> "DONTDUMP" pages evicted from the cache.

Here I was thinking that if we add an entry and then remove a
different entry, they could be in the same page. But I should have
kept enable=false in remove_all().

And now that I think about it again, I could just check whether there are
any active cache entries left in the page on every evict/remove; they're
sorted, so that should be an easy check. Unless there are any
objections I'll go forward with that.
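
Something like this on the remove/evict path (untested sketch, assuming the entry and
field names from lib/vhost/iotlb.c):

/* Untested sketch: only mark the evicted entry's pages DONTDUMP when no
 * other cached entry still overlaps the same page range. Since the list
 * is sorted by iova, checking just the neighbours would be enough in
 * practice.
 */
static bool
iotlb_page_shared(struct vhost_virtqueue *vq, struct vhost_iotlb_entry *node,
		uint64_t pagesz)
{
	struct vhost_iotlb_entry *it;
	uint64_t start = RTE_ALIGN_FLOOR(node->uaddr, pagesz);
	uint64_t end = RTE_ALIGN_CEIL(node->uaddr + node->size, pagesz);

	TAILQ_FOREACH(it, &vq->iotlb_list, next) {
		if (it == node)
			continue;
		if (it->uaddr < end && it->uaddr + it->size > start)
			return true;
	}

	return false;
}

	/* in the remove/evict paths, with pagesz from hua_to_alignment(): */
	if (!iotlb_page_shared(vq, node, pagesz))
		mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, false, pagesz);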

>
> >               TAILQ_REMOVE(&vq->iotlb_list, node, next);
> >               vhost_user_iotlb_pool_put(vq, node);
> >       }
> > @@ -171,7 +170,6 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
> >
> >       RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
> >               if (!entry_idx) {
> > -                     mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
>
> Same here.
>
> >                       TAILQ_REMOVE(&vq->iotlb_list, node, next);
> >                       vhost_user_iotlb_pool_put(vq, node);
> >                       vq->iotlb_cache_nr--;
> > @@ -224,14 +222,16 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, struct vhost_virtqueue *vq
> >                       vhost_user_iotlb_pool_put(vq, new_node);
> >                       goto unlock;
> >               } else if (node->iova > new_node->iova) {
> > -                     mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
> > +                     mem_set_dump((void *)(uintptr_t)new_node->uaddr, new_node->size, true,
> > +                             hua_to_alignment(dev->mem, (void *)(uintptr_t)node->uaddr));
> >                       TAILQ_INSERT_BEFORE(node, new_node, next);
> >                       vq->iotlb_cache_nr++;
> >                       goto unlock;
> >               }
> >       }
> >
> > -     mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
> > +     mem_set_dump((void *)(uintptr_t)new_node->uaddr, new_node->size, true,
> > +             hua_to_alignment(dev->mem, (void *)(uintptr_t)new_node->uaddr));
> >       TAILQ_INSERT_TAIL(&vq->iotlb_list, new_node, next);
> >       vq->iotlb_cache_nr++;
> >
> > @@ -259,7 +259,6 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
> >                       break;
> >
> >               if (iova < node->iova + node->size) {
> > -                     mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
> >                       TAILQ_REMOVE(&vq->iotlb_list, node, next);
> >                       vhost_user_iotlb_pool_put(vq, node);
> >                       vq->iotlb_cache_nr--;
> > diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> > index a395843fe9..c5c97ea67e 100644
> > --- a/lib/vhost/rte_vhost.h
> > +++ b/lib/vhost/rte_vhost.h
> > @@ -136,6 +136,7 @@ struct rte_vhost_mem_region {
> >       void     *mmap_addr;
> >       uint64_t mmap_size;
> >       int fd;
> > +     uint64_t alignment;
>
> It is not possible to do this, as it breaks the ABI.
> You have to store the information somewhere else, or simply call
> get_blk_size() in hua_to_alignment() since the fd is not closed.
>

Sorry about that! You're right, checking the fd per operation should
be easy enough.

Thanks for the review,

M

> >   };
> >
> >   /**
> > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> > index 5750f0c005..a2467ba509 100644
> > --- a/lib/vhost/vhost.h
> > +++ b/lib/vhost/vhost.h
> > @@ -1009,14 +1009,6 @@ mbuf_is_consumed(struct rte_mbuf *m)
> >       return true;
> >   }
> >
> > -static __rte_always_inline void
> > -mem_set_dump(__rte_unused void *ptr, __rte_unused size_t size, __rte_unused bool enable)
> > -{
> > -#ifdef MADV_DONTDUMP
> > -     if (madvise(ptr, size, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) {
> > -             rte_log(RTE_LOG_INFO, vhost_config_log_level,
> > -                     "VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno));
> > -     }
> > -#endif
> > -}
> > +uint64_t hua_to_alignment(struct rte_vhost_memory *mem, void *ptr);
> > +void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment);
> >   #endif /* _VHOST_NET_CDEV_H_ */
> > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> > index d702d082dd..6d09597fbe 100644
> > --- a/lib/vhost/vhost_user.c
> > +++ b/lib/vhost/vhost_user.c
> > @@ -737,6 +737,40 @@ log_addr_to_gpa(struct virtio_net *dev, struct vhost_virtqueue *vq)
> >       return log_gpa;
> >   }
> >
> > +uint64_t
> > +hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
> > +{
> > +     struct rte_vhost_mem_region *r;
> > +     uint32_t i;
> > +     uintptr_t hua = (uintptr_t)ptr;
> > +
> > +     for (i = 0; i < mem->nregions; i++) {
> > +             r = &mem->regions[i];
> > +             if (hua >= r->host_user_addr &&
> > +                     hua < r->host_user_addr + r->size) {
> > +                     return r->alignment;
> > +             }
> > +     }
> > +
> > +     /* If region isn't found, don't align at all */
> > +     return 1;
> > +}
> > +
> > +void
> > +mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz)
> > +{
> > +#ifdef MADV_DONTDUMP
> > +     void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz);
> > +     uintptr_t end = RTE_ALIGN_CEIL((uintptr_t)ptr + size, pagesz);
> > +     size_t len = end - (uintptr_t)start;
> > +
> > +     if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) {
> > +             rte_log(RTE_LOG_INFO, vhost_config_log_level,
> > +                     "VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno));
> > +     }
> > +#endif
> > +}
> > +
> >   static void
> >   translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >   {
> > @@ -767,6 +801,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >                       return;
> >               }
> >
> > +             mem_set_dump(vq->desc_packed, len, true,
> > +                     hua_to_alignment(dev->mem, vq->desc_packed));
> >               numa_realloc(&dev, &vq);
> >               *pdev = dev;
> >               *pvq = vq;
> > @@ -782,6 +818,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >                       return;
> >               }
> >
> > +             mem_set_dump(vq->driver_event, len, true,
> > +                     hua_to_alignment(dev->mem, vq->driver_event));
> >               len = sizeof(struct vring_packed_desc_event);
> >               vq->device_event = (struct vring_packed_desc_event *)
> >                                       (uintptr_t)ring_addr_to_vva(dev,
> > @@ -793,9 +831,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >                       return;
> >               }
> >
> > -             mem_set_dump(vq->desc_packed, len, true);
> > -             mem_set_dump(vq->driver_event, len, true);
> > -             mem_set_dump(vq->device_event, len, true);
> > +             mem_set_dump(vq->device_event, len, true,
> > +                     hua_to_alignment(dev->mem, vq->device_event));
> >               vq->access_ok = true;
> >               return;
> >       }
> > @@ -812,6 +849,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >               return;
> >       }
> >
> > +     mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc));
> >       numa_realloc(&dev, &vq);
> >       *pdev = dev;
> >       *pvq = vq;
> > @@ -827,6 +865,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >               return;
> >       }
> >
> > +     mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail));
> >       len = sizeof(struct vring_used) +
> >               sizeof(struct vring_used_elem) * vq->size;
> >       if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
> > @@ -839,6 +878,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >               return;
> >       }
> >
> > +     mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem, vq->used));
> > +
> >       if (vq->last_used_idx != vq->used->idx) {
> >               VHOST_LOG_CONFIG(dev->ifname, WARNING,
> >                       "last_used_idx (%u) and vq->used->idx (%u) mismatches;\n",
> > @@ -849,9 +890,6 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
> >                       "some packets maybe resent for Tx and dropped for Rx\n");
> >       }
> >
> > -     mem_set_dump(vq->desc, len, true);
> > -     mem_set_dump(vq->avail, len, true);
> > -     mem_set_dump(vq->used, len, true);
> >       vq->access_ok = true;
> >
> >       VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc);
> > @@ -1230,7 +1268,8 @@ vhost_user_mmap_region(struct virtio_net *dev,
> >       region->mmap_addr = mmap_addr;
> >       region->mmap_size = mmap_size;
> >       region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset;
> > -     mem_set_dump(mmap_addr, mmap_size, false);
> > +     region->alignment = alignment;
> > +     mem_set_dump(mmap_addr, mmap_size, false, alignment);
> >
> >       if (dev->async_copy) {
> >               if (add_guest_pages(dev, region, alignment) < 0) {
> > @@ -1535,7 +1574,6 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f
> >               return NULL;
> >       }
> >
> > -     mem_set_dump(ptr, size, false);
> >       *fd = mfd;
> >       return ptr;
> >   }
> > @@ -1566,6 +1604,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev,
> >       uint64_t pervq_inflight_size, mmap_size;
> >       uint16_t num_queues, queue_size;
> >       struct virtio_net *dev = *pdev;
> > +     uint64_t alignment;
> >       int fd, i, j;
> >       int numa_node = SOCKET_ID_ANY;
> >       void *addr;
> > @@ -1628,6 +1667,8 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev,
> >               dev->inflight_info->fd = -1;
> >       }
> >
> > +     alignment = get_blk_size(fd);
> > +     mem_set_dump(addr, mmap_size, false, alignment);
> >       dev->inflight_info->addr = addr;
> >       dev->inflight_info->size = ctx->msg.payload.inflight.mmap_size = mmap_size;
> >       dev->inflight_info->fd = ctx->fds[0] = fd;
> > @@ -1744,10 +1785,10 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev,
> >               dev->inflight_info->fd = -1;
> >       }
> >
> > -     mem_set_dump(addr, mmap_size, false);
> >       dev->inflight_info->fd = fd;
> >       dev->inflight_info->addr = addr;
> >       dev->inflight_info->size = mmap_size;
> > +     mem_set_dump(addr, mmap_size, false, get_blk_size(fd));
> >
> >       for (i = 0; i < num_queues; i++) {
> >               vq = dev->virtqueue[i];
> > @@ -2242,6 +2283,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
> >       struct virtio_net *dev = *pdev;
> >       int fd = ctx->fds[0];
> >       uint64_t size, off;
> > +     uint64_t alignment;
> >       void *addr;
> >       uint32_t i;
> >
> > @@ -2280,6 +2322,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
> >        * fail when offset is not page size aligned.
> >        */
> >       addr = mmap(0, size + off, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > +     alignment = get_blk_size(fd);
> >       close(fd);
> >       if (addr == MAP_FAILED) {
> >               VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap log base failed!\n");
> > @@ -2296,7 +2339,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
> >       dev->log_addr = (uint64_t)(uintptr_t)addr;
> >       dev->log_base = dev->log_addr + off;
> >       dev->log_size = size;
> > -     mem_set_dump(addr, size, false);
> > +     mem_set_dump(addr, size + off, false, alignment);
> >
> >       for (i = 0; i < dev->nr_vring; i++) {
> >               struct vhost_virtqueue *vq = dev->virtqueue[i];
>


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2] vhost: fix madvise arguments alignment
  @ 2023-02-23 16:12  3%   ` Maxime Coquelin
  2023-02-23 16:57  0%     ` Mike Pattrick
  0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-02-23 16:12 UTC (permalink / raw)
  To: Mike Pattrick, dev; +Cc: david.marchand, chenbo.xia

Hi Mike,

Thanks for  looking into this issue.

On 2/23/23 05:35, Mike Pattrick wrote:
> The arguments passed to madvise should be aligned to the alignment of
> the backing memory. Now we keep track of each region's alignment and use
> it when setting coredump preferences. To facilitate this, a new member
> was added to rte_vhost_mem_region. A new function was added to easily
> translate a memory address back to its region's alignment. Unneeded calls
> to madvise were reduced, as the cache removal case should already be
> covered by the cache insertion case. The previously inline function
> mem_set_dump was removed from a header file and made non-inline.
> 
> Fixes: 338ad77c9ed3 ("vhost: exclude VM hugepages from coredumps")
> 
> Signed-off-by: Mike Pattrick <mkp@redhat.com>
> ---
> Since v1:
>   - Corrected a cast for 32bit compiles
> ---
>   lib/vhost/iotlb.c      |  9 +++---
>   lib/vhost/rte_vhost.h  |  1 +
>   lib/vhost/vhost.h      | 12 ++------
>   lib/vhost/vhost_user.c | 63 +++++++++++++++++++++++++++++++++++-------
>   4 files changed, 60 insertions(+), 25 deletions(-)
> 
> diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
> index a0b8fd7302..5293507b63 100644
> --- a/lib/vhost/iotlb.c
> +++ b/lib/vhost/iotlb.c
> @@ -149,7 +149,6 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
>   	rte_rwlock_write_lock(&vq->iotlb_lock);
>   
>   	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
> -		mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);

Hmm, it should have been called with enable=false here since we are
removing the entry from the IOTLB cache. It should be kept in order to
"DONTDUMP" pages evicted from the cache.

>   		TAILQ_REMOVE(&vq->iotlb_list, node, next);
>   		vhost_user_iotlb_pool_put(vq, node);
>   	}
> @@ -171,7 +170,6 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
>   
>   	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
>   		if (!entry_idx) {
> -			mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);

Same here.

>   			TAILQ_REMOVE(&vq->iotlb_list, node, next);
>   			vhost_user_iotlb_pool_put(vq, node);
>   			vq->iotlb_cache_nr--;
> @@ -224,14 +222,16 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, struct vhost_virtqueue *vq
>   			vhost_user_iotlb_pool_put(vq, new_node);
>   			goto unlock;
>   		} else if (node->iova > new_node->iova) {
> -			mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
> +			mem_set_dump((void *)(uintptr_t)new_node->uaddr, new_node->size, true,
> +				hua_to_alignment(dev->mem, (void *)(uintptr_t)node->uaddr));
>   			TAILQ_INSERT_BEFORE(node, new_node, next);
>   			vq->iotlb_cache_nr++;
>   			goto unlock;
>   		}
>   	}
>   
> -	mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
> +	mem_set_dump((void *)(uintptr_t)new_node->uaddr, new_node->size, true,
> +		hua_to_alignment(dev->mem, (void *)(uintptr_t)new_node->uaddr));
>   	TAILQ_INSERT_TAIL(&vq->iotlb_list, new_node, next);
>   	vq->iotlb_cache_nr++;
>   
> @@ -259,7 +259,6 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
>   			break;
>   
>   		if (iova < node->iova + node->size) {
> -			mem_set_dump((void *)(uintptr_t)node->uaddr, node->size, true);
>   			TAILQ_REMOVE(&vq->iotlb_list, node, next);
>   			vhost_user_iotlb_pool_put(vq, node);
>   			vq->iotlb_cache_nr--;
> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> index a395843fe9..c5c97ea67e 100644
> --- a/lib/vhost/rte_vhost.h
> +++ b/lib/vhost/rte_vhost.h
> @@ -136,6 +136,7 @@ struct rte_vhost_mem_region {
>   	void	 *mmap_addr;
>   	uint64_t mmap_size;
>   	int fd;
> +	uint64_t alignment;

It is not possible to do this as it breaks the ABI.
You have to store the information somewhere else, or simply call
get_blk_size() in hua_to_alignment() since the fd is not closed.
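
For the second option, something like this should be enough (untested
sketch, reusing the lookup loop from your patch and only changing where
the alignment comes from):

    /* Untested sketch: no new rte_vhost_mem_region field needed. */
    uint64_t
    hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
    {
        struct rte_vhost_mem_region *r;
        uintptr_t hua = (uintptr_t)ptr;
        uint32_t i;

        for (i = 0; i < mem->nregions; i++) {
            r = &mem->regions[i];
            if (hua >= r->host_user_addr &&
                    hua < r->host_user_addr + r->size)
                return get_blk_size(r->fd); /* fd is still open */
        }

        /* If region isn't found, don't align at all */
        return 1;
    }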

>   };
>   
>   /**
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 5750f0c005..a2467ba509 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -1009,14 +1009,6 @@ mbuf_is_consumed(struct rte_mbuf *m)
>   	return true;
>   }
>   
> -static __rte_always_inline void
> -mem_set_dump(__rte_unused void *ptr, __rte_unused size_t size, __rte_unused bool enable)
> -{
> -#ifdef MADV_DONTDUMP
> -	if (madvise(ptr, size, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) {
> -		rte_log(RTE_LOG_INFO, vhost_config_log_level,
> -			"VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno));
> -	}
> -#endif
> -}
> +uint64_t hua_to_alignment(struct rte_vhost_memory *mem, void *ptr);
> +void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment);
>   #endif /* _VHOST_NET_CDEV_H_ */
> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index d702d082dd..6d09597fbe 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -737,6 +737,40 @@ log_addr_to_gpa(struct virtio_net *dev, struct vhost_virtqueue *vq)
>   	return log_gpa;
>   }
>   
> +uint64_t
> +hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
> +{
> +	struct rte_vhost_mem_region *r;
> +	uint32_t i;
> +	uintptr_t hua = (uintptr_t)ptr;
> +
> +	for (i = 0; i < mem->nregions; i++) {
> +		r = &mem->regions[i];
> +		if (hua >= r->host_user_addr &&
> +			hua < r->host_user_addr + r->size) {
> +			return r->alignment;
> +		}
> +	}
> +
> +	/* If region isn't found, don't align at all */
> +	return 1;
> +}
> +
> +void
> +mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz)
> +{
> +#ifdef MADV_DONTDUMP
> +	void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz);
> +	uintptr_t end = RTE_ALIGN_CEIL((uintptr_t)ptr + size, pagesz);
> +	size_t len = end - (uintptr_t)start;
> +
> +	if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) {
> +		rte_log(RTE_LOG_INFO, vhost_config_log_level,
> +			"VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno));
> +	}
> +#endif
> +}
> +
>   static void
>   translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   {
> @@ -767,6 +801,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   			return;
>   		}
>   
> +		mem_set_dump(vq->desc_packed, len, true,
> +			hua_to_alignment(dev->mem, vq->desc_packed));
>   		numa_realloc(&dev, &vq);
>   		*pdev = dev;
>   		*pvq = vq;
> @@ -782,6 +818,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   			return;
>   		}
>   
> +		mem_set_dump(vq->driver_event, len, true,
> +			hua_to_alignment(dev->mem, vq->driver_event));
>   		len = sizeof(struct vring_packed_desc_event);
>   		vq->device_event = (struct vring_packed_desc_event *)
>   					(uintptr_t)ring_addr_to_vva(dev,
> @@ -793,9 +831,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   			return;
>   		}
>   
> -		mem_set_dump(vq->desc_packed, len, true);
> -		mem_set_dump(vq->driver_event, len, true);
> -		mem_set_dump(vq->device_event, len, true);
> +		mem_set_dump(vq->device_event, len, true,
> +			hua_to_alignment(dev->mem, vq->device_event));
>   		vq->access_ok = true;
>   		return;
>   	}
> @@ -812,6 +849,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   		return;
>   	}
>   
> +	mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc));
>   	numa_realloc(&dev, &vq);
>   	*pdev = dev;
>   	*pvq = vq;
> @@ -827,6 +865,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   		return;
>   	}
>   
> +	mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail));
>   	len = sizeof(struct vring_used) +
>   		sizeof(struct vring_used_elem) * vq->size;
>   	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
> @@ -839,6 +878,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   		return;
>   	}
>   
> +	mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem, vq->used));
> +
>   	if (vq->last_used_idx != vq->used->idx) {
>   		VHOST_LOG_CONFIG(dev->ifname, WARNING,
>   			"last_used_idx (%u) and vq->used->idx (%u) mismatches;\n",
> @@ -849,9 +890,6 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>   			"some packets maybe resent for Tx and dropped for Rx\n");
>   	}
>   
> -	mem_set_dump(vq->desc, len, true);
> -	mem_set_dump(vq->avail, len, true);
> -	mem_set_dump(vq->used, len, true);
>   	vq->access_ok = true;
>   
>   	VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc);
> @@ -1230,7 +1268,8 @@ vhost_user_mmap_region(struct virtio_net *dev,
>   	region->mmap_addr = mmap_addr;
>   	region->mmap_size = mmap_size;
>   	region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset;
> -	mem_set_dump(mmap_addr, mmap_size, false);
> +	region->alignment = alignment;
> +	mem_set_dump(mmap_addr, mmap_size, false, alignment);
>   
>   	if (dev->async_copy) {
>   		if (add_guest_pages(dev, region, alignment) < 0) {
> @@ -1535,7 +1574,6 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f
>   		return NULL;
>   	}
>   
> -	mem_set_dump(ptr, size, false);
>   	*fd = mfd;
>   	return ptr;
>   }
> @@ -1566,6 +1604,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev,
>   	uint64_t pervq_inflight_size, mmap_size;
>   	uint16_t num_queues, queue_size;
>   	struct virtio_net *dev = *pdev;
> +	uint64_t alignment;
>   	int fd, i, j;
>   	int numa_node = SOCKET_ID_ANY;
>   	void *addr;
> @@ -1628,6 +1667,8 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev,
>   		dev->inflight_info->fd = -1;
>   	}
>   
> +	alignment = get_blk_size(fd);
> +	mem_set_dump(addr, mmap_size, false, alignment);
>   	dev->inflight_info->addr = addr;
>   	dev->inflight_info->size = ctx->msg.payload.inflight.mmap_size = mmap_size;
>   	dev->inflight_info->fd = ctx->fds[0] = fd;
> @@ -1744,10 +1785,10 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev,
>   		dev->inflight_info->fd = -1;
>   	}
>   
> -	mem_set_dump(addr, mmap_size, false);
>   	dev->inflight_info->fd = fd;
>   	dev->inflight_info->addr = addr;
>   	dev->inflight_info->size = mmap_size;
> +	mem_set_dump(addr, mmap_size, false, get_blk_size(fd));
>   
>   	for (i = 0; i < num_queues; i++) {
>   		vq = dev->virtqueue[i];
> @@ -2242,6 +2283,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
>   	struct virtio_net *dev = *pdev;
>   	int fd = ctx->fds[0];
>   	uint64_t size, off;
> +	uint64_t alignment;
>   	void *addr;
>   	uint32_t i;
>   
> @@ -2280,6 +2322,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
>   	 * fail when offset is not page size aligned.
>   	 */
>   	addr = mmap(0, size + off, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> +	alignment = get_blk_size(fd);
>   	close(fd);
>   	if (addr == MAP_FAILED) {
>   		VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap log base failed!\n");
> @@ -2296,7 +2339,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
>   	dev->log_addr = (uint64_t)(uintptr_t)addr;
>   	dev->log_base = dev->log_addr + off;
>   	dev->log_size = size;
> -	mem_set_dump(addr, size, false);
> +	mem_set_dump(addr, size + off, false, alignment);
>   
>   	for (i = 0; i < dev->nr_vring; i++) {
>   		struct vhost_virtqueue *vq = dev->virtqueue[i];


^ permalink raw reply	[relevance 3%]

* RE: [PATCH v11 21/22] hash: move rte_hash_set_alg out header
  2023-02-23  7:11  0%     ` Ruifeng Wang
@ 2023-02-23  7:27  0%       ` Ruifeng Wang
  0 siblings, 0 replies; 200+ results
From: Ruifeng Wang @ 2023-02-23  7:27 UTC (permalink / raw)
  To: Stephen Hemminger, dev
  Cc: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin, nd, nd

> -----Original Message-----
> From: Ruifeng Wang
> Sent: Thursday, February 23, 2023 3:11 PM
> To: Stephen Hemminger <stephen@networkplumber.org>; dev@dpdk.org
> Cc: Yipeng Wang <yipeng1.wang@intel.com>; Sameh Gobriel <sameh.gobriel@intel.com>; Bruce
> Richardson <bruce.richardson@intel.com>; Vladimir Medvedkin <vladimir.medvedkin@intel.com>;
> nd <nd@arm.com>
> Subject: RE: [PATCH v11 21/22] hash: move rte_hash_set_alg out header
> 
> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Thursday, February 23, 2023 5:56 AM
> > To: dev@dpdk.org
> > Cc: Stephen Hemminger <stephen@networkplumber.org>; Yipeng Wang
> > <yipeng1.wang@intel.com>; Sameh Gobriel <sameh.gobriel@intel.com>;
> > Bruce Richardson <bruce.richardson@intel.com>; Vladimir Medvedkin
> > <vladimir.medvedkin@intel.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> > Subject: [PATCH v11 21/22] hash: move rte_hash_set_alg out header
> >
> > The code for setting algorithm for hash is not at all perf sensitive,
> > and doing it inline has a couple of problems. First, it means that if
> > multiple files include the header, then the initialization gets done
> > multiple times. But also, it makes it harder to fix usage of RTE_LOG().
> >
> > Despite what the checking script say. This is not an ABI change, the
> > previous version inlined the same code; therefore both old and new code will work the
> same.
> >
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > ---
> >  lib/hash/meson.build     |  1 +
> >  lib/hash/rte_crc_arm64.h |  8 ++---
> >  lib/hash/rte_crc_x86.h   | 10 +++---
> >  lib/hash/rte_hash_crc.c  | 68
> > ++++++++++++++++++++++++++++++++++++++++
> >  lib/hash/rte_hash_crc.h  | 48 ++--------------------------
> >  lib/hash/version.map     |  7 +++++
> >  6 files changed, 88 insertions(+), 54 deletions(-)  create mode
> > 100644 lib/hash/rte_hash_crc.c
> >
> > diff --git a/lib/hash/meson.build b/lib/hash/meson.build index
> > e56ee8572564..c345c6f561fc
> > 100644
> > --- a/lib/hash/meson.build
> > +++ b/lib/hash/meson.build
> > @@ -19,6 +19,7 @@ indirect_headers += files(
> >
> >  sources = files(
> >      'rte_cuckoo_hash.c',
> > +    'rte_hash_crc.c',
> 
> I suppose this list is alphabetically ordered.
> 
> >      'rte_fbk_hash.c',
> >      'rte_thash.c',
> >      'rte_thash_gfni.c'
> <snip>
> > diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h index
> > 0249ad16c5b6..e8145ee44204 100644
> > --- a/lib/hash/rte_hash_crc.h
> > +++ b/lib/hash/rte_hash_crc.h
> > @@ -20,8 +20,6 @@ extern "C" {
> >  #include <rte_branch_prediction.h>
> >  #include <rte_common.h>
> >  #include <rte_config.h>
> > -#include <rte_cpuflags.h>
> 
> A couple of files need update with this change.
> rte_cpuflags.h should be included in rte_fbk_hash.c (for ARM) and rte_efd.c.

OK, I see the changes already there in other patches in the same series.
Please ignore this comment.
Thanks.

> 
> > -#include <rte_log.h>
> >
> >  #include "rte_crc_sw.h"
> >
> <snip>

^ permalink raw reply	[relevance 0%]

* RE: [PATCH v11 21/22] hash: move rte_hash_set_alg out header
  2023-02-22 21:55  2%   ` [PATCH v11 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
@ 2023-02-23  7:11  0%     ` Ruifeng Wang
  2023-02-23  7:27  0%       ` Ruifeng Wang
  2023-02-24  9:45  0%     ` Ruifeng Wang
  1 sibling, 1 reply; 200+ results
From: Ruifeng Wang @ 2023-02-23  7:11 UTC (permalink / raw)
  To: Stephen Hemminger, dev
  Cc: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin, nd

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Thursday, February 23, 2023 5:56 AM
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Yipeng Wang <yipeng1.wang@intel.com>;
> Sameh Gobriel <sameh.gobriel@intel.com>; Bruce Richardson <bruce.richardson@intel.com>;
> Vladimir Medvedkin <vladimir.medvedkin@intel.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> Subject: [PATCH v11 21/22] hash: move rte_hash_set_alg out header
> 
> The code for setting the algorithm for hash is not at all perf sensitive, and doing it inline
> has a couple of problems. First, it means that if multiple files include the header, then
> the initialization gets done multiple times. But also, it makes it harder to fix usage of
> RTE_LOG().
> 
> Despite what the checking script says, this is not an ABI change: the previous version
> inlined the same code; therefore both old and new code will work the same.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  lib/hash/meson.build     |  1 +
>  lib/hash/rte_crc_arm64.h |  8 ++---
>  lib/hash/rte_crc_x86.h   | 10 +++---
>  lib/hash/rte_hash_crc.c  | 68 ++++++++++++++++++++++++++++++++++++++++
>  lib/hash/rte_hash_crc.h  | 48 ++--------------------------
>  lib/hash/version.map     |  7 +++++
>  6 files changed, 88 insertions(+), 54 deletions(-)  create mode 100644
> lib/hash/rte_hash_crc.c
> 
> diff --git a/lib/hash/meson.build b/lib/hash/meson.build index e56ee8572564..c345c6f561fc
> 100644
> --- a/lib/hash/meson.build
> +++ b/lib/hash/meson.build
> @@ -19,6 +19,7 @@ indirect_headers += files(
> 
>  sources = files(
>      'rte_cuckoo_hash.c',
> +    'rte_hash_crc.c',

I suppose this list is alphabetically ordered.

>      'rte_fbk_hash.c',
>      'rte_thash.c',
>      'rte_thash_gfni.c'
<snip>
> diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h index
> 0249ad16c5b6..e8145ee44204 100644
> --- a/lib/hash/rte_hash_crc.h
> +++ b/lib/hash/rte_hash_crc.h
> @@ -20,8 +20,6 @@ extern "C" {
>  #include <rte_branch_prediction.h>
>  #include <rte_common.h>
>  #include <rte_config.h>
> -#include <rte_cpuflags.h>

A couple of files need updating with this change.
rte_cpuflags.h should be included in rte_fbk_hash.c (for ARM) and rte_efd.c.

> -#include <rte_log.h>
> 
>  #include "rte_crc_sw.h"
> 
<snip>

^ permalink raw reply	[relevance 0%]

* [PATCH v11 21/22] hash: move rte_hash_set_alg out header
  2023-02-22 21:55  2% ` [PATCH v11 00/22] Convert static log type values in libraries Stephen Hemminger
@ 2023-02-22 21:55  2%   ` Stephen Hemminger
  2023-02-23  7:11  0%     ` Ruifeng Wang
  2023-02-24  9:45  0%     ` Ruifeng Wang
  0 siblings, 2 replies; 200+ results
From: Stephen Hemminger @ 2023-02-22 21:55 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin, Ruifeng Wang

The code for setting the algorithm for hash is not at all perf sensitive,
and doing it inline has a couple of problems. First, it means that if
multiple files include the header, then the initialization gets done
multiple times. But also, it makes it harder to fix usage of RTE_LOG().
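
To illustrate the first point (simplified from the current header, not
part of this patch):

    /* Old rte_hash_crc.h, simplified: every .c file that includes it
     * gets its own copy of the state and its own init constructor.
     */
    static uint8_t crc32_alg = CRC32_SW;

    RTE_INIT(rte_hash_crc_init_alg)
    {
        rte_hash_crc_set_alg(CRC32_SSE42_x64);
    }

    /* As a result, a rte_hash_crc_set_alg() call in one file only
     * updates that file's own copy of crc32_alg.
     */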

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code; therefore both old and new code
will work the same.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build     |  1 +
 lib/hash/rte_crc_arm64.h |  8 ++---
 lib/hash/rte_crc_x86.h   | 10 +++---
 lib/hash/rte_hash_crc.c  | 68 ++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h  | 48 ++--------------------------
 lib/hash/version.map     |  7 +++++
 6 files changed, 88 insertions(+), 54 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h
index c9f52510871b..414fe065caa8 100644
--- a/lib/hash/rte_crc_arm64.h
+++ b/lib/hash/rte_crc_arm64.h
@@ -53,7 +53,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -67,7 +67,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u64(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_crc_x86.h b/lib/hash/rte_crc_x86.h
index 205bc182be77..3b865e251db2 100644
--- a/lib/hash/rte_crc_x86.h
+++ b/lib/hash/rte_crc_x86.h
@@ -67,7 +67,7 @@ crc32c_sse42_u64(uint64_t data, uint64_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -110,11 +110,11 @@ static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
 #ifdef RTE_ARCH_X86_64
-	if (likely(crc32_alg == CRC32_SSE42_x64))
+	if (likely(rte_hash_crc32_alg == CRC32_SSE42_x64))
 		return crc32c_sse42_u64(data, init_val);
 #endif
 
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u64_mimic(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..1439d8a71f6a
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
+#define RTE_LOGTYPE_HASH_CRC hash_crc_logtype
+
+uint8_t rte_hash_crc32_alg = CRC32_SW;
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	rte_hash_crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		rte_hash_crc32_alg = CRC32_SSE42;
+	else
+		rte_hash_crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		rte_hash_crc32_alg = CRC32_ARM64;
+#endif
+
+	if (rte_hash_crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e8145ee44204 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -31,7 +29,7 @@ extern "C" {
 #define CRC32_SSE42_x64     (CRC32_x64|CRC32_SSE42)
 #define CRC32_ARM64         (1U << 3)
 
-static uint8_t crc32_alg = CRC32_SW;
+extern uint8_t rte_hash_crc32_alg;
 
 #if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
 #include "rte_crc_arm64.h"
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..8b22aad5626b 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
@@ -56,3 +57,9 @@ EXPERIMENTAL {
 	rte_thash_gfni;
 	rte_thash_gfni_bulk;
 };
+
+INTERNAL {
+	global:
+
+	rte_hash_crc32_alg;
+};
-- 
2.39.1


^ permalink raw reply	[relevance 2%]

* [PATCH v11 00/22] Convert static log type values in libraries
                     ` (6 preceding siblings ...)
  2023-02-22 16:07  2% ` [PATCH v10 00/22] Convert static log type values in libraries Stephen Hemminger
@ 2023-02-22 21:55  2% ` Stephen Hemminger
  2023-02-22 21:55  2%   ` [PATCH v11 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  7 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-22 21:55 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPEs in DPDK
libraries. It starts with the easy one and goes on to the more complex ones.

There are several options on how to treat the old static types:
leave them there, mark as deprecated, or remove them.
This version removes them since there is no guarantee in current
DPDK policies that says they can't be removed.

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.

v11 - fix include check on arm cross build

v10 - add necessary rte_compat.h in thash_gfni stub for arm

v9 - fix handling of crc32 alg in lib/hash.
     make it an internal global variable.
     fix gfni stubs for case where they are not used.

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  examples/ipsecgw: replace RTE_LOGTYPE_PORT
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++---------
 examples/distributor/main.c       |  2 +-
 examples/ipsec-secgw/sa.c         |  6 +--
 examples/l3fwd-power/main.c       | 15 +++----
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  4 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/gso/rte_gso.h                 |  1 +
 lib/hash/meson.build              |  9 +++-
 lib/hash/rte_crc_arm64.h          |  8 ++--
 lib/hash/rte_crc_x86.h            | 10 ++---
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  5 +++
 lib/hash/rte_hash_crc.c           | 68 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 48 ++--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 50 +++++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 30 ++++----------
 lib/hash/version.map              | 11 +++++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  2 +
 lib/mempool/rte_mempool.h         |  8 ++++
 lib/mempool/version.map           |  3 ++
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 75 files changed, 409 insertions(+), 177 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 2%]

* [PATCH v10 21/22] hash: move rte_hash_set_alg out header
  2023-02-22 16:07  2% ` [PATCH v10 00/22] Convert static log type values in libraries Stephen Hemminger
@ 2023-02-22 16:08  2%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-02-22 16:08 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin, Ruifeng Wang

The code for setting the algorithm for hash is not at all perf sensitive,
and doing it inline has a couple of problems. First, it means that if
multiple files include the header, then the initialization gets done
multiple times. But also, it makes it harder to fix usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code; therefore both old and new code
will work the same.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build     |  1 +
 lib/hash/rte_crc_arm64.h |  8 ++---
 lib/hash/rte_crc_x86.h   | 10 +++---
 lib/hash/rte_hash_crc.c  | 68 ++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h  | 48 ++--------------------------
 lib/hash/version.map     |  7 +++++
 6 files changed, 88 insertions(+), 54 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h
index c9f52510871b..414fe065caa8 100644
--- a/lib/hash/rte_crc_arm64.h
+++ b/lib/hash/rte_crc_arm64.h
@@ -53,7 +53,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -67,7 +67,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u64(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_crc_x86.h b/lib/hash/rte_crc_x86.h
index 205bc182be77..3b865e251db2 100644
--- a/lib/hash/rte_crc_x86.h
+++ b/lib/hash/rte_crc_x86.h
@@ -67,7 +67,7 @@ crc32c_sse42_u64(uint64_t data, uint64_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -110,11 +110,11 @@ static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
 #ifdef RTE_ARCH_X86_64
-	if (likely(crc32_alg == CRC32_SSE42_x64))
+	if (likely(rte_hash_crc32_alg == CRC32_SSE42_x64))
 		return crc32c_sse42_u64(data, init_val);
 #endif
 
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u64_mimic(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..1439d8a71f6a
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
+#define RTE_LOGTYPE_HASH_CRC hash_crc_logtype
+
+uint8_t rte_hash_crc32_alg = CRC32_SW;
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	rte_hash_crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		rte_hash_crc32_alg = CRC32_SSE42;
+	else
+		rte_hash_crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		rte_hash_crc32_alg = CRC32_ARM64;
+#endif
+
+	if (rte_hash_crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e8145ee44204 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -31,7 +29,7 @@ extern "C" {
 #define CRC32_SSE42_x64     (CRC32_x64|CRC32_SSE42)
 #define CRC32_ARM64         (1U << 3)
 
-static uint8_t crc32_alg = CRC32_SW;
+extern uint8_t rte_hash_crc32_alg;
 
 #if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
 #include "rte_crc_arm64.h"
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..8b22aad5626b 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
@@ -56,3 +57,9 @@ EXPERIMENTAL {
 	rte_thash_gfni;
 	rte_thash_gfni_bulk;
 };
+
+INTERNAL {
+	global:
+
+	rte_hash_crc32_alg;
+};
-- 
2.39.1


^ permalink raw reply	[relevance 2%]

* [PATCH v10 00/22] Convert static log type values in libraries
                     ` (5 preceding siblings ...)
  2023-02-21 19:01  2% ` [PATCH v9 00/22] Convert static logtypes in libraries Stephen Hemminger
@ 2023-02-22 16:07  2% ` Stephen Hemminger
  2023-02-22 16:08  2%   ` [PATCH v10 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-22 21:55  2% ` [PATCH v11 00/22] Convert static log type values in libraries Stephen Hemminger
  7 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-22 16:07 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPEs in DPDK
libraries. It starts with the easy one and goes on to the more complex ones.

There are several options on how to treat the old static types:
leave them there, mark as deprecated, or remove them.
This version removes them since there is no guarantee in current
DPDK policies that says they can't be removed.

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.

v10 - add necessary rte_compat.h in thash_gfni stub for arm

v9 - fix handling of crc32 alg in lib/hash.
     make it an internal global variable.
     fix gfni stubs for case where they are not used.

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  examples/ipsecgw: replace RTE_LOGTYPE_PORT
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++---------
 examples/distributor/main.c       |  2 +-
 examples/ipsec-secgw/sa.c         |  6 +--
 examples/l3fwd-power/main.c       | 15 +++----
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  4 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/gso/rte_gso.h                 |  1 +
 lib/hash/meson.build              |  9 +++-
 lib/hash/rte_crc_arm64.h          |  8 ++--
 lib/hash/rte_crc_x86.h            | 10 ++---
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  5 +++
 lib/hash/rte_hash_crc.c           | 68 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 48 ++--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 50 +++++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 29 +++----------
 lib/hash/version.map              | 11 +++++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  2 +
 lib/mempool/rte_mempool.h         |  8 ++++
 lib/mempool/version.map           |  3 ++
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 75 files changed, 406 insertions(+), 179 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 2%]

* [PATCH v2] mem: fix displaying heap ID failed for heap info command
  @ 2023-02-22  7:49  4% ` Huisong Li
  0 siblings, 0 replies; 200+ results
From: Huisong Li @ 2023-02-22  7:49 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, mb, hkalra, huangdaode, fengchengwen, lihuisong

The telemetry lib has added an allowed character set for dictionary names.
Please see commit 2537fb0c5f34 ("telemetry: limit characters allowed in
dictionary names").

The space is not in this set, which causes the heap ID in /eal/heap_info
not to be displayed. Additionally, 'heap' is also misspelled (as 'Head').
So use 'Heap_id' to replace 'Head id'.
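
For illustration (sketch only, mirroring the hunk below):

    /* "Head id" contains a space, which the telemetry name check
     * rejects, so the entry never shows up in the /eal/heap_info
     * output; "Heap_id" uses only allowed characters.
     */
    rte_tel_data_add_dict_uint(d, "Head id", heap_id); /* not displayed */
    rte_tel_data_add_dict_uint(d, "Heap_id", heap_id); /* displayed */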

Fixes: e6732d0d6e26 ("mem: add telemetry infos")
Fixes: 2537fb0c5f34 ("telemetry: limit characters allowed in dictionary names")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
 -v2: add announcement in rel_notes.
---
 doc/guides/rel_notes/release_23_03.rst | 2 ++
 lib/eal/common/eal_common_memory.c     | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 49c18617a5..bdee535046 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -237,6 +237,8 @@ API Changes
 * The experimental structures ``struct rte_graph_param``, ``struct rte_graph``
   and ``struct graph`` were updated to support pcap trace in the graph library.
 
+* The ``Head ip`` in the displaying of ``/eal/heap_info`` telemetry command
+  is modified to ``Heap_id`` to ensure that it can be printed.
 
 ABI Changes
 -----------
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index c917b981bc..c2a4c8f9e7 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -1139,7 +1139,7 @@ handle_eal_heap_info_request(const char *cmd __rte_unused, const char *params,
 	malloc_heap_get_stats(heap, &sock_stats);
 
 	rte_tel_data_start_dict(d);
-	rte_tel_data_add_dict_uint(d, "Head id", heap_id);
+	rte_tel_data_add_dict_uint(d, "Heap_id", heap_id);
 	rte_tel_data_add_dict_string(d, "Name", heap->name);
 	rte_tel_data_add_dict_uint(d, "Heap_size",
 				   sock_stats.heap_totalsz_bytes);
-- 
2.33.0


^ permalink raw reply	[relevance 4%]

* [PATCH v9 21/22] hash: move rte_hash_set_alg out header
  2023-02-21 19:01  2% ` [PATCH v9 00/22] Convert static logtypes in libraries Stephen Hemminger
@ 2023-02-21 19:02  2%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-02-21 19:02 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin, Ruifeng Wang

The code for setting the algorithm for hash is not at all perf sensitive,
and doing it inline has a couple of problems. First, it means that if
multiple files include the header, then the initialization gets done
multiple times. But also, it makes it harder to fix usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code; therefore both old and new code
will work the same.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build     |  1 +
 lib/hash/rte_crc_arm64.h |  8 ++---
 lib/hash/rte_crc_x86.h   | 10 +++---
 lib/hash/rte_hash_crc.c  | 68 ++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h  | 48 ++--------------------------
 lib/hash/version.map     |  7 +++++
 6 files changed, 88 insertions(+), 54 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h
index c9f52510871b..414fe065caa8 100644
--- a/lib/hash/rte_crc_arm64.h
+++ b/lib/hash/rte_crc_arm64.h
@@ -53,7 +53,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -67,7 +67,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_ARM64))
+	if (likely(rte_hash_crc32_alg & CRC32_ARM64))
 		return crc32c_arm64_u64(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_crc_x86.h b/lib/hash/rte_crc_x86.h
index 205bc182be77..3b865e251db2 100644
--- a/lib/hash/rte_crc_x86.h
+++ b/lib/hash/rte_crc_x86.h
@@ -67,7 +67,7 @@ crc32c_sse42_u64(uint64_t data, uint64_t init_val)
 static inline uint32_t
 rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u8(data, init_val);
 
 	return crc32c_1byte(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u16(data, init_val);
 
 	return crc32c_2bytes(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
 static inline uint32_t
 rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
 {
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u32(data, init_val);
 
 	return crc32c_1word(data, init_val);
@@ -110,11 +110,11 @@ static inline uint32_t
 rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
 {
 #ifdef RTE_ARCH_X86_64
-	if (likely(crc32_alg == CRC32_SSE42_x64))
+	if (likely(rte_hash_crc32_alg == CRC32_SSE42_x64))
 		return crc32c_sse42_u64(data, init_val);
 #endif
 
-	if (likely(crc32_alg & CRC32_SSE42))
+	if (likely(rte_hash_crc32_alg & CRC32_SSE42))
 		return crc32c_sse42_u64_mimic(data, init_val);
 
 	return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..1439d8a71f6a
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
+#define RTE_LOGTYPE_HASH_CRC hash_crc_logtype
+
+uint8_t rte_hash_crc32_alg = CRC32_SW;
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	rte_hash_crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		rte_hash_crc32_alg = CRC32_SSE42;
+	else
+		rte_hash_crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		rte_hash_crc32_alg = CRC32_ARM64;
+#endif
+
+	if (rte_hash_crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH_CRC,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e8145ee44204 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -31,7 +29,7 @@ extern "C" {
 #define CRC32_SSE42_x64     (CRC32_x64|CRC32_SSE42)
 #define CRC32_ARM64         (1U << 3)
 
-static uint8_t crc32_alg = CRC32_SW;
+extern uint8_t rte_hash_crc32_alg;
 
 #if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
 #include "rte_crc_arm64.h"
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..8b22aad5626b 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
@@ -56,3 +57,9 @@ EXPERIMENTAL {
 	rte_thash_gfni;
 	rte_thash_gfni_bulk;
 };
+
+INTERNAL {
+	global:
+
+	rte_hash_crc32_alg;
+};
-- 
2.39.1
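
A quick usage sketch (not part of the patch above): with the algorithm
selection now living in rte_hash_crc.c, applications keep using the same
public API. Calling rte_hash_crc_set_alg() is optional, because the
RTE_INIT constructor already picks the best available algorithm; it is
shown here only for illustration.

#include <rte_hash_crc.h>

static uint32_t
hash_buffer(const void *buf, uint32_t len)
{
	/* Optional: explicitly request the 64-bit SSE4.2 path; the call
	 * falls back (with a warning) on platforms that cannot honour it.
	 */
	rte_hash_crc_set_alg(CRC32_SSE42_x64);

	/* 0xffffffff is just an arbitrary initial seed for the CRC. */
	return rte_hash_crc(buf, len, 0xffffffff);
}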


^ permalink raw reply	[relevance 2%]

* [PATCH v9 00/22] Convert static logtypes in libraries
                     ` (4 preceding siblings ...)
  2023-02-20 23:35  3% ` [PATCH v8 00/22] Convert static logtypes in libraries Stephen Hemminger
@ 2023-02-21 19:01  2% ` Stephen Hemminger
  2023-02-21 19:02  2%   ` [PATCH v9 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-22 16:07  2% ` [PATCH v10 00/22] Convert static log type values in libraries Stephen Hemminger
  2023-02-22 21:55  2% ` [PATCH v11 00/22] Convert static log type values in libraries Stephen Hemminger
  7 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-21 19:01 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPEs in DPDK
libraries. It starts with the easy ones and goes on to the more complex
ones.

There are several options for how to treat the old static types: leave
them in place, mark them as deprecated, or remove them. This version
removes them, since nothing in current DPDK policy guarantees that they
cannot be removed.
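
For reference, the conversion each patch applies looks roughly like the
snippet below (names are illustrative; the exact suffix and default log
level differ per library):

/* Before: a fixed RTE_LOGTYPE_* constant from rte_log.h. */
RTE_LOG(ERR, HASH, "something went wrong\n");

/* After: the library registers a dynamic logtype at load time and maps
 * an internal name onto it, so existing RTE_LOG() call sites only need
 * the new logtype name.
 */
RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
#define RTE_LOGTYPE_HASH_CRC hash_crc_logtype

RTE_LOG(ERR, HASH_CRC, "something went wrong\n");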

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.

v9 - fix handling of crc32 alg in lib/hash.
     make it an internal global variable.
     fix gfni stubs for case where they are not used.

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  examples/ipsecgw: replace RTE_LOGTYPE_PORT
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++---------
 examples/distributor/main.c       |  2 +-
 examples/ipsec-secgw/sa.c         |  6 +--
 examples/l3fwd-power/main.c       | 15 +++----
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  4 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/gso/rte_gso.h                 |  1 +
 lib/hash/meson.build              |  9 +++-
 lib/hash/rte_crc_arm64.h          |  8 ++--
 lib/hash/rte_crc_x86.h            | 10 ++---
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  5 +++
 lib/hash/rte_hash_crc.c           | 68 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 48 ++--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 50 +++++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 28 +++----------
 lib/hash/version.map              | 11 +++++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  2 +
 lib/mempool/rte_mempool.h         |  8 ++++
 lib/mempool/version.map           |  3 ++
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 75 files changed, 405 insertions(+), 179 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 2%]

* Re: [PATCH v8 21/22] hash: move rte_hash_set_alg out header
  2023-02-20 23:35  3%   ` [PATCH v8 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
@ 2023-02-21 15:02  0%     ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-02-21 15:02 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

On Tue, Feb 21, 2023 at 12:38 AM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> The code for setting algorithm for hash is not at all perf sensitive,
> and doing it inline has a couple of problems. First, it means that if
> multiple files include the header, then the initialization gets done
> multiple times. But also, it makes it harder to fix usage of RTE_LOG().
>
> Despite what the checking script says, this is not an ABI change: the
> previous version inlined the same code; therefore both old and new code
> will work the same.

I suppose you are referring to:
http://mails.dpdk.org/archives/test-report/2023-February/356872.html
ERROR: symbol rte_hash_crc_set_alg is added in the DPDK_23 section,
but is expected to be added in the EXPERIMENTAL section of the version
map

I agree that this is irrelevant and can be ignored in this particular case.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* [PATCH v2 2/2] net/nfp: modify RSS's processing logic
  @ 2023-02-21  3:55  3%     ` Chaoyong He
  0 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-02-21  3:55 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, Chaoyong He

From: Long Wu <long.wu@corigine.com>

The initial logic only supports the single-type metadata; this
commit adds support for chained-type metadata. This commit also
makes the relation between the RSS capability (v1/v2) and these
two types of metadata clearer.

Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
 drivers/net/nfp/nfp_common.c    |  23 +++++++
 drivers/net/nfp/nfp_common.h    |   7 +++
 drivers/net/nfp/nfp_ctrl.h      |  18 +++++-
 drivers/net/nfp/nfp_ethdev.c    |   7 +--
 drivers/net/nfp/nfp_ethdev_vf.c |   7 +--
 drivers/net/nfp/nfp_rxtx.c      | 108 ++++++++++++++++++++------------
 6 files changed, 121 insertions(+), 49 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a545a10013..a1e37ada11 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -1584,6 +1584,29 @@ nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
 	return 0;
 }
 
+void
+nfp_net_init_metadata_format(struct nfp_net_hw *hw)
+{
+	/*
+	 * ABI 4.x and ctrl vNIC always use chained metadata, in other cases we allow use of
+	 * single metadata if only RSS(v1) is supported by hw capability, and RSS(v2)
+	 * also indicate that we are using chained metadata.
+	 */
+	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) == 4) {
+		hw->meta_format = NFP_NET_METAFORMAT_CHAINED;
+	} else if ((hw->cap & NFP_NET_CFG_CTRL_CHAIN_META) != 0) {
+		hw->meta_format = NFP_NET_METAFORMAT_CHAINED;
+		/*
+		 * RSS is incompatible with chained metadata. hw->cap just represents
+		 * firmware's ability rather than the firmware's configuration. We decide
+		 * to reduce the confusion to allow us can use hw->cap to identify RSS later.
+		 */
+		hw->cap &= ~NFP_NET_CFG_CTRL_RSS;
+	} else {
+		hw->meta_format = NFP_NET_METAFORMAT_SINGLE;
+	}
+}
+
 /*
  * Local variables:
  * c-file-style: "Linux"
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 980f3cad89..d33675eb99 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -127,6 +127,11 @@ enum nfp_qcp_ptr {
 	NFP_QCP_WRITE_PTR
 };
 
+enum nfp_net_meta_format {
+	NFP_NET_METAFORMAT_SINGLE,
+	NFP_NET_METAFORMAT_CHAINED,
+};
+
 struct nfp_pf_dev {
 	/* Backpointer to associated pci device */
 	struct rte_pci_device *pci_dev;
@@ -203,6 +208,7 @@ struct nfp_net_hw {
 	uint32_t max_mtu;
 	uint32_t mtu;
 	uint32_t rx_offset;
+	enum nfp_net_meta_format meta_format;
 
 	/* Current values for control */
 	uint32_t ctrl;
@@ -455,6 +461,7 @@ int nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
 		uint16_t *min_tx_desc,
 		uint16_t *max_tx_desc);
 int nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name);
+void nfp_net_init_metadata_format(struct nfp_net_hw *hw);
 
 #define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\
 	(&((struct nfp_net_adapter *)adapter)->hw)
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 1069ff9485..bdc39f8974 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -110,6 +110,7 @@
 #define   NFP_NET_CFG_CTRL_MSIX_TX_OFF    (0x1 << 26) /* Disable MSIX for TX */
 #define   NFP_NET_CFG_CTRL_LSO2           (0x1 << 28) /* LSO/TSO (version 2) */
 #define   NFP_NET_CFG_CTRL_RSS2           (0x1 << 29) /* RSS (version 2) */
+#define   NFP_NET_CFG_CTRL_CSUM_COMPLETE  (0x1 << 30) /* Checksum complete */
 #define   NFP_NET_CFG_CTRL_LIVE_ADDR      (0x1U << 31)/* live MAC addr change */
 #define NFP_NET_CFG_UPDATE              0x0004
 #define   NFP_NET_CFG_UPDATE_GEN          (0x1 <<  0) /* General update */
@@ -135,6 +136,8 @@
 #define NFP_NET_CFG_CTRL_LSO_ANY (NFP_NET_CFG_CTRL_LSO | NFP_NET_CFG_CTRL_LSO2)
 #define NFP_NET_CFG_CTRL_RSS_ANY (NFP_NET_CFG_CTRL_RSS | NFP_NET_CFG_CTRL_RSS2)
 
+#define NFP_NET_CFG_CTRL_CHAIN_META (NFP_NET_CFG_CTRL_RSS2 | \
+					NFP_NET_CFG_CTRL_CSUM_COMPLETE)
 /*
  * Read-only words (0x0030 - 0x0050):
  * @NFP_NET_CFG_VERSION:     Firmware version number
@@ -218,7 +221,7 @@
 
 /*
  * RSS configuration (0x0100 - 0x01ac):
- * Used only when NFP_NET_CFG_CTRL_RSS is enabled
+ * Used only when NFP_NET_CFG_CTRL_RSS_ANY is enabled
  * @NFP_NET_CFG_RSS_CFG:     RSS configuration word
  * @NFP_NET_CFG_RSS_KEY:     RSS "secret" key
  * @NFP_NET_CFG_RSS_ITBL:    RSS indirection table
@@ -334,6 +337,19 @@
 /* PF multiport offset */
 #define NFP_PF_CSR_SLICE_SIZE	(32 * 1024)
 
+/*
+ * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability
+ * @hw_cap: The firmware's capabilities
+ */
+static inline uint32_t
+nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
+{
+	if ((hw_cap & NFP_NET_CFG_CTRL_RSS2) != 0)
+		return NFP_NET_CFG_CTRL_RSS2;
+
+	return NFP_NET_CFG_CTRL_RSS;
+}
+
 #endif /* _NFP_CTRL_H_ */
 /*
  * Local variables:
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index fed7b1ab13..47d5dff16c 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -134,10 +134,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
-		if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
-		else
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS;
+		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
 	}
 
 	/* Enable device */
@@ -611,6 +608,8 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
+	nfp_net_init_metadata_format(hw);
+
 	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c1f8a0fa0f..7834b2ee0c 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -95,10 +95,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
-		if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
-		else
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS;
+		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
 	}
 
 	/* Enable device */
@@ -373,6 +370,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
+	nfp_net_init_metadata_format(hw);
+
 	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 17a04cec5e..1c5a230145 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -116,26 +116,18 @@ nfp_net_rx_queue_count(void *rx_queue)
 	return count;
 }
 
-/* nfp_net_parse_meta() - Parse the metadata from packet */
-static void
-nfp_net_parse_meta(struct nfp_meta_parsed *meta,
-		struct nfp_net_rx_desc *rxd,
-		struct nfp_net_rxq *rxq,
-		struct rte_mbuf *mbuf)
+/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */
+static bool
+nfp_net_parse_chained_meta(uint8_t *meta_base,
+		rte_be32_t meta_header,
+		struct nfp_meta_parsed *meta)
 {
+	uint8_t *meta_offset;
 	uint32_t meta_info;
 	uint32_t vlan_info;
-	uint8_t *meta_offset;
-	struct nfp_net_hw *hw = rxq->hw;
 
-	if (unlikely((NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2) ||
-			NFP_DESC_META_LEN(rxd) == 0))
-		return;
-
-	meta_offset = rte_pktmbuf_mtod(mbuf, uint8_t *);
-	meta_offset -= NFP_DESC_META_LEN(rxd);
-	meta_info = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset);
-	meta_offset += 4;
+	meta_info = rte_be_to_cpu_32(meta_header);
+	meta_offset = meta_base + 4;
 
 	for (; meta_info != 0; meta_info >>= NFP_NET_META_FIELD_SIZE, meta_offset += 4) {
 		switch (meta_info & NFP_NET_META_FIELD_MASK) {
@@ -157,9 +149,11 @@ nfp_net_parse_meta(struct nfp_meta_parsed *meta,
 			break;
 		default:
 			/* Unsupported metadata can be a performance issue */
-			return;
+			return false;
 		}
 	}
+
+	return true;
 }
 
 /*
@@ -170,33 +164,18 @@ nfp_net_parse_meta(struct nfp_meta_parsed *meta,
  */
 static void
 nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
-		struct nfp_net_rx_desc *rxd,
 		struct nfp_net_rxq *rxq,
 		struct rte_mbuf *mbuf)
 {
-	uint32_t hash;
-	uint32_t hash_type;
 	struct nfp_net_hw *hw = rxq->hw;
 
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return;
 
-	if (likely((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0 &&
-			NFP_DESC_META_LEN(rxd) != 0)) {
-		hash = meta->hash;
-		hash_type = meta->hash_type;
-	} else {
-		if ((rxd->rxd.flags & PCIE_DESC_RX_RSS) == 0)
-			return;
-
-		hash = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_OFFSET);
-		hash_type = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_TYPE_OFFSET);
-	}
-
-	mbuf->hash.rss = hash;
+	mbuf->hash.rss = meta->hash;
 	mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
-	switch (hash_type) {
+	switch (meta->hash_type) {
 	case NFP_NET_RSS_IPV4:
 		mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV4;
 		break;
@@ -223,6 +202,21 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 	}
 }
 
+/*
+ * nfp_net_parse_single_meta() - Parse the single metadata
+ *
+ * The RSS hash and hash-type are prepended to the packet data.
+ * Get it from metadata area.
+ */
+static inline void
+nfp_net_parse_single_meta(uint8_t *meta_base,
+		rte_be32_t meta_header,
+		struct nfp_meta_parsed *meta)
+{
+	meta->hash_type = rte_be_to_cpu_32(meta_header);
+	meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4));
+}
+
 /*
  * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info
  *
@@ -304,6 +298,45 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 	mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 }
 
+/* nfp_net_parse_meta() - Parse the metadata from packet */
+static void
+nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
+		struct nfp_net_rxq *rxq,
+		struct nfp_net_hw *hw,
+		struct rte_mbuf *mb)
+{
+	uint8_t *meta_base;
+	rte_be32_t meta_header;
+	struct nfp_meta_parsed meta = {};
+
+	if (unlikely(NFP_DESC_META_LEN(rxds) == 0))
+		return;
+
+	meta_base = rte_pktmbuf_mtod(mb, uint8_t *);
+	meta_base -= NFP_DESC_META_LEN(rxds);
+	meta_header = *(rte_be32_t *)meta_base;
+
+	switch (hw->meta_format) {
+	case NFP_NET_METAFORMAT_CHAINED:
+		if (nfp_net_parse_chained_meta(meta_base, meta_header, &meta)) {
+			nfp_net_parse_meta_hash(&meta, rxq, mb);
+			nfp_net_parse_meta_vlan(&meta, rxds, rxq, mb);
+			nfp_net_parse_meta_qinq(&meta, rxq, mb);
+		} else {
+			PMD_RX_LOG(DEBUG, "RX chained metadata format is wrong!");
+		}
+		break;
+	case NFP_NET_METAFORMAT_SINGLE:
+		if ((rxds->rxd.flags & PCIE_DESC_RX_RSS) != 0) {
+			nfp_net_parse_single_meta(meta_base, meta_header, &meta);
+			nfp_net_parse_meta_hash(&meta, rxq, mb);
+		}
+		break;
+	default:
+		PMD_RX_LOG(DEBUG, "RX metadata do not exist.");
+	}
+}
+
 /*
  * RX path design:
  *
@@ -341,7 +374,6 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	struct nfp_net_hw *hw;
 	struct rte_mbuf *mb;
 	struct rte_mbuf *new_mb;
-	struct nfp_meta_parsed meta;
 	uint16_t nb_hold;
 	uint64_t dma_addr;
 	uint16_t avail;
@@ -437,11 +469,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		mb->next = NULL;
 		mb->port = rxq->port_id;
 
-		memset(&meta, 0, sizeof(meta));
-		nfp_net_parse_meta(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_hash(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_vlan(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_qinq(&meta, rxq, mb);
+		nfp_net_parse_meta(rxds, rxq, hw, mb);
 
 		/* Checking the checksum flag */
 		nfp_net_rx_cksum(rxq, rxds, mb);
-- 
2.29.3
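
A side note on the nfp_net_start() hunk above: the RSS branch is only
taken when the application asks for RSS at configure time. A minimal,
hypothetical application-side sketch (function name and hash fields
chosen only for illustration):

#include <rte_ethdev.h>

/* Request RSS so that the PMD programs NFP_NET_CFG_CTRL_RSS or _RSS2,
 * whichever the firmware capability (hw->cap) allows.
 */
static int
configure_port_with_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL,          /* PMD default key */
				.rss_hf = RTE_ETH_RSS_IP, /* hash on IP headers */
			},
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}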


^ permalink raw reply	[relevance 3%]

* [PATCH v2 2/2] net/nfp: modify RSS's processing logic
  @ 2023-02-21  3:29  3%   ` Chaoyong He
    1 sibling, 0 replies; 200+ results
From: Chaoyong He @ 2023-02-21  3:29 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, Chaoyong He

From: Long Wu <long.wu@corigine.com>

The initial logic only supports the single-type metadata; this
commit adds support for chained-type metadata. This commit also
makes the relation between the RSS capability (v1/v2) and these
two types of metadata clearer.

Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
 drivers/net/nfp/nfp_common.c    |  23 +++++++
 drivers/net/nfp/nfp_common.h    |   7 +++
 drivers/net/nfp/nfp_ctrl.h      |  18 +++++-
 drivers/net/nfp/nfp_ethdev.c    |   7 +--
 drivers/net/nfp/nfp_ethdev_vf.c |   7 +--
 drivers/net/nfp/nfp_rxtx.c      | 108 ++++++++++++++++++++------------
 6 files changed, 121 insertions(+), 49 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a545a10013..a1e37ada11 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -1584,6 +1584,29 @@ nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
 	return 0;
 }
 
+void
+nfp_net_init_metadata_format(struct nfp_net_hw *hw)
+{
+	/*
+	 * ABI 4.x and ctrl vNIC always use chained metadata, in other cases we allow use of
+	 * single metadata if only RSS(v1) is supported by hw capability, and RSS(v2)
+	 * also indicate that we are using chained metadata.
+	 */
+	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) == 4) {
+		hw->meta_format = NFP_NET_METAFORMAT_CHANINED;
+	} else if ((hw->cap & NFP_NET_CFG_CTRL_CHAIN_META) != 0) {
+		hw->meta_format = NFP_NET_METAFORMAT_CHANINED;
+		/*
+		 * RSS is incompatible with chained metadata. hw->cap just represents
+		 * firmware's ability rather than the firmware's configuration. We decide
+		 * to reduce the confusion to allow us can use hw->cap to identify RSS later.
+		 */
+		hw->cap &= ~NFP_NET_CFG_CTRL_RSS;
+	} else {
+		hw->meta_format = NFP_NET_METAFORMAT_SINGLE;
+	}
+}
+
 /*
  * Local variables:
  * c-file-style: "Linux"
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 980f3cad89..d33675eb99 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -127,6 +127,11 @@ enum nfp_qcp_ptr {
 	NFP_QCP_WRITE_PTR
 };
 
+enum nfp_net_meta_format {
+	NFP_NET_METAFORMAT_SINGLE,
+	NFP_NET_METAFORMAT_CHANINED,
+};
+
 struct nfp_pf_dev {
 	/* Backpointer to associated pci device */
 	struct rte_pci_device *pci_dev;
@@ -203,6 +208,7 @@ struct nfp_net_hw {
 	uint32_t max_mtu;
 	uint32_t mtu;
 	uint32_t rx_offset;
+	enum nfp_net_meta_format meta_format;
 
 	/* Current values for control */
 	uint32_t ctrl;
@@ -455,6 +461,7 @@ int nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
 		uint16_t *min_tx_desc,
 		uint16_t *max_tx_desc);
 int nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name);
+void nfp_net_init_metadata_format(struct nfp_net_hw *hw);
 
 #define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\
 	(&((struct nfp_net_adapter *)adapter)->hw)
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 1069ff9485..bdc39f8974 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -110,6 +110,7 @@
 #define   NFP_NET_CFG_CTRL_MSIX_TX_OFF    (0x1 << 26) /* Disable MSIX for TX */
 #define   NFP_NET_CFG_CTRL_LSO2           (0x1 << 28) /* LSO/TSO (version 2) */
 #define   NFP_NET_CFG_CTRL_RSS2           (0x1 << 29) /* RSS (version 2) */
+#define   NFP_NET_CFG_CTRL_CSUM_COMPLETE  (0x1 << 30) /* Checksum complete */
 #define   NFP_NET_CFG_CTRL_LIVE_ADDR      (0x1U << 31)/* live MAC addr change */
 #define NFP_NET_CFG_UPDATE              0x0004
 #define   NFP_NET_CFG_UPDATE_GEN          (0x1 <<  0) /* General update */
@@ -135,6 +136,8 @@
 #define NFP_NET_CFG_CTRL_LSO_ANY (NFP_NET_CFG_CTRL_LSO | NFP_NET_CFG_CTRL_LSO2)
 #define NFP_NET_CFG_CTRL_RSS_ANY (NFP_NET_CFG_CTRL_RSS | NFP_NET_CFG_CTRL_RSS2)
 
+#define NFP_NET_CFG_CTRL_CHAIN_META (NFP_NET_CFG_CTRL_RSS2 | \
+					NFP_NET_CFG_CTRL_CSUM_COMPLETE)
 /*
  * Read-only words (0x0030 - 0x0050):
  * @NFP_NET_CFG_VERSION:     Firmware version number
@@ -218,7 +221,7 @@
 
 /*
  * RSS configuration (0x0100 - 0x01ac):
- * Used only when NFP_NET_CFG_CTRL_RSS is enabled
+ * Used only when NFP_NET_CFG_CTRL_RSS_ANY is enabled
  * @NFP_NET_CFG_RSS_CFG:     RSS configuration word
  * @NFP_NET_CFG_RSS_KEY:     RSS "secret" key
  * @NFP_NET_CFG_RSS_ITBL:    RSS indirection table
@@ -334,6 +337,19 @@
 /* PF multiport offset */
 #define NFP_PF_CSR_SLICE_SIZE	(32 * 1024)
 
+/*
+ * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability
+ * @hw_cap: The firmware's capabilities
+ */
+static inline uint32_t
+nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
+{
+	if ((hw_cap & NFP_NET_CFG_CTRL_RSS2) != 0)
+		return NFP_NET_CFG_CTRL_RSS2;
+
+	return NFP_NET_CFG_CTRL_RSS;
+}
+
 #endif /* _NFP_CTRL_H_ */
 /*
  * Local variables:
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index fed7b1ab13..47d5dff16c 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -134,10 +134,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
-		if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
-		else
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS;
+		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
 	}
 
 	/* Enable device */
@@ -611,6 +608,8 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
+	nfp_net_init_metadata_format(hw);
+
 	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c1f8a0fa0f..7834b2ee0c 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -95,10 +95,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
-		if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
-		else
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS;
+		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
 	}
 
 	/* Enable device */
@@ -373,6 +370,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
+	nfp_net_init_metadata_format(hw);
+
 	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 17a04cec5e..1c5a230145 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -116,26 +116,18 @@ nfp_net_rx_queue_count(void *rx_queue)
 	return count;
 }
 
-/* nfp_net_parse_meta() - Parse the metadata from packet */
-static void
-nfp_net_parse_meta(struct nfp_meta_parsed *meta,
-		struct nfp_net_rx_desc *rxd,
-		struct nfp_net_rxq *rxq,
-		struct rte_mbuf *mbuf)
+/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */
+static bool
+nfp_net_parse_chained_meta(uint8_t *meta_base,
+		rte_be32_t meta_header,
+		struct nfp_meta_parsed *meta)
 {
+	uint8_t *meta_offset;
 	uint32_t meta_info;
 	uint32_t vlan_info;
-	uint8_t *meta_offset;
-	struct nfp_net_hw *hw = rxq->hw;
 
-	if (unlikely((NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2) ||
-			NFP_DESC_META_LEN(rxd) == 0))
-		return;
-
-	meta_offset = rte_pktmbuf_mtod(mbuf, uint8_t *);
-	meta_offset -= NFP_DESC_META_LEN(rxd);
-	meta_info = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset);
-	meta_offset += 4;
+	meta_info = rte_be_to_cpu_32(meta_header);
+	meta_offset = meta_base + 4;
 
 	for (; meta_info != 0; meta_info >>= NFP_NET_META_FIELD_SIZE, meta_offset += 4) {
 		switch (meta_info & NFP_NET_META_FIELD_MASK) {
@@ -157,9 +149,11 @@ nfp_net_parse_meta(struct nfp_meta_parsed *meta,
 			break;
 		default:
 			/* Unsupported metadata can be a performance issue */
-			return;
+			return false;
 		}
 	}
+
+	return true;
 }
 
 /*
@@ -170,33 +164,18 @@ nfp_net_parse_meta(struct nfp_meta_parsed *meta,
  */
 static void
 nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
-		struct nfp_net_rx_desc *rxd,
 		struct nfp_net_rxq *rxq,
 		struct rte_mbuf *mbuf)
 {
-	uint32_t hash;
-	uint32_t hash_type;
 	struct nfp_net_hw *hw = rxq->hw;
 
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return;
 
-	if (likely((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0 &&
-			NFP_DESC_META_LEN(rxd) != 0)) {
-		hash = meta->hash;
-		hash_type = meta->hash_type;
-	} else {
-		if ((rxd->rxd.flags & PCIE_DESC_RX_RSS) == 0)
-			return;
-
-		hash = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_OFFSET);
-		hash_type = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_TYPE_OFFSET);
-	}
-
-	mbuf->hash.rss = hash;
+	mbuf->hash.rss = meta->hash;
 	mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
-	switch (hash_type) {
+	switch (meta->hash_type) {
 	case NFP_NET_RSS_IPV4:
 		mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV4;
 		break;
@@ -223,6 +202,21 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 	}
 }
 
+/*
+ * nfp_net_parse_single_meta() - Parse the single metadata
+ *
+ * The RSS hash and hash-type are prepended to the packet data.
+ * Get it from metadata area.
+ */
+static inline void
+nfp_net_parse_single_meta(uint8_t *meta_base,
+		rte_be32_t meta_header,
+		struct nfp_meta_parsed *meta)
+{
+	meta->hash_type = rte_be_to_cpu_32(meta_header);
+	meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4));
+}
+
 /*
  * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info
  *
@@ -304,6 +298,45 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 	mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 }
 
+/* nfp_net_parse_meta() - Parse the metadata from packet */
+static void
+nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
+		struct nfp_net_rxq *rxq,
+		struct nfp_net_hw *hw,
+		struct rte_mbuf *mb)
+{
+	uint8_t *meta_base;
+	rte_be32_t meta_header;
+	struct nfp_meta_parsed meta = {};
+
+	if (unlikely(NFP_DESC_META_LEN(rxds) == 0))
+		return;
+
+	meta_base = rte_pktmbuf_mtod(mb, uint8_t *);
+	meta_base -= NFP_DESC_META_LEN(rxds);
+	meta_header = *(rte_be32_t *)meta_base;
+
+	switch (hw->meta_format) {
+	case NFP_NET_METAFORMAT_CHANINED:
+		if (nfp_net_parse_chained_meta(meta_base, meta_header, &meta)) {
+			nfp_net_parse_meta_hash(&meta, rxq, mb);
+			nfp_net_parse_meta_vlan(&meta, rxds, rxq, mb);
+			nfp_net_parse_meta_qinq(&meta, rxq, mb);
+		} else {
+			PMD_RX_LOG(DEBUG, "RX chained metadata format is wrong!");
+		}
+		break;
+	case NFP_NET_METAFORMAT_SINGLE:
+		if ((rxds->rxd.flags & PCIE_DESC_RX_RSS) != 0) {
+			nfp_net_parse_single_meta(meta_base, meta_header, &meta);
+			nfp_net_parse_meta_hash(&meta, rxq, mb);
+		}
+		break;
+	default:
+		PMD_RX_LOG(DEBUG, "RX metadata do not exist.");
+	}
+}
+
 /*
  * RX path design:
  *
@@ -341,7 +374,6 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	struct nfp_net_hw *hw;
 	struct rte_mbuf *mb;
 	struct rte_mbuf *new_mb;
-	struct nfp_meta_parsed meta;
 	uint16_t nb_hold;
 	uint64_t dma_addr;
 	uint16_t avail;
@@ -437,11 +469,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		mb->next = NULL;
 		mb->port = rxq->port_id;
 
-		memset(&meta, 0, sizeof(meta));
-		nfp_net_parse_meta(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_hash(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_vlan(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_qinq(&meta, rxq, mb);
+		nfp_net_parse_meta(rxds, rxq, hw, mb);
 
 		/* Checking the checksum flag */
 		nfp_net_rx_cksum(rxq, rxds, mb);
-- 
2.29.3


^ permalink raw reply	[relevance 3%]

* [PATCH 2/2] net/nfp: modify RSS's processing logic
  @ 2023-02-21  3:10  3% ` Chaoyong He
    1 sibling, 0 replies; 200+ results
From: Chaoyong He @ 2023-02-21  3:10 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, Chaoyong He

From: Long Wu <long.wu@corigine.com>

The initial logic only supports the single-type metadata; this
commit adds support for chained-type metadata. This commit also
makes the relation between the RSS capability (v1/v2) and these
two types of metadata clearer.

Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
 drivers/net/nfp/nfp_common.c    |  23 +++++++
 drivers/net/nfp/nfp_common.h    |   7 +++
 drivers/net/nfp/nfp_ctrl.h      |  18 +++++-
 drivers/net/nfp/nfp_ethdev.c    |   7 +--
 drivers/net/nfp/nfp_ethdev_vf.c |   7 +--
 drivers/net/nfp/nfp_rxtx.c      | 108 ++++++++++++++++++++------------
 6 files changed, 121 insertions(+), 49 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a545a10013..a1e37ada11 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -1584,6 +1584,29 @@ nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
 	return 0;
 }
 
+void
+nfp_net_init_metadata_format(struct nfp_net_hw *hw)
+{
+	/*
+	 * ABI 4.x and ctrl vNIC always use chained metadata, in other cases we allow use of
+	 * single metadata if only RSS(v1) is supported by hw capability, and RSS(v2)
+	 * also indicate that we are using chained metadata.
+	 */
+	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) == 4) {
+		hw->meta_format = NFP_NET_METAFORMAT_CHANINED;
+	} else if ((hw->cap & NFP_NET_CFG_CTRL_CHAIN_META) != 0) {
+		hw->meta_format = NFP_NET_METAFORMAT_CHANINED;
+		/*
+		 * RSS is incompatible with chained metadata. hw->cap just represents
+		 * firmware's ability rather than the firmware's configuration. We decide
+		 * to reduce the confusion to allow us can use hw->cap to identify RSS later.
+		 */
+		hw->cap &= ~NFP_NET_CFG_CTRL_RSS;
+	} else {
+		hw->meta_format = NFP_NET_METAFORMAT_SINGLE;
+	}
+}
+
 /*
  * Local variables:
  * c-file-style: "Linux"
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 980f3cad89..d33675eb99 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -127,6 +127,11 @@ enum nfp_qcp_ptr {
 	NFP_QCP_WRITE_PTR
 };
 
+enum nfp_net_meta_format {
+	NFP_NET_METAFORMAT_SINGLE,
+	NFP_NET_METAFORMAT_CHANINED,
+};
+
 struct nfp_pf_dev {
 	/* Backpointer to associated pci device */
 	struct rte_pci_device *pci_dev;
@@ -203,6 +208,7 @@ struct nfp_net_hw {
 	uint32_t max_mtu;
 	uint32_t mtu;
 	uint32_t rx_offset;
+	enum nfp_net_meta_format meta_format;
 
 	/* Current values for control */
 	uint32_t ctrl;
@@ -455,6 +461,7 @@ int nfp_net_tx_desc_limits(struct nfp_net_hw *hw,
 		uint16_t *min_tx_desc,
 		uint16_t *max_tx_desc);
 int nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name);
+void nfp_net_init_metadata_format(struct nfp_net_hw *hw);
 
 #define NFP_NET_DEV_PRIVATE_TO_HW(adapter)\
 	(&((struct nfp_net_adapter *)adapter)->hw)
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 1069ff9485..bdc39f8974 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -110,6 +110,7 @@
 #define   NFP_NET_CFG_CTRL_MSIX_TX_OFF    (0x1 << 26) /* Disable MSIX for TX */
 #define   NFP_NET_CFG_CTRL_LSO2           (0x1 << 28) /* LSO/TSO (version 2) */
 #define   NFP_NET_CFG_CTRL_RSS2           (0x1 << 29) /* RSS (version 2) */
+#define   NFP_NET_CFG_CTRL_CSUM_COMPLETE  (0x1 << 30) /* Checksum complete */
 #define   NFP_NET_CFG_CTRL_LIVE_ADDR      (0x1U << 31)/* live MAC addr change */
 #define NFP_NET_CFG_UPDATE              0x0004
 #define   NFP_NET_CFG_UPDATE_GEN          (0x1 <<  0) /* General update */
@@ -135,6 +136,8 @@
 #define NFP_NET_CFG_CTRL_LSO_ANY (NFP_NET_CFG_CTRL_LSO | NFP_NET_CFG_CTRL_LSO2)
 #define NFP_NET_CFG_CTRL_RSS_ANY (NFP_NET_CFG_CTRL_RSS | NFP_NET_CFG_CTRL_RSS2)
 
+#define NFP_NET_CFG_CTRL_CHAIN_META (NFP_NET_CFG_CTRL_RSS2 | \
+					NFP_NET_CFG_CTRL_CSUM_COMPLETE)
 /*
  * Read-only words (0x0030 - 0x0050):
  * @NFP_NET_CFG_VERSION:     Firmware version number
@@ -218,7 +221,7 @@
 
 /*
  * RSS configuration (0x0100 - 0x01ac):
- * Used only when NFP_NET_CFG_CTRL_RSS is enabled
+ * Used only when NFP_NET_CFG_CTRL_RSS_ANY is enabled
  * @NFP_NET_CFG_RSS_CFG:     RSS configuration word
  * @NFP_NET_CFG_RSS_KEY:     RSS "secret" key
  * @NFP_NET_CFG_RSS_ITBL:    RSS indirection table
@@ -334,6 +337,19 @@
 /* PF multiport offset */
 #define NFP_PF_CSR_SLICE_SIZE	(32 * 1024)
 
+/*
+ * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability
+ * @hw_cap: The firmware's capabilities
+ */
+static inline uint32_t
+nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
+{
+	if ((hw_cap & NFP_NET_CFG_CTRL_RSS2) != 0)
+		return NFP_NET_CFG_CTRL_RSS2;
+
+	return NFP_NET_CFG_CTRL_RSS;
+}
+
 #endif /* _NFP_CTRL_H_ */
 /*
  * Local variables:
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index fed7b1ab13..47d5dff16c 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -134,10 +134,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
-		if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
-		else
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS;
+		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
 	}
 
 	/* Enable device */
@@ -611,6 +608,8 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
+	nfp_net_init_metadata_format(hw);
+
 	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c1f8a0fa0f..7834b2ee0c 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -95,10 +95,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
-		if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
-		else
-			new_ctrl |= NFP_NET_CFG_CTRL_RSS;
+		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
 	}
 
 	/* Enable device */
@@ -373,6 +370,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
+	nfp_net_init_metadata_format(hw);
+
 	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 17a04cec5e..1c5a230145 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -116,26 +116,18 @@ nfp_net_rx_queue_count(void *rx_queue)
 	return count;
 }
 
-/* nfp_net_parse_meta() - Parse the metadata from packet */
-static void
-nfp_net_parse_meta(struct nfp_meta_parsed *meta,
-		struct nfp_net_rx_desc *rxd,
-		struct nfp_net_rxq *rxq,
-		struct rte_mbuf *mbuf)
+/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */
+static bool
+nfp_net_parse_chained_meta(uint8_t *meta_base,
+		rte_be32_t meta_header,
+		struct nfp_meta_parsed *meta)
 {
+	uint8_t *meta_offset;
 	uint32_t meta_info;
 	uint32_t vlan_info;
-	uint8_t *meta_offset;
-	struct nfp_net_hw *hw = rxq->hw;
 
-	if (unlikely((NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2) ||
-			NFP_DESC_META_LEN(rxd) == 0))
-		return;
-
-	meta_offset = rte_pktmbuf_mtod(mbuf, uint8_t *);
-	meta_offset -= NFP_DESC_META_LEN(rxd);
-	meta_info = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset);
-	meta_offset += 4;
+	meta_info = rte_be_to_cpu_32(meta_header);
+	meta_offset = meta_base + 4;
 
 	for (; meta_info != 0; meta_info >>= NFP_NET_META_FIELD_SIZE, meta_offset += 4) {
 		switch (meta_info & NFP_NET_META_FIELD_MASK) {
@@ -157,9 +149,11 @@ nfp_net_parse_meta(struct nfp_meta_parsed *meta,
 			break;
 		default:
 			/* Unsupported metadata can be a performance issue */
-			return;
+			return false;
 		}
 	}
+
+	return true;
 }
 
 /*
@@ -170,33 +164,18 @@ nfp_net_parse_meta(struct nfp_meta_parsed *meta,
  */
 static void
 nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
-		struct nfp_net_rx_desc *rxd,
 		struct nfp_net_rxq *rxq,
 		struct rte_mbuf *mbuf)
 {
-	uint32_t hash;
-	uint32_t hash_type;
 	struct nfp_net_hw *hw = rxq->hw;
 
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return;
 
-	if (likely((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0 &&
-			NFP_DESC_META_LEN(rxd) != 0)) {
-		hash = meta->hash;
-		hash_type = meta->hash_type;
-	} else {
-		if ((rxd->rxd.flags & PCIE_DESC_RX_RSS) == 0)
-			return;
-
-		hash = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_OFFSET);
-		hash_type = rte_be_to_cpu_32(*(uint32_t *)NFP_HASH_TYPE_OFFSET);
-	}
-
-	mbuf->hash.rss = hash;
+	mbuf->hash.rss = meta->hash;
 	mbuf->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 
-	switch (hash_type) {
+	switch (meta->hash_type) {
 	case NFP_NET_RSS_IPV4:
 		mbuf->packet_type |= RTE_PTYPE_INNER_L3_IPV4;
 		break;
@@ -223,6 +202,21 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 	}
 }
 
+/*
+ * nfp_net_parse_single_meta() - Parse the single metadata
+ *
+ * The RSS hash and hash-type are prepended to the packet data.
+ * Get it from metadata area.
+ */
+static inline void
+nfp_net_parse_single_meta(uint8_t *meta_base,
+		rte_be32_t meta_header,
+		struct nfp_meta_parsed *meta)
+{
+	meta->hash_type = rte_be_to_cpu_32(meta_header);
+	meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4));
+}
+
 /*
  * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info
  *
@@ -304,6 +298,45 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 	mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 }
 
+/* nfp_net_parse_meta() - Parse the metadata from packet */
+static void
+nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
+		struct nfp_net_rxq *rxq,
+		struct nfp_net_hw *hw,
+		struct rte_mbuf *mb)
+{
+	uint8_t *meta_base;
+	rte_be32_t meta_header;
+	struct nfp_meta_parsed meta = {};
+
+	if (unlikely(NFP_DESC_META_LEN(rxds) == 0))
+		return;
+
+	meta_base = rte_pktmbuf_mtod(mb, uint8_t *);
+	meta_base -= NFP_DESC_META_LEN(rxds);
+	meta_header = *(rte_be32_t *)meta_base;
+
+	switch (hw->meta_format) {
+	case NFP_NET_METAFORMAT_CHANINED:
+		if (nfp_net_parse_chained_meta(meta_base, meta_header, &meta)) {
+			nfp_net_parse_meta_hash(&meta, rxq, mb);
+			nfp_net_parse_meta_vlan(&meta, rxds, rxq, mb);
+			nfp_net_parse_meta_qinq(&meta, rxq, mb);
+		} else {
+			PMD_RX_LOG(DEBUG, "RX chained metadata format is wrong!");
+		}
+		break;
+	case NFP_NET_METAFORMAT_SINGLE:
+		if ((rxds->rxd.flags & PCIE_DESC_RX_RSS) != 0) {
+			nfp_net_parse_single_meta(meta_base, meta_header, &meta);
+			nfp_net_parse_meta_hash(&meta, rxq, mb);
+		}
+		break;
+	default:
+		PMD_RX_LOG(DEBUG, "RX metadata do not exist.");
+	}
+}
+
 /*
  * RX path design:
  *
@@ -341,7 +374,6 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	struct nfp_net_hw *hw;
 	struct rte_mbuf *mb;
 	struct rte_mbuf *new_mb;
-	struct nfp_meta_parsed meta;
 	uint16_t nb_hold;
 	uint64_t dma_addr;
 	uint16_t avail;
@@ -437,11 +469,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		mb->next = NULL;
 		mb->port = rxq->port_id;
 
-		memset(&meta, 0, sizeof(meta));
-		nfp_net_parse_meta(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_hash(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_vlan(&meta, rxds, rxq, mb);
-		nfp_net_parse_meta_qinq(&meta, rxq, mb);
+		nfp_net_parse_meta(rxds, rxq, hw, mb);
 
 		/* Checking the checksum flag */
 		nfp_net_rx_cksum(rxq, rxds, mb);
-- 
2.29.3


^ permalink raw reply	[relevance 3%]

* Re: [EXT] Re: [PATCH v11 1/4] lib: add generic support for reading PMU events
  @ 2023-02-21  0:48  3%                     ` Konstantin Ananyev
  2023-02-27  8:12  0%                       ` Tomasz Duszynski
  0 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2023-02-21  0:48 UTC (permalink / raw)
  To: Tomasz Duszynski, Konstantin Ananyev, dev


>>>>>>>>> diff --git a/lib/pmu/rte_pmu.h b/lib/pmu/rte_pmu.h new file
>>>>>>>>> mode
>>>>>>>>> 100644 index 0000000000..6b664c3336
>>>>>>>>> --- /dev/null
>>>>>>>>> +++ b/lib/pmu/rte_pmu.h
>>>>>>>>> @@ -0,0 +1,212 @@
>>>>>>>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>>>>>>>> + * Copyright(c) 2023 Marvell  */
>>>>>>>>> +
>>>>>>>>> +#ifndef _RTE_PMU_H_
>>>>>>>>> +#define _RTE_PMU_H_
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * @file
>>>>>>>>> + *
>>>>>>>>> + * PMU event tracing operations
>>>>>>>>> + *
>>>>>>>>> + * This file defines generic API and types necessary to
>>>>>>>>> +setup PMU and
>>>>>>>>> + * read selected counters in runtime.
>>>>>>>>> + */
>>>>>>>>> +
>>>>>>>>> +#ifdef __cplusplus
>>>>>>>>> +extern "C" {
>>>>>>>>> +#endif
>>>>>>>>> +
>>>>>>>>> +#include <linux/perf_event.h>
>>>>>>>>> +
>>>>>>>>> +#include <rte_atomic.h>
>>>>>>>>> +#include <rte_branch_prediction.h> #include <rte_common.h>
>>>>>>>>> +#include <rte_compat.h> #include <rte_spinlock.h>
>>>>>>>>> +
>>>>>>>>> +/** Maximum number of events in a group */ #define
>>>>>>>>> +MAX_NUM_GROUP_EVENTS 8
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * A structure describing a group of events.
>>>>>>>>> + */
>>>>>>>>> +struct rte_pmu_event_group {
>>>>>>>>> +	struct perf_event_mmap_page
>>>>>>>>> +*mmap_pages[MAX_NUM_GROUP_EVENTS];
>>>>>>>>> +/**< array of user pages
>>>>>> */
>>>>>>>>> +	int fds[MAX_NUM_GROUP_EVENTS]; /**< array of event descriptors */
>>>>>>>>> +	bool enabled; /**< true if group was enabled on particular lcore */
>>>>>>>>> +	TAILQ_ENTRY(rte_pmu_event_group) next; /**< list entry */ }
>>>>>>>>> +__rte_cache_aligned;
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * A structure describing an event.
>>>>>>>>> + */
>>>>>>>>> +struct rte_pmu_event {
>>>>>>>>> +	char *name; /**< name of an event */
>>>>>>>>> +	unsigned int index; /**< event index into fds/mmap_pages */
>>>>>>>>> +	TAILQ_ENTRY(rte_pmu_event) next; /**< list entry */ };
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * A PMU state container.
>>>>>>>>> + */
>>>>>>>>> +struct rte_pmu {
>>>>>>>>> +	char *name; /**< name of core PMU listed under /sys/bus/event_source/devices */
>>>>>>>>> +	rte_spinlock_t lock; /**< serialize access to event group list */
>>>>>>>>> +	TAILQ_HEAD(, rte_pmu_event_group) event_group_list; /**< list of event groups */
>>>>>>>>> +	unsigned int num_group_events; /**< number of events in a group */
>>>>>>>>> +	TAILQ_HEAD(, rte_pmu_event) event_list; /**< list of matching events */
>>>>>>>>> +	unsigned int initialized; /**< initialization counter */ };
>>>>>>>>> +
>>>>>>>>> +/** lcore event group */
>>>>>>>>> +RTE_DECLARE_PER_LCORE(struct rte_pmu_event_group,
>>>>>>>>> +_event_group);
>>>>>>>>> +
>>>>>>>>> +/** PMU state container */
>>>>>>>>> +extern struct rte_pmu rte_pmu;
>>>>>>>>> +
>>>>>>>>> +/** Each architecture supporting PMU needs to provide its
>>>>>>>>> +own version */ #ifndef rte_pmu_pmc_read #define
>>>>>>>>> +rte_pmu_pmc_read(index) ({ 0; }) #endif
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * @warning
>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>> + *
>>>>>>>>> + * Read PMU counter.
>>>>>>>>> + *
>>>>>>>>> + * @warning This should be not called directly.
>>>>>>>>> + *
>>>>>>>>> + * @param pc
>>>>>>>>> + *   Pointer to the mmapped user page.
>>>>>>>>> + * @return
>>>>>>>>> + *   Counter value read from hardware.
>>>>>>>>> + */
>>>>>>>>> +static __rte_always_inline uint64_t
>>>>>>>>> +__rte_pmu_read_userpage(struct perf_event_mmap_page *pc) {
>>>>>>>>> +	uint64_t width, offset;
>>>>>>>>> +	uint32_t seq, index;
>>>>>>>>> +	int64_t pmc;
>>>>>>>>> +
>>>>>>>>> +	for (;;) {
>>>>>>>>> +		seq = pc->lock;
>>>>>>>>> +		rte_compiler_barrier();
>>>>>>>>
>>>>>>>> Are you sure that compiler_barrier() is enough here?
>>>>>>>> On some archs CPU itself has freedom to re-order reads.
>>>>>>>> Or I am missing something obvious here?
>>>>>>>>
>>>>>>>
>>>>>>> It's a matter of not keeping old stuff cached in registers and
>>>>>>> making sure that we have two reads of lock. CPU reordering won't
>>>>>>> do any harm here.
>>>>>>
>>>>>> Sorry, I didn't get you here:
>>>>>> Suppose CPU will re-order reads and will read lock *after* index or offset value.
>>>>>> Wouldn't it mean that in that case index and/or offset can contain old/invalid values?
>>>>>>
>>>>>
>>>>> This number is just an indicator whether kernel did change something or not.
>>>>
>>>> You are talking about pc->lock, right?
>>>> Yes, I do understand that it is sort of seqlock.
>>>> That's why I am puzzled why we do not care about possible cpu read-reordering.
>>>> Manual for perf_event_open() also has a code snippet with compiler barrier only...
>>>>
>>>>> If cpu reordering will come into play then this will not change anything from pov of this
>> loop.
>>>>> All we want is fresh data when needed and no involvement of
>>>>> compiler when it comes to reordering code.
>>>>
>>>> Ok, can you probably explain to me why the following could not happen:
>>>> T0:
>>>> pc->seqlock==0; pc->index==I1; pc->offset==O1;
>>>> T1:
>>>>       cpu #0 read pmu (due to cpu read reorder, we get index value before seqlock):
>>>>        index=pc->index;  //index==I1;
>>>> T2:
>>>>       cpu #1 kernel vent_update_userpage:
>>>>       pc->lock++; // pc->lock==1
>>>>       pc->index=I2;
>>>>       pc->offset=O2;
>>>>       ...
>>>>       pc->lock++; //pc->lock==2
>>>> T3:
>>>>       cpu #0 continue with read pmu:
>>>>       seq=pc->lock; //seq == 2
>>>>        offset=pc->offset; // offset == O2
>>>>        ....
>>>>        pmc = rte_pmu_pmc_read(index - 1);  // Note that we read at I1, not I2
>>>>        offset += pmc; //offset == O2 + pmcread(I1-1);
>>>>        if (pc->lock == seq) // they are equal, return
>>>>              return offset;
>>>>
>>>> Or, it can happen, but by some reason we don't care much?
>>>>
>>>
>>> This code does self-monitoring and user page (whole group actually) is
>>> per thread running on current cpu. Hence I am not sure what are you trying to prove with that
>> example.
>>
>> I am not trying to prove anything so far.
>> I am asking is such situation possible or not, and if not, why?
>> My current understanding (possibly wrong) is that after you mmaped these pages, kernel still can
>> asynchronously update them.
>> So, when reading the data from these pages you have to check 'lock' value before and after
>> accessing other data.
>> If so, why possible cpu read-reordering doesn't matter?
>>
> 
> Look. I'll reiterate that.
> 
> 1. That user page/group/PMU config is per process. Other processes do not access that.

Ok, that's clear.


>     All this happens on the very same CPU where current thread is running.

Ok... but can't this page be updated by a kernel thread running
simultaneously on a different CPU?


> 2. Suppose you've already read seq. Now for some reason kernel updates data in page seq was read from.
> 3. Kernel will enter critical section during update. seq changes along with other data without app knowing about it.
>     If you want nitty gritty details consult kernel sources.

Look, I don't have to beg you to answer these questions.
In fact, I expect the library author to document all such subtle things
clearly, either in the programmer's guide or in source code comments
(ideally in both). If not, then from my perspective the patch is not
ready and shouldn't be accepted.
I don't know whether a compiler barrier is enough here or not, but I
think it definitely deserves a clear explanation in the docs.
I suspect I wouldn't be the only one to get confused here.
So please take the effort to document clearly why you believe there
is no race condition.

> 4. app resumes and has some stale data but *WILL* read new seq. Code loops again because values do not match.

If the kernel always executes the update for this page in the same
thread context, then yes: user code will always notice the difference
after resuming.
But why can't it happen that your user thread reads this page on one
CPU, while some kernel code on another CPU updates it simultaneously?
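
(To make the question concrete, a sketch of the read side with CPU read
barriers instead of only compiler barriers would look something like the
snippet below. This is only a sketch, assuming the same includes as
rte_pmu.h, with the index/width handling elided.)

static inline uint64_t
pmu_read_userpage_sketch(struct perf_event_mmap_page *pc)
{
	uint64_t offset;
	uint32_t seq;

	for (;;) {
		seq = pc->lock;
		rte_smp_rmb();	/* lock must be read before the data */
		offset = pc->offset;
		/* ... pc->index/pc->pmc_width handling and the
		 * rte_pmu_pmc_read() call would go here ... */
		rte_smp_rmb();	/* data must be read before re-checking lock */
		if (likely(pc->lock == seq))
			return offset;
	}
}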


> 5. Otherwise seq values match and data is valid.
> 
>> Also there was another question below, which you probably  missed, so I copied it here:
>> Another question - do we really need  to have __rte_pmu_read_userpage() and rte_pmu_read() as
>> static inline functions in public header?
>> As I understand, because of that we also have to make 'struct rte_pmu_*'
>> definitions also public.
>>
> 
> These functions need to be inlined otherwise performance takes a hit.

I understand that performance might be affected, but how big is the hit?
I expect the actual PMU read will not be free anyway, right?
If the difference is small, it might be worth going for such a change;
removing unneeded structures from public headers would help a lot in
the future in terms of ABI/API stability.



>>>
>>>>>>>
>>>>>>>>> +		index = pc->index;
>>>>>>>>> +		offset = pc->offset;
>>>>>>>>> +		width = pc->pmc_width;
>>>>>>>>> +
>>>>>>>>> +		/* index set to 0 means that particular counter cannot be used */
>>>>>>>>> +		if (likely(pc->cap_user_rdpmc && index)) {
>>>>>>>>> +			pmc = rte_pmu_pmc_read(index - 1);
>>>>>>>>> +			pmc <<= 64 - width;
>>>>>>>>> +			pmc >>= 64 - width;
>>>>>>>>> +			offset += pmc;
>>>>>>>>> +		}
>>>>>>>>> +
>>>>>>>>> +		rte_compiler_barrier();
>>>>>>>>> +
>>>>>>>>> +		if (likely(pc->lock == seq))
>>>>>>>>> +			return offset;
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>> +	return 0;
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * @warning
>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>> + *
>>>>>>>>> + * Enable group of events on the calling lcore.
>>>>>>>>> + *
>>>>>>>>> + * @warning This should be not called directly.
>>>>>>>>> + *
>>>>>>>>> + * @return
>>>>>>>>> + *   0 in case of success, negative value otherwise.
>>>>>>>>> + */
>>>>>>>>> +__rte_experimental
>>>>>>>>> +int
>>>>>>>>> +__rte_pmu_enable_group(void);
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * @warning
>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>> + *
>>>>>>>>> + * Initialize PMU library.
>>>>>>>>> + *
>>>>>>>>> + * @warning This should be not called directly.
>>>>>>>>> + *
>>>>>>>>> + * @return
>>>>>>>>> + *   0 in case of success, negative value otherwise.
>>>>>>>>> + */
>>>>>>>>> +__rte_experimental
>>>>>>>>> +int
>>>>>>>>> +rte_pmu_init(void);
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * @warning
>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>> + *
>>>>>>>>> + * Finalize PMU library. This should be called after PMU
>>>>>>>>> +counters are no longer being
>>>> read.
>>>>>>>>> + */
>>>>>>>>> +__rte_experimental
>>>>>>>>> +void
>>>>>>>>> +rte_pmu_fini(void);
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * @warning
>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>> + *
>>>>>>>>> + * Add event to the group of enabled events.
>>>>>>>>> + *
>>>>>>>>> + * @param name
>>>>>>>>> + *   Name of an event listed under /sys/bus/event_source/devices/pmu/events.
>>>>>>>>> + * @return
>>>>>>>>> + *   Event index in case of success, negative value otherwise.
>>>>>>>>> + */
>>>>>>>>> +__rte_experimental
>>>>>>>>> +int
>>>>>>>>> +rte_pmu_add_event(const char *name);
>>>>>>>>> +
>>>>>>>>> +/**
>>>>>>>>> + * @warning
>>>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice
>>>>>>>>> + *
>>>>>>>>> + * Read hardware counter configured to count occurrences of an event.
>>>>>>>>> + *
>>>>>>>>> + * @param index
>>>>>>>>> + *   Index of an event to be read.
>>>>>>>>> + * @return
>>>>>>>>> + *   Event value read from register. In case of errors or lack of support
>>>>>>>>> + *   0 is returned. In other words, stream of zeros in a trace file
>>>>>>>>> + *   indicates problem with reading particular PMU event register.
>>>>>>>>> + */
>>>>
>>>> Another question - do we really need  to have
>>>> __rte_pmu_read_userpage() and rte_pmu_read() as static inline functions in public header?
>>>> As I understand, because of that we also have to make 'struct rte_pmu_*'
>>>> definitions also public.
>>>>
>>>>>>>>> +__rte_experimental
>>>>>>>>> +static __rte_always_inline uint64_t rte_pmu_read(unsigned
>>>>>>>>> +int
>>>>>>>>> +index) {
>>>>>>>>> +	struct rte_pmu_event_group *group = &RTE_PER_LCORE(_event_group);
>>>>>>>>> +	int ret;
>>>>>>>>> +
>>>>>>>>> +	if (unlikely(!rte_pmu.initialized))
>>>>>>>>> +		return 0;
>>>>>>>>> +
>>>>>>>>> +	if (unlikely(!group->enabled)) {
>>>>>>>>> +		ret = __rte_pmu_enable_group();
>>>>>>>>> +		if (ret)
>>>>>>>>> +			return 0;
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>> +	if (unlikely(index >= rte_pmu.num_group_events))
>>>>>>>>> +		return 0;
>>>>>>>>> +
>>>>>>>>> +	return __rte_pmu_read_userpage(group->mmap_pages[index]);
>>>>>>>>> +}
>>>>>>>>> +
>>>>>>>>> +#ifdef __cplusplus
>>>>>>>>> +}
>>>>>>>>> +#endif
>>>>>>>>> +
> 
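
BTW, just to check my understanding of the intended usage, based on the
prototypes quoted above (rough sketch; the event name is only an example
and depends on what the kernel exposes under sysfs):

	/* during application init, before launching lcores */
	rte_pmu_init();
	int idx = rte_pmu_add_event("cpu-cycles");	/* example event name */

	/* on each lcore doing measurements */
	uint64_t cycles = rte_pmu_read(idx);

	/* at shutdown, once counters are no longer read */
	rte_pmu_fini();
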


^ permalink raw reply	[relevance 3%]

* [PATCH v8 21/22] hash: move rte_hash_set_alg out header
  2023-02-20 23:35  3% ` [PATCH v8 00/22] Convert static logtypes in libraries Stephen Hemminger
@ 2023-02-20 23:35  3%   ` Stephen Hemminger
  2023-02-21 15:02  0%     ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-20 23:35 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

The code for setting the hash algorithm is not at all perf sensitive,
and doing it inline has a couple of problems. First, it means that if
multiple files include the header, then the initialization gets done
multiple times. It also makes it harder to fix usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code, therefore both old and new code
will work the same.
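
For reference, existing call sites keep working unchanged, e.g. (sketch,
buffer variables are placeholders):

	#include <rte_hash_crc.h>

	rte_hash_crc_set_alg(CRC32_SW);	/* optionally force the SW implementation */
	uint32_t sig = rte_hash_crc(buf, buf_len, 0xffffffff);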

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build    |  1 +
 lib/hash/rte_hash_crc.c | 63 +++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h | 46 ++----------------------------
 lib/hash/version.map    |  1 +
 4 files changed, 67 insertions(+), 44 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..c59eebccb1eb
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		crc32_alg = CRC32_SSE42;
+	else
+		crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		crc32_alg = CRC32_ARM64;
+#endif
+
+	if (crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e4acd99a0c81 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..a1d81835399c 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* [PATCH v8 00/22] Convert static logtypes in libraries
                     ` (3 preceding siblings ...)
  2023-02-15 17:23  3% ` [PATCH v7 00/22] Replace use of static logtypes in libraries Stephen Hemminger
@ 2023-02-20 23:35  3% ` Stephen Hemminger
  2023-02-20 23:35  3%   ` [PATCH v8 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-21 19:01  2% ` [PATCH v9 00/22] Convert static logtypes in libraries Stephen Hemminger
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-20 23:35 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPEs in DPDK
libraries. It starts with the easy ones and goes on to the more complex ones.

There are several options on how to treat the old static types:
	- leave them there
	- mark the definitions as deprecated
	- remove them
This version removes them, since nothing in current DPDK policy
says they cannot be removed.

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.

v8 - rebase and fix CI issues on Arm
     simplify the mempool logtype patch

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  examples/ipsecgw: replace RTE_LOGTYPE_PORT
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++----------
 examples/distributor/main.c       |  2 +-
 examples/ipsec-secgw/sa.c         |  6 +--
 examples/l3fwd-power/main.c       | 15 +++----
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  4 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/gso/rte_gso.h                 |  1 +
 lib/hash/meson.build              |  9 ++++-
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  5 +++
 lib/hash/rte_hash_crc.c           | 66 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 46 +--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 46 +++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 28 +++----------
 lib/hash/version.map              |  5 +++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  2 +
 lib/mempool/rte_mempool.h         |  8 ++++
 lib/mempool/version.map           |  3 ++
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 73 files changed, 383 insertions(+), 169 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v1 04/13] graph: add get/set graph worker model APIs
  @ 2023-02-20 13:50  3%   ` Jerin Jacob
  2023-02-24  6:31  0%     ` Yan, Zhirun
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-02-20 13:50 UTC (permalink / raw)
  To: Zhirun Yan
  Cc: dev, jerinj, kirankumark, ndabilpuram, cunming.liang, haiyue.wang

On Thu, Nov 17, 2022 at 10:40 AM Zhirun Yan <zhirun.yan@intel.com> wrote:
>
> Add new get/set APIs to configure graph worker model which is used to
> determine which model will be chosen.
>
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> Signed-off-by: Cunming Liang <cunming.liang@intel.com>
> Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
> ---
>  lib/graph/rte_graph_worker.h        | 51 +++++++++++++++++++++++++++++
>  lib/graph/rte_graph_worker_common.h | 13 ++++++++
>  lib/graph/version.map               |  3 ++
>  3 files changed, 67 insertions(+)
>
> diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h
> index 54d1390786..a0ea0df153 100644
> --- a/lib/graph/rte_graph_worker.h
> +++ b/lib/graph/rte_graph_worker.h
> @@ -1,5 +1,56 @@
>  #include "rte_graph_model_rtc.h"
>
> +static enum rte_graph_worker_model worker_model = RTE_GRAPH_MODEL_DEFAULT;

This will break multi-process support: a static variable in a public header
gives every process (and every file that includes the header) its own copy
of worker_model, so a secondary process will not see the model set by the
primary.

> +
> +/** Graph worker models */
> +enum rte_graph_worker_model {
> +#define WORKER_MODEL_DEFAULT "default"

Why are strings needed?
Also, every symbol in a public header file should start with RTE_ to
avoid namespace conflicts.

> +       RTE_GRAPH_MODEL_DEFAULT = 0,
> +#define WORKER_MODEL_RTC "rtc"
> +       RTE_GRAPH_MODEL_RTC,

Why not RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT in the enum itself?

> +#define WORKER_MODEL_GENERIC "generic"

Generic is a very overloaded term. Use pipeline here, i.e.
RTE_GRAPH_MODEL_PIPELINE.


> +       RTE_GRAPH_MODEL_GENERIC,
> +       RTE_GRAPH_MODEL_MAX,

No need for MAX, it will break the ABI in the future. See other subsystems
such as cryptodev, and the sketch below.

> +};
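
Putting the above comments together, something along these lines (just a
sketch, final names are up to you):

	enum rte_graph_worker_model {
		RTE_GRAPH_MODEL_RTC = 0,
		RTE_GRAPH_MODEL_DEFAULT = RTE_GRAPH_MODEL_RTC,
		RTE_GRAPH_MODEL_PIPELINE,
		/* no _MAX entry, so new models can be added without ABI breakage */
	};
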

>

^ permalink raw reply	[relevance 3%]

* [PATCH v2 3/3] doc: add Corigine information to nfp documentation
  @ 2023-02-20  8:41  8%   ` Chaoyong He
  0 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-02-20  8:41 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, niklas.soderlund, Walter Heymans, Chaoyong He

From: Walter Heymans <walter.heymans@corigine.com>

Add Corigine information to the nfp documentation. The Network Flow
Processor (NFP) PMD is used by products from both Netronome and
Corigine.

Signed-off-by: Walter Heymans <walter.heymans@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
 doc/guides/nics/nfp.rst | 78 +++++++++++++++++++++++++----------------
 1 file changed, 47 insertions(+), 31 deletions(-)

diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
index d133b6385c..f102238a28 100644
--- a/doc/guides/nics/nfp.rst
+++ b/doc/guides/nics/nfp.rst
@@ -1,19 +1,18 @@
 ..  SPDX-License-Identifier: BSD-3-Clause
     Copyright(c) 2015-2017 Netronome Systems, Inc. All rights reserved.
-    All rights reserved.
+    Copyright(c) 2021 Corigine, Inc. All rights reserved.
 
 NFP poll mode driver library
 ============================
 
-Netronome's sixth generation of flow processors pack 216 programmable
-cores and over 100 hardware accelerators that uniquely combine packet,
-flow, security and content processing in a single device that scales
+Netronome and Corigine's sixth generation of flow processors pack 216
+programmable cores and over 100 hardware accelerators that uniquely combine
+packet, flow, security and content processing in a single device that scales
 up to 400-Gb/s.
 
-This document explains how to use DPDK with the Netronome Poll Mode
-Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
-(NFP-6xxx), Netronome's Network Flow Processor 4xxx (NFP-4xxx) and
-Netronome's Network Flow Processor 38xx (NFP-38xx).
+This document explains how to use DPDK with the Network Flow Processor (NFP)
+Poll Mode Driver (PMD) supporting Netronome and Corigine's NFP-6xxx, NFP-4xxx
+and NFP-38xx product lines.
 
 NFP is a SR-IOV capable device and the PMD supports the physical
 function (PF) and the virtual functions (VFs).
@@ -21,15 +20,16 @@ function (PF) and the virtual functions (VFs).
 Dependencies
 ------------
 
-Before using the Netronome's DPDK PMD some NFP configuration,
+Before using the NFP DPDK PMD some NFP configuration,
 which is not related to DPDK, is required. The system requires
-installation of **Netronome's BSP (Board Support Package)** along
-with a specific NFP firmware application. Netronome's NSP ABI
+installation of the **nfp-bsp (Board Support Package)** along
+with a specific NFP firmware application. The NSP ABI
 version should be 0.20 or higher.
 
-If you have a NFP device you should already have the code and
-documentation for this configuration. Contact
-**support@netronome.com** to obtain the latest available firmware.
+If you have a NFP device you should already have the documentation to perform
+this configuration. Contact **support@netronome.com** (for Netronome products)
+or **smartnic-support@corigine.com** (for Corigine products) to obtain the
+latest available firmware.
 
 The NFP Linux netdev kernel driver for VFs has been a part of the
 vanilla kernel since kernel version 4.5, and support for the PF
@@ -44,9 +44,9 @@ Linux kernel driver.
 Building the software
 ---------------------
 
-Netronome's PMD code is provided in the **drivers/net/nfp** directory.
-Although NFP PMD has Netronome´s BSP dependencies, it is possible to
-compile it along with other DPDK PMDs even if no BSP was installed previously.
+The NFP PMD code is provided in the **drivers/net/nfp** directory. Although
+NFP PMD has BSP dependencies, it is possible to compile it along with other
+DPDK PMDs even if no BSP was installed previously.
 Of course, a DPDK app will require such a BSP installed for using the
 NFP PMD, along with a specific NFP firmware application.
 
@@ -68,9 +68,9 @@ like uploading the firmware and configure the Link state properly when starting
 or stopping a PF port. Since DPDK 18.05 the firmware upload happens when
 a PF is initialized, which was not always true with older DPDK versions.
 
-Depending on the Netronome product installed in the system, firmware files
-should be available under ``/lib/firmware/netronome``. DPDK PMD supporting the
-PF looks for a firmware file in this order:
+Depending on the product installed in the system, firmware files should be
+available under ``/lib/firmware/netronome``. DPDK PMD supporting the PF looks
+for a firmware file in this order:
 
 	1) First try to find a firmware image specific for this device using the
 	   NFP serial number:
@@ -85,19 +85,22 @@ PF looks for a firmware file in this order:
 
 		nic_AMDA0099-0001_2x25.nffw
 
-Netronome's software packages install firmware files under
-``/lib/firmware/netronome`` to support all the Netronome's SmartNICs and
-different firmware applications. This is usually done using file names based on
-SmartNIC type and media and with a directory per firmware application. Options
-1 and 2 for firmware filenames allow more than one SmartNIC, same type of
-SmartNIC or different ones, and to upload a different firmware to each
+Netronome and Corigine's software packages install firmware files under
+``/lib/firmware/netronome`` to support all the Netronome and Corigine SmartNICs
+and different firmware applications. This is usually done using file names
+based on SmartNIC type and media and with a directory per firmware application.
+Options 1 and 2 for firmware filenames allow more than one SmartNIC, same type
+of SmartNIC or different ones, and to upload a different firmware to each
 SmartNIC.
 
    .. Note::
       Currently the NFP PMD supports using the PF with Agilio Firmware with
       NFD3 and Agilio Firmware with NFDk. See
-      https://help.netronome.com/support/solutions for more information on the
-      various firmwares supported by the Netronome Agilio CX smartNIC.
+      `Netronome Support <https://help.netronome.com/support/solutions>`_.
+      for more information on the various firmwares supported by the Netronome
+      Agilio SmartNIC range, or
+      `Corigine Support <https://www.corigine.com/productsOverviewList-30.html>`_.
+      for more information about Corigine's range.
 
 PF multiport support
 --------------------
@@ -164,6 +167,12 @@ System configuration
 
       lspci -d 19ee:
 
+   and on Corigine SmartNICs using:
+
+   .. code-block:: console
+
+      lspci -d 1da8:
+
    Now, for example, to configure two virtual functions on a NFP device
    whose PCI system identity is "0000:03:00.0":
 
@@ -171,12 +180,19 @@ System configuration
 
       echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
 
-   The result of this command may be shown using lspci again:
+   The result of this command may be shown using lspci again on Netronome
+   SmartNICs:
 
    .. code-block:: console
 
       lspci -kd 19ee:
 
+   and on Corigine SmartNICs:
+
+   .. code-block:: console
+
+      lspci -kd 1da8:
+
    Two new PCI devices should appear in the output of the above command. The
    -k option shows the device driver, if any, that the devices are bound to.
    Depending on the modules loaded, at this point the new PCI devices may be
@@ -186,8 +202,8 @@ System configuration
 Flow offload
 ------------
 
-Use the flower firmware application, some type of Netronome's SmartNICs can
-offload the flow into cards.
+Using the flower firmware application, some types of Netronome or Corigine
+SmartNICs can offload the flows onto the cards.
 
 The flower firmware application requires the PMD running two services:
 
-- 
2.29.3


^ permalink raw reply	[relevance 8%]

* Re: [PATCH v3 6/6] test/dmadev: add tests for stopping and restarting dev
  2023-02-16 11:09  3%   ` [PATCH v3 6/6] test/dmadev: add tests for stopping and restarting dev Bruce Richardson
@ 2023-02-16 11:42  0%     ` fengchengwen
  0 siblings, 0 replies; 200+ results
From: fengchengwen @ 2023-02-16 11:42 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: Kevin Laatz

Acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 2023/2/16 19:09, Bruce Richardson wrote:
> Validate device operation when a device is stopped or restarted.
> 
> The only complication - and gap in the dmadev ABI specification - is
> what happens to the job ids on restart. Some drivers reset them to 0,
> while others continue where things left off. Take account of both
> possibilities in the test case.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Acked-by: Kevin Laatz <kevin.laatz@intel.com>

...

^ permalink raw reply	[relevance 0%]

* [PATCH v3 6/6] test/dmadev: add tests for stopping and restarting dev
  @ 2023-02-16 11:09  3%   ` Bruce Richardson
  2023-02-16 11:42  0%     ` fengchengwen
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-02-16 11:09 UTC (permalink / raw)
  To: dev; +Cc: fengchengwen, Bruce Richardson, Kevin Laatz

Validate device operation when a device is stopped or restarted.

The only complication - and gap in the dmadev ABI specification - is
what happens to the job ids on restart. Some drivers reset them to 0,
while others continue where things left off. Take account of both
possibilities in the test case.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>
---
 app/test/test_dmadev.c | 46 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 0296c52d2a..0736ff2a18 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -304,6 +304,48 @@ test_enqueue_copies(int16_t dev_id, uint16_t vchan)
 			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
 }
 
+static int
+test_stop_start(int16_t dev_id, uint16_t vchan)
+{
+	/* device is already started on input, should be (re)started on output */
+
+	uint16_t id = 0;
+	enum rte_dma_status_code status = RTE_DMA_STATUS_SUCCESSFUL;
+
+	/* - test stopping a device works ok,
+	 * - then do a start-stop without doing a copy
+	 * - finally restart the device
+	 * checking for errors at each stage, and validating we can still copy at the end.
+	 */
+	if (rte_dma_stop(dev_id) < 0)
+		ERR_RETURN("Error stopping device\n");
+
+	if (rte_dma_start(dev_id) < 0)
+		ERR_RETURN("Error restarting device\n");
+	if (rte_dma_stop(dev_id) < 0)
+		ERR_RETURN("Error stopping device after restart (no jobs executed)\n");
+
+	if (rte_dma_start(dev_id) < 0)
+		ERR_RETURN("Error restarting device after multiple stop-starts\n");
+
+	/* before doing a copy, we need to know what the next id will be it should
+	 * either be:
+	 * - the last completed job before start if driver does not reset id on stop
+	 * - or -1 i.e. next job is 0, if driver does reset the job ids on stop
+	 */
+	if (rte_dma_completed_status(dev_id, vchan, 1, &id, &status) != 0)
+		ERR_RETURN("Error with rte_dma_completed_status when no job done\n");
+	id += 1; /* id_count is next job id */
+	if (id != id_count && id != 0)
+		ERR_RETURN("Unexpected next id from device after stop-start. Got %u, expected %u or 0\n",
+				id, id_count);
+
+	id_count = id;
+	if (test_single_copy(dev_id, vchan) < 0)
+		ERR_RETURN("Error performing copy after device restart\n");
+	return 0;
+}
+
 /* Failure handling test cases - global macros and variables for those tests*/
 #define COMP_BURST_SZ	16
 #define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
@@ -819,6 +861,10 @@ test_dmadev_instance(int16_t dev_id)
 	if (runtest("copy", test_enqueue_copies, 640, dev_id, vchan, CHECK_ERRS) < 0)
 		goto err;
 
+	/* run tests stopping/starting devices and check jobs still work after restart */
+	if (runtest("stop-start", test_stop_start, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	/* run some burst capacity tests */
 	if (rte_dma_burst_capacity(dev_id, vchan) < 64)
 		printf("DMA Dev %u: insufficient burst capacity (64 required), skipping tests\n",
-- 
2.37.2


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v2 6/6] test/dmadev: add tests for stopping and restarting dev
  2023-02-16  1:24  0%         ` fengchengwen
@ 2023-02-16  9:24  0%           ` Bruce Richardson
  0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-02-16  9:24 UTC (permalink / raw)
  To: fengchengwen; +Cc: dev, Kevin Laatz

On Thu, Feb 16, 2023 at 09:24:38AM +0800, fengchengwen wrote:
> On 2023/2/15 19:57, Bruce Richardson wrote:
> > On Wed, Feb 15, 2023 at 09:59:06AM +0800, fengchengwen wrote:
> >> On 2023/1/17 1:37, Bruce Richardson wrote:
> >>> Validate device operation when a device is stopped or restarted.
> >>>
> >>> The only complication - and gap in the dmadev ABI specification - is
> >>> what happens to the job ids on restart. Some drivers reset them to 0,
> >>> while others continue where things left off. Take account of both
> >>> possibilities in the test case.
> >>>
> >>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> >>> ---
> >>>  app/test/test_dmadev.c | 46 ++++++++++++++++++++++++++++++++++++++++++
> >>>  1 file changed, 46 insertions(+)
> >>>
> >>> diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> >>> index de787c14e2..8fb73a41e2 100644
> >>> --- a/app/test/test_dmadev.c
> >>> +++ b/app/test/test_dmadev.c
> >>> @@ -304,6 +304,48 @@ test_enqueue_copies(int16_t dev_id, uint16_t vchan)
> >>>  			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
> >>>  }
> >>>
> >>> +static int
> >>> +test_stop_start(int16_t dev_id, uint16_t vchan)
> >>> +{
> >>> +	/* device is already started on input, should be (re)started on output */
> >>> +
> >>> +	uint16_t id = 0;
> >>> +	enum rte_dma_status_code status = RTE_DMA_STATUS_SUCCESSFUL;
> >>> +
> >>> +	/* - test stopping a device works ok,
> >>> +	 * - then do a start-stop without doing a copy
> >>> +	 * - finally restart the device
> >>> +	 * checking for errors at each stage, and validating we can still copy at the end.
> >>> +	 */
> >>> +	if (rte_dma_stop(dev_id) < 0)
> >>> +		ERR_RETURN("Error stopping device\n");
> >>> +
> >>> +	if (rte_dma_start(dev_id) < 0)
> >>> +		ERR_RETURN("Error restarting device\n");
> >>> +	if (rte_dma_stop(dev_id) < 0)
> >>> +		ERR_RETURN("Error stopping device after restart (no jobs executed)\n");
> >>> +
> >>> +	if (rte_dma_start(dev_id) < 0)
> >>> +		ERR_RETURN("Error restarting device after multiple stop-starts\n");
> >>> +
> >>> +	/* before doing a copy, we need to know what the next id will be it should
> >>> +	 * either be:
> >>> +	 * - the last completed job before start if driver does not reset id on stop
> >>> +	 * - or -1 i.e. next job is 0, if driver does reset the job ids on stop
> >>> +	 */
> >>> +	if (rte_dma_completed_status(dev_id, vchan, 1, &id, &status) != 0)
> >>> +		ERR_RETURN("Error with rte_dma_completed_status when no job done\n");
> >>> +	id += 1; /* id_count is next job id */
> >>> +	if (id != id_count && id != 0)
> >>> +		ERR_RETURN("Unexpected next id from device after stop-start. Got %u, expected %u or 0\n",
> >>> +				id, id_count);
> >>
> >> Hi Bruce,
> >>
> >> Suggest add a warn LOG to identify the id was not reset zero.  So that
> >> new driver could detect break ABI specification.
> >>
> > Not sure that that is necessary. The actual ABI, nor the doxygen docs,
> > doesn't specify what happens to the values on doing stop and then start. My
> > thinking was that it should continue numbering as it would be equivalent to
> > suspend and resume, but other drivers appear to treat it as a "reset". I
> > suspect there are advantages and disadvantages to both schemes. Until we
> > decide on what the correct behaviour should be - or decide to allow both -
> > I don't think warning is the right thing to do here.
> 
> In this point, agree to upstream this patch first, and then discuss the correct
> behavior should be for restart scenario.
> 
+1. Thanks.

With this patch in place we will also be better able to help drivers
enforce the correct behaviour once we define it.

I'll do v3 keeping this as-is for now.

/Bruce

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 6/6] test/dmadev: add tests for stopping and restarting dev
  2023-02-15 11:57  3%       ` Bruce Richardson
@ 2023-02-16  1:24  0%         ` fengchengwen
  2023-02-16  9:24  0%           ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-02-16  1:24 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, Kevin Laatz

On 2023/2/15 19:57, Bruce Richardson wrote:
> On Wed, Feb 15, 2023 at 09:59:06AM +0800, fengchengwen wrote:
>> On 2023/1/17 1:37, Bruce Richardson wrote:
>>> Validate device operation when a device is stopped or restarted.
>>>
>>> The only complication - and gap in the dmadev ABI specification - is
>>> what happens to the job ids on restart. Some drivers reset them to 0,
>>> while others continue where things left off. Take account of both
>>> possibilities in the test case.
>>>
>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>> ---
>>>  app/test/test_dmadev.c | 46 ++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 46 insertions(+)
>>>
>>> diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
>>> index de787c14e2..8fb73a41e2 100644
>>> --- a/app/test/test_dmadev.c
>>> +++ b/app/test/test_dmadev.c
>>> @@ -304,6 +304,48 @@ test_enqueue_copies(int16_t dev_id, uint16_t vchan)
>>>  			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
>>>  }
>>>
>>> +static int
>>> +test_stop_start(int16_t dev_id, uint16_t vchan)
>>> +{
>>> +	/* device is already started on input, should be (re)started on output */
>>> +
>>> +	uint16_t id = 0;
>>> +	enum rte_dma_status_code status = RTE_DMA_STATUS_SUCCESSFUL;
>>> +
>>> +	/* - test stopping a device works ok,
>>> +	 * - then do a start-stop without doing a copy
>>> +	 * - finally restart the device
>>> +	 * checking for errors at each stage, and validating we can still copy at the end.
>>> +	 */
>>> +	if (rte_dma_stop(dev_id) < 0)
>>> +		ERR_RETURN("Error stopping device\n");
>>> +
>>> +	if (rte_dma_start(dev_id) < 0)
>>> +		ERR_RETURN("Error restarting device\n");
>>> +	if (rte_dma_stop(dev_id) < 0)
>>> +		ERR_RETURN("Error stopping device after restart (no jobs executed)\n");
>>> +
>>> +	if (rte_dma_start(dev_id) < 0)
>>> +		ERR_RETURN("Error restarting device after multiple stop-starts\n");
>>> +
>>> +	/* before doing a copy, we need to know what the next id will be it should
>>> +	 * either be:
>>> +	 * - the last completed job before start if driver does not reset id on stop
>>> +	 * - or -1 i.e. next job is 0, if driver does reset the job ids on stop
>>> +	 */
>>> +	if (rte_dma_completed_status(dev_id, vchan, 1, &id, &status) != 0)
>>> +		ERR_RETURN("Error with rte_dma_completed_status when no job done\n");
>>> +	id += 1; /* id_count is next job id */
>>> +	if (id != id_count && id != 0)
>>> +		ERR_RETURN("Unexpected next id from device after stop-start. Got %u, expected %u or 0\n",
>>> +				id, id_count);
>>
>> Hi Bruce,
>>
>> Suggest add a warn LOG to identify the id was not reset zero.  So that
>> new driver could detect break ABI specification.
>>
> Not sure that that is necessary. The actual ABI, nor the doxygen docs,
> doesn't specify what happens to the values on doing stop and then start. My
> thinking was that it should continue numbering as it would be equivalent to
> suspend and resume, but other drivers appear to treat it as a "reset". I
> suspect there are advantages and disadvantages to both schemes. Until we
> decide on what the correct behaviour should be - or decide to allow both -
> I don't think warning is the right thing to do here.

On this point, I agree to upstream this patch first, and then discuss what
the correct behavior should be for the restart scenario.

> 
> /Bruce
> 
> .
> 

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] doc: update NFP documentation with Corigine information
  2023-02-15 13:37  0% ` Ferruh Yigit
@ 2023-02-15 17:58  0%   ` Niklas Söderlund
  0 siblings, 0 replies; 200+ results
From: Niklas Söderlund @ 2023-02-15 17:58 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Chaoyong He, dev, oss-drivers, Walter Heymans

Hello Ferruh,

Thanks for your feedback.

On 2023-02-15 13:37:05 +0000, Ferruh Yigit wrote:
> On 2/3/2023 8:08 AM, Chaoyong He wrote:
> > From: Walter Heymans <walter.heymans@corigine.com>
> > 
> > The NFP PMD documentation is updated to include information about
> > Corigine and their new vendor device ID.
> > 
> > Outdated information regarding the use of the PMD is also updated.
> > 
> > While making major changes to the document, the maximum number of
> > characters per line is updated to 80 characters to improve the
> > readability in raw format.
> > 
> 
> There are three groups of changes done to documentation as explained in
> three paragraphs above.
> 
> To help review, is it possible to separate this patch into three
> patches? Later they can be squashed and merged as a single patch.
> But as it is, easy to miss content changes among formatting changes.
> 
> (You can include simple grammar updates (that doesn't change either
> content or Corigine related information) to formatting update patch)

We will break this patch into three as you suggest, address the
comments below and post a v2.

> 
> 
> > Signed-off-by: Walter Heymans <walter.heymans@corigine.com>
> > Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> > Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> > ---
> >  doc/guides/nics/nfp.rst | 168 +++++++++++++++++++++-------------------
> >  1 file changed, 90 insertions(+), 78 deletions(-)
> > 
> > diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
> > index a085d7d9ae..6fea280411 100644
> > --- a/doc/guides/nics/nfp.rst
> > +++ b/doc/guides/nics/nfp.rst
> > @@ -1,35 +1,34 @@
> >  ..  SPDX-License-Identifier: BSD-3-Clause
> >      Copyright(c) 2015-2017 Netronome Systems, Inc. All rights reserved.
> > -    All rights reserved.
> > +    Copyright(c) 2021 Corigine, Inc. All rights reserved.
> >  
> >  NFP poll mode driver library
> >  ============================
> >  
> > -Netronome's sixth generation of flow processors pack 216 programmable
> > -cores and over 100 hardware accelerators that uniquely combine packet,
> > -flow, security and content processing in a single device that scales
> > +Netronome and Corigine's sixth generation of flow processors pack 216
> > +programmable cores and over 100 hardware accelerators that uniquely combine
> > +packet, flow, security and content processing in a single device that scales
> >  up to 400-Gb/s.
> >  
> > -This document explains how to use DPDK with the Netronome Poll Mode
> > -Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
> > -(NFP-6xxx), Netronome's Network Flow Processor 4xxx (NFP-4xxx) and
> > -Netronome's Network Flow Processor 38xx (NFP-38xx).
> > +This document explains how to use DPDK with the Network Flow Processor (NFP)
> > +Poll Mode Driver (PMD) supporting Netronome and Corigine's NFP-6xxx, NFP-4xxx
> > +and NFP-38xx product lines.
> >  
> > -NFP is a SRIOV capable device and the PMD supports the physical
> > -function (PF) and the virtual functions (VFs).
> > +NFP is a SR-IOV capable device and the PMD supports the physical function (PF)
> > +and the virtual functions (VFs).
> >  
> >  Dependencies
> >  ------------
> >  
> > -Before using the Netronome's DPDK PMD some NFP configuration,
> > -which is not related to DPDK, is required. The system requires
> > -installation of **Netronome's BSP (Board Support Package)** along
> > -with a specific NFP firmware application. Netronome's NSP ABI
> > -version should be 0.20 or higher.
> > +Before using the NFP DPDK PMD some NFP configuration, which is not related to
> > +DPDK, is required. The system requires installation of
> > +**NFP-BSP (Board Support Package)** along with a specific NFP firmware
> > +application. The NSP ABI version should be 0.20 or higher.
> >  
> > -If you have a NFP device you should already have the code and
> > -documentation for this configuration. Contact
> > -**support@netronome.com** to obtain the latest available firmware.
> > +If you have a NFP device you should already have the documentation to perform
> > +this configuration. Contact **support@netronome.com** (for Netronome products)
> > +or **smartnic-support@corigine.com** (for Corigine products) to obtain the
> > +latest available firmware.
> >  
> >  The NFP Linux netdev kernel driver for VFs has been a part of the
> >  vanilla kernel since kernel version 4.5, and support for the PF
> > @@ -44,11 +43,11 @@ Linux kernel driver.
> >  Building the software
> >  ---------------------
> >  
> > -Netronome's PMD code is provided in the **drivers/net/nfp** directory.
> > -Although NFP PMD has Netronome´s BSP dependencies, it is possible to
> > -compile it along with other DPDK PMDs even if no BSP was installed previously.
> > -Of course, a DPDK app will require such a BSP installed for using the
> > -NFP PMD, along with a specific NFP firmware application.
> > +The NFP PMD code is provided in the **drivers/net/nfp** directory. Although
> > +NFP PMD has BSP dependencies, it is possible to compile it along with other
> > +DPDK PMDs even if no BSP was installed previously. Of course, a DPDK app will
> > +require such a BSP installed for using the NFP PMD, along with a specific NFP
> > +firmware application.
> >  
> >  Once the DPDK is built all the DPDK apps and examples include support for
> >  the NFP PMD.
> > @@ -57,27 +56,20 @@ the NFP PMD.
> >  Driver compilation and testing
> >  ------------------------------
> >  
> > -Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> > -for details.
> > +Refer to the document
> > +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` for details.
> >  
> >  Using the PF
> >  ------------
> >  
> > -NFP PMD supports using the NFP PF as another DPDK port, but it does not
> > -have any functionality for controlling VFs. In fact, it is not possible to use
> > -the PMD with the VFs if the PF is being used by DPDK, that is, with the NFP PF
> > -bound to ``igb_uio`` or ``vfio-pci`` kernel drivers. Future DPDK versions will
> > -have a PMD able to work with the PF and VFs at the same time and with the PF
> > -implementing VF management along with other PF-only functionalities/offloads.
> > -
> 
> Why this paragraph is removed? Is it because it is not correct anymore,
> or just because of document organization change.
> 
> >  The PMD PF has extra work to do which will delay the DPDK app initialization
> > -like uploading the firmware and configure the Link state properly when starting or
> > -stopping a PF port. Since DPDK 18.05 the firmware upload happens when
> > +like uploading the firmware and configure the Link state properly when starting
> > +or stopping a PF port. Since DPDK 18.05 the firmware upload happens when
> >  a PF is initialized, which was not always true with older DPDK versions.
> >  
> > -Depending on the Netronome product installed in the system, firmware files
> > -should be available under ``/lib/firmware/netronome``. DPDK PMD supporting the
> > -PF looks for a firmware file in this order:
> > +Depending on the product installed in the system, firmware files should be
> > +available under ``/lib/firmware/netronome``. DPDK PMD supporting the PF looks
> > +for a firmware file in this order:
> >  
> >  	1) First try to find a firmware image specific for this device using the
> >  	   NFP serial number:
> > @@ -92,18 +84,21 @@ PF looks for a firmware file in this order:
> >  
> >  		nic_AMDA0099-0001_2x25.nffw
> >  
> > -Netronome's software packages install firmware files under ``/lib/firmware/netronome``
> > -to support all the Netronome's SmartNICs and different firmware applications.
> > -This is usually done using file names based on SmartNIC type and media and with a
> > -directory per firmware application. Options 1 and 2 for firmware filenames allow
> > -more than one SmartNIC, same type of SmartNIC or different ones, and to upload a
> > -different firmware to each SmartNIC.
> > +Netronome and Corigine's software packages install firmware files under
> > +``/lib/firmware/netronome`` to support all the SmartNICs and different firmware
> > +applications. This is usually done using file names based on SmartNIC type and
> > +media and with a directory per firmware application. Options 1 and 2 for
> > +firmware filenames allow more than one SmartNIC, same type of SmartNIC or
> > +different ones, and to upload a different firmware to each SmartNIC.
> >  
> >     .. Note::
> > -      Currently the NFP PMD supports using the PF with Agilio Firmware with NFD3
> > -      and Agilio Firmware with NFDk. See https://help.netronome.com/support/solutions
> > +      Currently the NFP PMD supports using the PF with Agilio Firmware with
> > +      NFD3 and Agilio Firmware with NFDk. See
> > +      `Netronome Support <https://help.netronome.com/support/solutions>`_.
> >        for more information on the various firmwares supported by the Netronome
> > -      Agilio CX smartNIC.
> > +      Agilio SmartNICs range, or
> > +      `Corigine Support <https://www.corigine.com/productsOverviewList-30.html>`_.
> > +      for more information about Corigine's range.
> >  
> >  PF multiport support
> >  --------------------
> > @@ -118,7 +113,7 @@ this particular configuration requires the PMD to create ports in a special way,
> >  although once they are created, DPDK apps should be able to use them as normal
> >  PCI ports.
> >  
> > -NFP ports belonging to same PF can be seen inside PMD initialization with a
> > +NFP ports belonging to the same PF can be seen inside PMD initialization with a
> >  suffix added to the PCI ID: wwww:xx:yy.z_portn. For example, a PF with PCI ID
> >  0000:03:00.0 and four ports is seen by the PMD code as:
> >  
> > @@ -137,50 +132,67 @@ suffix added to the PCI ID: wwww:xx:yy.z_portn. For example, a PF with PCI ID
> >  PF multiprocess support
> >  -----------------------
> >  
> > -Due to how the driver needs to access the NFP through a CPP interface, which implies
> > -to use specific registers inside the chip, the number of secondary processes with PF
> > -ports is limited to only one.
> > +Due to how the driver needs to access the NFP through a CPP interface, which
> > +implies to use specific registers inside the chip, the number of secondary
> > +processes with PF ports is limited to only one.
> >  
> > -This limitation will be solved in future versions but having basic multiprocess support
> > -is important for allowing development and debugging through the PF using a secondary
> > -process which will create a CPP bridge for user space tools accessing the NFP.
> > +This limitation will be solved in future versions, but having basic
> > +multiprocess support is important for allowing development and debugging
> > +through the PF using a secondary process, which will create a CPP bridge
> > +for user space tools accessing the NFP.
> >  
> >  
> >  System configuration
> >  --------------------
> >  
> >  #. **Enable SR-IOV on the NFP device:** The current NFP PMD supports the PF and
> > -   the VFs on a NFP device. However, it is not possible to work with both at the
> > -   same time because the VFs require the PF being bound to the NFP PF Linux
> > -   netdev driver.  Make sure you are working with a kernel with NFP PF support or
> > -   get the drivers from the above Github repository and follow the instructions
> > -   for building and installing it.
> > +   the VFs on a NFP device. However, it is not possible to work with both at
> > +   the same time when using the netdev NFP Linux netdev driver.
> 
> Old and new text doesn't say same thing.
> Old one says: "For DPDK to support VF, PF needs to bound to kernel driver.:
> 
> Is this changed, or just wording mistake?
> 
> 
> >     It is possible
> > +   to bind the PF to the ``vfio-pci`` kernel module, and create VFs afterwards.
> > +   This requires loading the ``vfio-pci`` module with the following parameters:
> > +
> > +   .. code-block:: console
> > +
> > +      modprobe vfio-pci enable_sriov=1 disable_idle_d3=1
> > +
> > +   VFs need to be enabled before they can be used with the PMD. Before enabling
> > +   the VFs it is useful to obtain information about the current NFP PCI device
> > +   detected by the system. This can be done on Netronome SmartNICs using:
> > +
> > +   .. code-block:: console
> > +
> > +      lspci -d 19ee:
> >  
> 
> What I understand is, to support VF by DPDK two things are required:
> 1) Ability to create VFs, this can be done both by using device's kernel
> driver or 'vfio-pci'
> 2) PF driver should support managing VFs.
> 
> Above lines document about item (1) and how 'vfio-pci' is used for it.
> 
> But old documentation mentions about item (2) is missing, why that part
> removed, isn't it valid anymore? I mean is "PF -> kernel, VF -> DPDK"
> combination supported now?
> 
> 
> > -   VFs need to be enabled before they can be used with the PMD.
> > -   Before enabling the VFs it is useful to obtain information about the
> > -   current NFP PCI device detected by the system:
> > +   and on Corigine SmartNICs using:
> >  
> >     .. code-block:: console
> >  
> > -      lspci -d19ee:
> > +      lspci -d 1da8:
> >  
> > -   Now, for example, configure two virtual functions on a NFP-6xxx device
> > +   Now, for example, to configure two virtual functions on a NFP device
> >     whose PCI system identity is "0000:03:00.0":
> >  
> >     .. code-block:: console
> >  
> >        echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
> >  
> > -   The result of this command may be shown using lspci again:
> > +   The result of this command may be shown using lspci again on Netronome
> > +   SmartNICs:
> > +
> > +   .. code-block:: console
> > +
> > +      lspci -d 19ee: -k
> > +
> > +   and on Corigine SmartNICs:
> >  
> >     .. code-block:: console
> >  
> > -      lspci -d19ee: -k
> > +      lspci -d 1da8: -k
> >  
> >     Two new PCI devices should appear in the output of the above command. The
> > -   -k option shows the device driver, if any, that devices are bound to.
> > -   Depending on the modules loaded at this point the new PCI devices may be
> > -   bound to nfp_netvf driver.
> > +   -k option shows the device driver, if any, that the devices are bound to.
> > +   Depending on the modules loaded, at this point the new PCI devices may be
> > +   bound to the ``nfp`` kernel driver or ``vfio-pci``.
> >  
> >  
> >  Flow offload
> > @@ -193,13 +205,13 @@ The flower firmware application requires the PMD running two services:
> >  
> >  	* PF vNIC service: handling the feedback traffic.
> >  	* ctrl vNIC service: communicate between PMD and firmware through
> > -	  control message.
> > +	  control messages.
> >  
> >  To achieve the offload of flow, the representor ports are exposed to OVS.
> > -The flower firmware application support representor port for VF and physical
> > -port. There will always exist a representor port for each physical port,
> > -and the number of the representor port for VF is specified by the user through
> > -parameter.
> > +The flower firmware application supports VF, PF, and physical port representor
> > +ports. 
> 
> Again old document and new one is not saying same thing, is it intentional?
> 
> Old one says: "Having representor ports for both VF and PF is supported."
> 
> New one says: "FW supports representor port, VF and PF."
> 
> > There will always exist a representor port for a PF and each physical
> > +port. The number of the representor ports for VFs are specified by the user
> > +through a parameter.
> >  
> >  In the Rx direction, the flower firmware application will prepend the input
> >  port information into metadata for each packet which can't offloaded. The PF
> > @@ -207,12 +219,12 @@ vNIC service will keep polling packets from the firmware, and multiplex them
> >  to the corresponding representor port.
> >  
> >  In the Tx direction, the representor port will prepend the output port
> > -information into metadata for each packet, and then send it to firmware through
> > -PF vNIC.
> > +information into metadata for each packet, and then send it to the firmware
> > +through the PF vNIC.
> >  
> > -The ctrl vNIC service handling various control message, like the creation and
> > -configuration of representor port, the pattern and action of flow rules, the
> > -statistics of flow rules, and so on.
> > +The ctrl vNIC service handles various control messages, for example, the
> > +creation and configuration of a representor port, the pattern and action of
> > +flow rules, the statistics of flow rules, etc.
> >  
> >  Metadata Format
> >  ---------------
> 

-- 
Kind Regards,
Niklas Söderlund

^ permalink raw reply	[relevance 0%]

* [PATCH v7 21/22] hash: move rte_hash_set_alg out header
  2023-02-15 17:23  3% ` [PATCH v7 00/22] Replace use of static logtypes in libraries Stephen Hemminger
@ 2023-02-15 17:23  3%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-02-15 17:23 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin

The code for setting the hash algorithm is not at all perf sensitive,
and doing it inline has a couple of problems. First, it means that if
multiple files include the header, then the initialization gets done
multiple times. It also makes it harder to fix usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code, therefore both old and new code
will work the same.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build    |  1 +
 lib/hash/rte_hash_crc.c | 63 +++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h | 46 ++----------------------------
 lib/hash/version.map    |  1 +
 4 files changed, 67 insertions(+), 44 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..c59eebccb1eb
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		crc32_alg = CRC32_SSE42;
+	else
+		crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		crc32_alg = CRC32_ARM64;
+#endif
+
+	if (crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e4acd99a0c81 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..a1d81835399c 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
-- 
2.39.1
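
For reference, a minimal sketch of how an application selects the CRC32
implementation after this change (illustrative usage only, not part of the
patch):

#include <stdio.h>
#include <stdint.h>
#include <rte_hash_crc.h>

static void
crc_example(void)
{
	const char data[] = "example payload";
	uint32_t hash;

	/* Request the 64-bit SSE4.2 implementation; the library logs a
	 * warning and falls back if the CPU does not support it. */
	rte_hash_crc_set_alg(CRC32_SSE42_x64);

	hash = rte_hash_crc(data, sizeof(data), 0xFFFFFFFF);
	printf("crc32 hash: 0x%x\n", hash);
}

As the commit message notes, the call behaves the same as before; the only
difference is that rte_hash_crc_set_alg() is now an exported symbol rather
than an inline function in the header.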


^ permalink raw reply	[relevance 3%]

* [PATCH v7 00/22] Replace use of static logtypes in libraries
                     ` (2 preceding siblings ...)
  2023-02-14 22:47  3% ` [PATCH v6 00/22] Replace use of static logtypes in libraries Stephen Hemminger
@ 2023-02-15 17:23  3% ` Stephen Hemminger
  2023-02-15 17:23  3%   ` [PATCH v7 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-20 23:35  3% ` [PATCH v8 00/22] Convert static logtypes in libraries Stephen Hemminger
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-15 17:23 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPE's in DPDK
libraries. It starts with the easy one and goes on to the more complex ones.

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.
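
A minimal sketch of the dynamic-logtype pattern the converted libraries end up
using (the names below are illustrative only, not taken from any specific
patch in this series):

#include <rte_log.h>

/* Register a dynamic logtype once per library, replacing a static
 * RTE_LOGTYPE_* entry in rte_log.h. */
RTE_LOG_REGISTER_DEFAULT(mylib_logtype, INFO);

#define MYLIB_LOG(level, fmt, args...) \
	rte_log(RTE_LOG_ ## level, mylib_logtype, "MYLIB: " fmt, ## args)

static void
mylib_report(void)
{
	MYLIB_LOG(DEBUG, "something happened\n");
}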

v7 - fix commit message typo
     add error to gso_segment function doc
     fix missing cpuflags.h on arm

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  examples/ipsecgw: replace RTE_LOGTYPE_PORT
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++----------
 examples/distributor/main.c       |  2 +-
 examples/ipsec-secgw/sa.c         |  6 +--
 examples/l3fwd-power/main.c       | 15 +++----
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  3 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/gso/rte_gso.h                 |  1 +
 lib/hash/meson.build              |  9 ++++-
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  5 +++
 lib/hash/rte_hash_crc.c           | 66 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 46 +--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 46 +++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 28 +++----------
 lib/hash/version.map              |  5 +++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  3 ++
 lib/mempool/rte_mempool_log.h     |  4 ++
 lib/mempool/rte_mempool_ops.c     |  1 +
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/power/rte_power_empty_poll.c  |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 74 files changed, 378 insertions(+), 169 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/mempool/rte_mempool_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v6 01/22] gso: don't log message on non TCP/UDP
  2023-02-15  7:26  3%     ` Hu, Jiayu
@ 2023-02-15 17:12  0%       ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-02-15 17:12 UTC (permalink / raw)
  To: Hu, Jiayu; +Cc: dev, Konstantin Ananyev, Mark Kavanagh

On Wed, 15 Feb 2023 07:26:22 +0000
"Hu, Jiayu" <jiayu.hu@intel.com> wrote:

> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Wednesday, February 15, 2023 6:47 AM
> > To: dev@dpdk.org
> > Cc: Stephen Hemminger <stephen@networkplumber.org>; Hu, Jiayu
> > <jiayu.hu@intel.com>; Konstantin Ananyev
> > <konstantin.v.ananyev@yandex.ru>; Mark Kavanagh
> > <mark.b.kavanagh@intel.com>
> > Subject: [PATCH v6 01/22] gso: don't log message on non TCP/UDP
> > 
> > If a large packet is passed into GSO routines of unknown protocol then library
> > would log a message.
> > Better to tell the application instead of logging.
> > 
> > Fixes: 119583797b6a ("gso: support TCP/IPv4 GSO")
> > Cc: jiayu.hu@intel.com
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > ---
> >  lib/gso/rte_gso.c | 5 ++---
> >  1 file changed, 2 insertions(+), 3 deletions(-)
> > 
> > diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
> > index 4b59217c16ee..c8e67c2d4b48 100644
> > --- a/lib/gso/rte_gso.c
> > +++ b/lib/gso/rte_gso.c
> > @@ -80,9 +80,8 @@ rte_gso_segment(struct rte_mbuf *pkt,
> >  		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
> >  				indirect_pool, pkts_out, nb_pkts_out);
> >  	} else {
> > -		/* unsupported packet, skip */
> > -		RTE_LOG(DEBUG, GSO, "Unsupported packet type\n");
> > -		ret = 0;
> > +		ret = -ENOTSUP;	/* only UDP or TCP allowed */
> > +  
> 
> The function signature annotation in rte_gso.h also needs an update for ENOTSUP.
> In addition, will it break the ABI?

Not really; if anybody hits this error case, nothing good was
happening anyway.
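
As an illustration (not part of the patch), a caller could handle the new
return value along these lines, assuming it still owns the original mbuf when
an error code is returned:

#include <errno.h>
#include <rte_gso.h>
#include <rte_mbuf.h>

/* 'pkt' is an oversized mbuf, 'gso_ctx' an already initialised GSO context;
 * returns the number of mbufs to transmit from 'segs'. */
static uint16_t
segment_or_passthrough(struct rte_mbuf *pkt, struct rte_gso_ctx *gso_ctx,
		struct rte_mbuf **segs, uint16_t nb_segs)
{
	int ret = rte_gso_segment(pkt, gso_ctx, segs, nb_segs);

	if (ret == -ENOTSUP) {
		/* not TCP/UDP: send the original packet unsegmented */
		segs[0] = pkt;
		return 1;
	}
	if (ret < 0)
		return 0; /* other error: drop or report */
	return ret; /* number of segments produced */
}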

^ permalink raw reply	[relevance 0%]

* Re: [PATCH] doc: update NFP documentation with Corigine information
  @ 2023-02-15 13:37  0% ` Ferruh Yigit
  2023-02-15 17:58  0%   ` Niklas Söderlund
    1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-02-15 13:37 UTC (permalink / raw)
  To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund, Walter Heymans

On 2/3/2023 8:08 AM, Chaoyong He wrote:
> From: Walter Heymans <walter.heymans@corigine.com>
> 
> The NFP PMD documentation is updated to include information about
> Corigine and their new vendor device ID.
> 
> Outdated information regarding the use of the PMD is also updated.
> 
> While making major changes to the document, the maximum number of
> characters per line is updated to 80 characters to improve the
> readability in raw format.
> 

There are three groups of changes made to the documentation, as explained
in the three paragraphs above.

To help review, is it possible to separate this patch into three
patches? Later they can be squashed and merged as a single patch.
But as it is, it is easy to miss content changes among the formatting changes.

(You can include simple grammar updates (those that don't change either
content or Corigine-related information) in the formatting update patch.)


> Signed-off-by: Walter Heymans <walter.heymans@corigine.com>
> Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> ---
>  doc/guides/nics/nfp.rst | 168 +++++++++++++++++++++-------------------
>  1 file changed, 90 insertions(+), 78 deletions(-)
> 
> diff --git a/doc/guides/nics/nfp.rst b/doc/guides/nics/nfp.rst
> index a085d7d9ae..6fea280411 100644
> --- a/doc/guides/nics/nfp.rst
> +++ b/doc/guides/nics/nfp.rst
> @@ -1,35 +1,34 @@
>  ..  SPDX-License-Identifier: BSD-3-Clause
>      Copyright(c) 2015-2017 Netronome Systems, Inc. All rights reserved.
> -    All rights reserved.
> +    Copyright(c) 2021 Corigine, Inc. All rights reserved.
>  
>  NFP poll mode driver library
>  ============================
>  
> -Netronome's sixth generation of flow processors pack 216 programmable
> -cores and over 100 hardware accelerators that uniquely combine packet,
> -flow, security and content processing in a single device that scales
> +Netronome and Corigine's sixth generation of flow processors pack 216
> +programmable cores and over 100 hardware accelerators that uniquely combine
> +packet, flow, security and content processing in a single device that scales
>  up to 400-Gb/s.
>  
> -This document explains how to use DPDK with the Netronome Poll Mode
> -Driver (PMD) supporting Netronome's Network Flow Processor 6xxx
> -(NFP-6xxx), Netronome's Network Flow Processor 4xxx (NFP-4xxx) and
> -Netronome's Network Flow Processor 38xx (NFP-38xx).
> +This document explains how to use DPDK with the Network Flow Processor (NFP)
> +Poll Mode Driver (PMD) supporting Netronome and Corigine's NFP-6xxx, NFP-4xxx
> +and NFP-38xx product lines.
>  
> -NFP is a SRIOV capable device and the PMD supports the physical
> -function (PF) and the virtual functions (VFs).
> +NFP is a SR-IOV capable device and the PMD supports the physical function (PF)
> +and the virtual functions (VFs).
>  
>  Dependencies
>  ------------
>  
> -Before using the Netronome's DPDK PMD some NFP configuration,
> -which is not related to DPDK, is required. The system requires
> -installation of **Netronome's BSP (Board Support Package)** along
> -with a specific NFP firmware application. Netronome's NSP ABI
> -version should be 0.20 or higher.
> +Before using the NFP DPDK PMD some NFP configuration, which is not related to
> +DPDK, is required. The system requires installation of
> +**NFP-BSP (Board Support Package)** along with a specific NFP firmware
> +application. The NSP ABI version should be 0.20 or higher.
>  
> -If you have a NFP device you should already have the code and
> -documentation for this configuration. Contact
> -**support@netronome.com** to obtain the latest available firmware.
> +If you have a NFP device you should already have the documentation to perform
> +this configuration. Contact **support@netronome.com** (for Netronome products)
> +or **smartnic-support@corigine.com** (for Corigine products) to obtain the
> +latest available firmware.
>  
>  The NFP Linux netdev kernel driver for VFs has been a part of the
>  vanilla kernel since kernel version 4.5, and support for the PF
> @@ -44,11 +43,11 @@ Linux kernel driver.
>  Building the software
>  ---------------------
>  
> -Netronome's PMD code is provided in the **drivers/net/nfp** directory.
> -Although NFP PMD has Netronome´s BSP dependencies, it is possible to
> -compile it along with other DPDK PMDs even if no BSP was installed previously.
> -Of course, a DPDK app will require such a BSP installed for using the
> -NFP PMD, along with a specific NFP firmware application.
> +The NFP PMD code is provided in the **drivers/net/nfp** directory. Although
> +NFP PMD has BSP dependencies, it is possible to compile it along with other
> +DPDK PMDs even if no BSP was installed previously. Of course, a DPDK app will
> +require such a BSP installed for using the NFP PMD, along with a specific NFP
> +firmware application.
>  
>  Once the DPDK is built all the DPDK apps and examples include support for
>  the NFP PMD.
> @@ -57,27 +56,20 @@ the NFP PMD.
>  Driver compilation and testing
>  ------------------------------
>  
> -Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> -for details.
> +Refer to the document
> +:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` for details.
>  
>  Using the PF
>  ------------
>  
> -NFP PMD supports using the NFP PF as another DPDK port, but it does not
> -have any functionality for controlling VFs. In fact, it is not possible to use
> -the PMD with the VFs if the PF is being used by DPDK, that is, with the NFP PF
> -bound to ``igb_uio`` or ``vfio-pci`` kernel drivers. Future DPDK versions will
> -have a PMD able to work with the PF and VFs at the same time and with the PF
> -implementing VF management along with other PF-only functionalities/offloads.
> -

Why is this paragraph removed? Is it because it is no longer correct,
or just because of the document reorganization?

>  The PMD PF has extra work to do which will delay the DPDK app initialization
> -like uploading the firmware and configure the Link state properly when starting or
> -stopping a PF port. Since DPDK 18.05 the firmware upload happens when
> +like uploading the firmware and configure the Link state properly when starting
> +or stopping a PF port. Since DPDK 18.05 the firmware upload happens when
>  a PF is initialized, which was not always true with older DPDK versions.
>  
> -Depending on the Netronome product installed in the system, firmware files
> -should be available under ``/lib/firmware/netronome``. DPDK PMD supporting the
> -PF looks for a firmware file in this order:
> +Depending on the product installed in the system, firmware files should be
> +available under ``/lib/firmware/netronome``. DPDK PMD supporting the PF looks
> +for a firmware file in this order:
>  
>  	1) First try to find a firmware image specific for this device using the
>  	   NFP serial number:
> @@ -92,18 +84,21 @@ PF looks for a firmware file in this order:
>  
>  		nic_AMDA0099-0001_2x25.nffw
>  
> -Netronome's software packages install firmware files under ``/lib/firmware/netronome``
> -to support all the Netronome's SmartNICs and different firmware applications.
> -This is usually done using file names based on SmartNIC type and media and with a
> -directory per firmware application. Options 1 and 2 for firmware filenames allow
> -more than one SmartNIC, same type of SmartNIC or different ones, and to upload a
> -different firmware to each SmartNIC.
> +Netronome and Corigine's software packages install firmware files under
> +``/lib/firmware/netronome`` to support all the SmartNICs and different firmware
> +applications. This is usually done using file names based on SmartNIC type and
> +media and with a directory per firmware application. Options 1 and 2 for
> +firmware filenames allow more than one SmartNIC, same type of SmartNIC or
> +different ones, and to upload a different firmware to each SmartNIC.
>  
>     .. Note::
> -      Currently the NFP PMD supports using the PF with Agilio Firmware with NFD3
> -      and Agilio Firmware with NFDk. See https://help.netronome.com/support/solutions
> +      Currently the NFP PMD supports using the PF with Agilio Firmware with
> +      NFD3 and Agilio Firmware with NFDk. See
> +      `Netronome Support <https://help.netronome.com/support/solutions>`_.
>        for more information on the various firmwares supported by the Netronome
> -      Agilio CX smartNIC.
> +      Agilio SmartNICs range, or
> +      `Corigine Support <https://www.corigine.com/productsOverviewList-30.html>`_.
> +      for more information about Corigine's range.
>  
>  PF multiport support
>  --------------------
> @@ -118,7 +113,7 @@ this particular configuration requires the PMD to create ports in a special way,
>  although once they are created, DPDK apps should be able to use them as normal
>  PCI ports.
>  
> -NFP ports belonging to same PF can be seen inside PMD initialization with a
> +NFP ports belonging to the same PF can be seen inside PMD initialization with a
>  suffix added to the PCI ID: wwww:xx:yy.z_portn. For example, a PF with PCI ID
>  0000:03:00.0 and four ports is seen by the PMD code as:
>  
> @@ -137,50 +132,67 @@ suffix added to the PCI ID: wwww:xx:yy.z_portn. For example, a PF with PCI ID
>  PF multiprocess support
>  -----------------------
>  
> -Due to how the driver needs to access the NFP through a CPP interface, which implies
> -to use specific registers inside the chip, the number of secondary processes with PF
> -ports is limited to only one.
> +Due to how the driver needs to access the NFP through a CPP interface, which
> +implies to use specific registers inside the chip, the number of secondary
> +processes with PF ports is limited to only one.
>  
> -This limitation will be solved in future versions but having basic multiprocess support
> -is important for allowing development and debugging through the PF using a secondary
> -process which will create a CPP bridge for user space tools accessing the NFP.
> +This limitation will be solved in future versions, but having basic
> +multiprocess support is important for allowing development and debugging
> +through the PF using a secondary process, which will create a CPP bridge
> +for user space tools accessing the NFP.
>  
>  
>  System configuration
>  --------------------
>  
>  #. **Enable SR-IOV on the NFP device:** The current NFP PMD supports the PF and
> -   the VFs on a NFP device. However, it is not possible to work with both at the
> -   same time because the VFs require the PF being bound to the NFP PF Linux
> -   netdev driver.  Make sure you are working with a kernel with NFP PF support or
> -   get the drivers from the above Github repository and follow the instructions
> -   for building and installing it.
> +   the VFs on a NFP device. However, it is not possible to work with both at
> +   the same time when using the netdev NFP Linux netdev driver.

The old and new text don't say the same thing.
The old one says: "For DPDK to support VFs, the PF needs to be bound to the kernel driver."

Is this changed, or just a wording mistake?


>     It is possible
> +   to bind the PF to the ``vfio-pci`` kernel module, and create VFs afterwards.
> +   This requires loading the ``vfio-pci`` module with the following parameters:
> +
> +   .. code-block:: console
> +
> +      modprobe vfio-pci enable_sriov=1 disable_idle_d3=1
> +
> +   VFs need to be enabled before they can be used with the PMD. Before enabling
> +   the VFs it is useful to obtain information about the current NFP PCI device
> +   detected by the system. This can be done on Netronome SmartNICs using:
> +
> +   .. code-block:: console
> +
> +      lspci -d 19ee:
>  

What I understand is that to support VFs with DPDK, two things are required:
1) The ability to create VFs; this can be done either with the device's kernel
driver or with 'vfio-pci'.
2) The PF driver should support managing VFs.

The lines above document item (1) and how 'vfio-pci' is used for it.

But the old documentation's mention of item (2) is now missing. Why was that
part removed, isn't it valid anymore? I mean, is the "PF -> kernel, VF -> DPDK"
combination supported now?


> -   VFs need to be enabled before they can be used with the PMD.
> -   Before enabling the VFs it is useful to obtain information about the
> -   current NFP PCI device detected by the system:
> +   and on Corigine SmartNICs using:
>  
>     .. code-block:: console
>  
> -      lspci -d19ee:
> +      lspci -d 1da8:
>  
> -   Now, for example, configure two virtual functions on a NFP-6xxx device
> +   Now, for example, to configure two virtual functions on a NFP device
>     whose PCI system identity is "0000:03:00.0":
>  
>     .. code-block:: console
>  
>        echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
>  
> -   The result of this command may be shown using lspci again:
> +   The result of this command may be shown using lspci again on Netronome
> +   SmartNICs:
> +
> +   .. code-block:: console
> +
> +      lspci -d 19ee: -k
> +
> +   and on Corigine SmartNICs:
>  
>     .. code-block:: console
>  
> -      lspci -d19ee: -k
> +      lspci -d 1da8: -k
>  
>     Two new PCI devices should appear in the output of the above command. The
> -   -k option shows the device driver, if any, that devices are bound to.
> -   Depending on the modules loaded at this point the new PCI devices may be
> -   bound to nfp_netvf driver.
> +   -k option shows the device driver, if any, that the devices are bound to.
> +   Depending on the modules loaded, at this point the new PCI devices may be
> +   bound to the ``nfp`` kernel driver or ``vfio-pci``.
>  
>  
>  Flow offload
> @@ -193,13 +205,13 @@ The flower firmware application requires the PMD running two services:
>  
>  	* PF vNIC service: handling the feedback traffic.
>  	* ctrl vNIC service: communicate between PMD and firmware through
> -	  control message.
> +	  control messages.
>  
>  To achieve the offload of flow, the representor ports are exposed to OVS.
> -The flower firmware application support representor port for VF and physical
> -port. There will always exist a representor port for each physical port,
> -and the number of the representor port for VF is specified by the user through
> -parameter.
> +The flower firmware application supports VF, PF, and physical port representor
> +ports. 

Again the old document and the new one do not say the same thing; is it intentional?

Old one says: "Having representor ports for both VF and PF is supported."

New one says: "FW supports representor port, VF and PF."

> There will always exist a representor port for a PF and each physical
> +port. The number of the representor ports for VFs are specified by the user
> +through a parameter.
>  
>  In the Rx direction, the flower firmware application will prepend the input
>  port information into metadata for each packet which can't offloaded. The PF
> @@ -207,12 +219,12 @@ vNIC service will keep polling packets from the firmware, and multiplex them
>  to the corresponding representor port.
>  
>  In the Tx direction, the representor port will prepend the output port
> -information into metadata for each packet, and then send it to firmware through
> -PF vNIC.
> +information into metadata for each packet, and then send it to the firmware
> +through the PF vNIC.
>  
> -The ctrl vNIC service handling various control message, like the creation and
> -configuration of representor port, the pattern and action of flow rules, the
> -statistics of flow rules, and so on.
> +The ctrl vNIC service handles various control messages, for example, the
> +creation and configuration of a representor port, the pattern and action of
> +flow rules, the statistics of flow rules, etc.
>  
>  Metadata Format
>  ---------------


^ permalink raw reply	[relevance 0%]

* Re: [PATCH v2 6/6] test/dmadev: add tests for stopping and restarting dev
  2023-02-15  1:59  3%     ` fengchengwen
@ 2023-02-15 11:57  3%       ` Bruce Richardson
  2023-02-16  1:24  0%         ` fengchengwen
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-02-15 11:57 UTC (permalink / raw)
  To: fengchengwen; +Cc: dev, Kevin Laatz

On Wed, Feb 15, 2023 at 09:59:06AM +0800, fengchengwen wrote:
> On 2023/1/17 1:37, Bruce Richardson wrote:
> > Validate device operation when a device is stopped or restarted.
> > 
> > The only complication - and gap in the dmadev ABI specification - is
> > what happens to the job ids on restart. Some drivers reset them to 0,
> > while others continue where things left off. Take account of both
> > possibilities in the test case.
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  app/test/test_dmadev.c | 46 ++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 46 insertions(+)
> > 
> > diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> > index de787c14e2..8fb73a41e2 100644
> > --- a/app/test/test_dmadev.c
> > +++ b/app/test/test_dmadev.c
> > @@ -304,6 +304,48 @@ test_enqueue_copies(int16_t dev_id, uint16_t vchan)
> >  			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
> >  }
> >  
> > +static int
> > +test_stop_start(int16_t dev_id, uint16_t vchan)
> > +{
> > +	/* device is already started on input, should be (re)started on output */
> > +
> > +	uint16_t id = 0;
> > +	enum rte_dma_status_code status = RTE_DMA_STATUS_SUCCESSFUL;
> > +
> > +	/* - test stopping a device works ok,
> > +	 * - then do a start-stop without doing a copy
> > +	 * - finally restart the device
> > +	 * checking for errors at each stage, and validating we can still copy at the end.
> > +	 */
> > +	if (rte_dma_stop(dev_id) < 0)
> > +		ERR_RETURN("Error stopping device\n");
> > +
> > +	if (rte_dma_start(dev_id) < 0)
> > +		ERR_RETURN("Error restarting device\n");
> > +	if (rte_dma_stop(dev_id) < 0)
> > +		ERR_RETURN("Error stopping device after restart (no jobs executed)\n");
> > +
> > +	if (rte_dma_start(dev_id) < 0)
> > +		ERR_RETURN("Error restarting device after multiple stop-starts\n");
> > +
> > +	/* before doing a copy, we need to know what the next id will be it should
> > +	 * either be:
> > +	 * - the last completed job before start if driver does not reset id on stop
> > +	 * - or -1 i.e. next job is 0, if driver does reset the job ids on stop
> > +	 */
> > +	if (rte_dma_completed_status(dev_id, vchan, 1, &id, &status) != 0)
> > +		ERR_RETURN("Error with rte_dma_completed_status when no job done\n");
> > +	id += 1; /* id_count is next job id */
> > +	if (id != id_count && id != 0)
> > +		ERR_RETURN("Unexpected next id from device after stop-start. Got %u, expected %u or 0\n",
> > +				id, id_count);
> 
> Hi Bruce,
> 
> Suggest adding a warning log to identify that the id was not reset to zero, so
> that a new driver could detect that it breaks the ABI specification.
> 
Not sure that is necessary. Neither the actual ABI nor the doxygen docs
specify what happens to the values on doing a stop and then a start. My
thinking was that it should continue numbering as it would be equivalent to
suspend and resume, but other drivers appear to treat it as a "reset". I
suspect there are advantages and disadvantages to both schemes. Until we
decide on what the correct behaviour should be - or decide to allow both -
I don't think warning is the right thing to do here.
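
For application code, a portable way to cope with either scheme is to do the
same probe the quoted test does after restarting; a sketch only, mirroring
that test rather than anything the spec currently guarantees:

#include <rte_dmadev.h>

/* Restart a dmadev and report the next job id, without assuming whether the
 * driver resets ids to 0 or continues where it left off. */
static int
restart_and_probe_next_id(int16_t dev_id, uint16_t vchan, uint16_t *next_id)
{
	uint16_t last_id = 0;
	enum rte_dma_status_code status;

	if (rte_dma_stop(dev_id) < 0 || rte_dma_start(dev_id) < 0)
		return -1;

	/* with nothing outstanding this reports 0 completions but still
	 * fills in the last completed id, which the test above relies on */
	if (rte_dma_completed_status(dev_id, vchan, 1, &last_id, &status) != 0)
		return -1;

	*next_id = (uint16_t)(last_id + 1); /* 0 if the driver reset, old count otherwise */
	return 0;
}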

/Bruce

^ permalink raw reply	[relevance 3%]

* RE: [PATCH v6 01/22] gso: don't log message on non TCP/UDP
  @ 2023-02-15  7:26  3%     ` Hu, Jiayu
  2023-02-15 17:12  0%       ` Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Hu, Jiayu @ 2023-02-15  7:26 UTC (permalink / raw)
  To: Stephen Hemminger, dev; +Cc: Konstantin Ananyev, Mark Kavanagh



> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, February 15, 2023 6:47 AM
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Hu, Jiayu
> <jiayu.hu@intel.com>; Konstantin Ananyev
> <konstantin.v.ananyev@yandex.ru>; Mark Kavanagh
> <mark.b.kavanagh@intel.com>
> Subject: [PATCH v6 01/22] gso: don't log message on non TCP/UDP
> 
> If a large packet is passed into GSO routines of unknown protocol then library
> would log a message.
> Better to tell the application instead of logging.
> 
> Fixes: 119583797b6a ("gso: support TCP/IPv4 GSO")
> Cc: jiayu.hu@intel.com
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  lib/gso/rte_gso.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
> index 4b59217c16ee..c8e67c2d4b48 100644
> --- a/lib/gso/rte_gso.c
> +++ b/lib/gso/rte_gso.c
> @@ -80,9 +80,8 @@ rte_gso_segment(struct rte_mbuf *pkt,
>  		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
>  				indirect_pool, pkts_out, nb_pkts_out);
>  	} else {
> -		/* unsupported packet, skip */
> -		RTE_LOG(DEBUG, GSO, "Unsupported packet type\n");
> -		ret = 0;
> +		ret = -ENOTSUP;	/* only UDP or TCP allowed */
> +

The function signature annotation in rte_gso.h also needs an update for ENOTSUP.
In addition, will it break the ABI?

Thanks,
Jiayu
>  	}
> 
>  	if (ret < 0) {
> --
> 2.39.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v2 6/6] test/dmadev: add tests for stopping and restarting dev
    2023-02-14 16:04  0%     ` Kevin Laatz
@ 2023-02-15  1:59  3%     ` fengchengwen
  2023-02-15 11:57  3%       ` Bruce Richardson
  1 sibling, 1 reply; 200+ results
From: fengchengwen @ 2023-02-15  1:59 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: Kevin Laatz

On 2023/1/17 1:37, Bruce Richardson wrote:
> Validate device operation when a device is stopped or restarted.
> 
> The only complication - and gap in the dmadev ABI specification - is
> what happens to the job ids on restart. Some drivers reset them to 0,
> while others continue where things left off. Take account of both
> possibilities in the test case.
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  app/test/test_dmadev.c | 46 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 46 insertions(+)
> 
> diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> index de787c14e2..8fb73a41e2 100644
> --- a/app/test/test_dmadev.c
> +++ b/app/test/test_dmadev.c
> @@ -304,6 +304,48 @@ test_enqueue_copies(int16_t dev_id, uint16_t vchan)
>  			|| do_multi_copies(dev_id, vchan, 0, 0, 1);
>  }
>  
> +static int
> +test_stop_start(int16_t dev_id, uint16_t vchan)
> +{
> +	/* device is already started on input, should be (re)started on output */
> +
> +	uint16_t id = 0;
> +	enum rte_dma_status_code status = RTE_DMA_STATUS_SUCCESSFUL;
> +
> +	/* - test stopping a device works ok,
> +	 * - then do a start-stop without doing a copy
> +	 * - finally restart the device
> +	 * checking for errors at each stage, and validating we can still copy at the end.
> +	 */
> +	if (rte_dma_stop(dev_id) < 0)
> +		ERR_RETURN("Error stopping device\n");
> +
> +	if (rte_dma_start(dev_id) < 0)
> +		ERR_RETURN("Error restarting device\n");
> +	if (rte_dma_stop(dev_id) < 0)
> +		ERR_RETURN("Error stopping device after restart (no jobs executed)\n");
> +
> +	if (rte_dma_start(dev_id) < 0)
> +		ERR_RETURN("Error restarting device after multiple stop-starts\n");
> +
> +	/* before doing a copy, we need to know what the next id will be it should
> +	 * either be:
> +	 * - the last completed job before start if driver does not reset id on stop
> +	 * - or -1 i.e. next job is 0, if driver does reset the job ids on stop
> +	 */
> +	if (rte_dma_completed_status(dev_id, vchan, 1, &id, &status) != 0)
> +		ERR_RETURN("Error with rte_dma_completed_status when no job done\n");
> +	id += 1; /* id_count is next job id */
> +	if (id != id_count && id != 0)
> +		ERR_RETURN("Unexpected next id from device after stop-start. Got %u, expected %u or 0\n",
> +				id, id_count);

Hi Bruce,

Suggest adding a warning log to identify that the id was not reset to zero,
so that a new driver could detect that it breaks the ABI specification.

Thanks.


^ permalink raw reply	[relevance 3%]

* [PATCH v6 21/22] hash: move rte_hash_set_alg out header
  2023-02-14 22:47  3% ` [PATCH v6 00/22] Replace use of static logtypes in libraries Stephen Hemminger
  @ 2023-02-14 22:47  3%   ` Stephen Hemminger
  1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-02-14 22:47 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin

The code for setting algorithm for hash is not at all perf sensitive,
and doing it inline has a couple of problems. First, it means that if
multiple files include the header, then the initialization gets done
multiple times. But also, it makes it harder to fix usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code; therefore both old and new code
will work the same.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build    |  1 +
 lib/hash/rte_hash_crc.c | 63 +++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h | 46 ++----------------------------
 lib/hash/version.map    |  1 +
 4 files changed, 67 insertions(+), 44 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..c59eebccb1eb
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		crc32_alg = CRC32_SSE42;
+	else
+		crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		crc32_alg = CRC32_ARM64;
+#endif
+
+	if (crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e4acd99a0c81 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..a1d81835399c 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* [PATCH v6 00/22] Replace use of static logtypes in libraries
    2023-02-13 19:55  3% ` [PATCH v4 00/19] Replace use of static logtypes Stephen Hemminger
  2023-02-14  2:18  3% ` [PATCH v5 00/22] Replace us of static logtypes Stephen Hemminger
@ 2023-02-14 22:47  3% ` Stephen Hemminger
    2023-02-14 22:47  3%   ` [PATCH v6 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-15 17:23  3% ` [PATCH v7 00/22] Replace use of static logtypes in libraries Stephen Hemminger
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 200+ results
From: Stephen Hemminger @ 2023-02-14 22:47 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPE's in DPDK
libraries. It starts with the easy one and goes on to the more complex ones.

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.

v6 - fix typo in kni port 

v5 - fix use of LOGTYPE PORT and POWER in examples

v4 - use simpler/shorter method for setting local LOGTYPE
     split up steps of some of the changes

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  examples/ipsecgw: replace RTE_LOGTYPE_PORT
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++----------
 examples/distributor/main.c       |  2 +-
 examples/ipsec-secgw/sa.c         |  6 +--
 examples/l3fwd-power/main.c       | 15 +++----
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  3 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/hash/meson.build              |  9 ++++-
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  3 ++
 lib/hash/rte_hash_crc.c           | 66 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 46 +--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 46 +++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 28 +++----------
 lib/hash/version.map              |  5 +++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  3 ++
 lib/mempool/rte_mempool_log.h     |  4 ++
 lib/mempool/rte_mempool_ops.c     |  1 +
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/power/rte_power_empty_poll.c  |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 73 files changed, 375 insertions(+), 169 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/mempool/rte_mempool_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH v2 6/6] test/dmadev: add tests for stopping and restarting dev
  @ 2023-02-14 16:04  0%     ` Kevin Laatz
  2023-02-15  1:59  3%     ` fengchengwen
  1 sibling, 0 replies; 200+ results
From: Kevin Laatz @ 2023-02-14 16:04 UTC (permalink / raw)
  To: Bruce Richardson, dev; +Cc: Chengwen Feng

On 16/01/2023 17:37, Bruce Richardson wrote:
> Validate device operation when a device is stopped or restarted.
>
> The only complication - and gap in the dmadev ABI specification - is
> what happens to the job ids on restart. Some drivers reset them to 0,
> while others continue where things left off. Take account of both
> possibilities in the test case.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   app/test/test_dmadev.c | 46 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 46 insertions(+)
>
Acked-by: Kevin Laatz <kevin.laatz@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue API
  2023-02-14  9:38  0%       ` Jiawei(Jonny) Wang
@ 2023-02-14 10:01  0%         ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-02-14 10:01 UTC (permalink / raw)
  To: Jiawei(Jonny) Wang, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	andrew.rybchenko, Aman Singh, Yuying Zhang
  Cc: dev, Raslan Darawsheh

On 2/14/2023 9:38 AM, Jiawei(Jonny) Wang wrote:
> Hi,
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Friday, February 10, 2023 3:45 AM
>> To: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>; Slava Ovsiienko
>> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
>> Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
>> andrew.rybchenko@oktetlabs.ru; Aman Singh <aman.deep.singh@intel.com>;
>> Yuying Zhang <yuying.zhang@intel.com>
>> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
>> Subject: Re: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue
>> API
>>
>> On 2/3/2023 1:33 PM, Jiawei Wang wrote:
>>> When multiple physical ports are connected to a single DPDK port,
>>> (example: kernel bonding, DPDK bonding, failsafe, etc.), we want to
>>> know which physical port is used for Rx and Tx.
>>>
>>
>> I assume "kernel bonding" is out of context, but this patch concerns DPDK
>> bonding, failsafe or softnic. (I will refer to them as virtual bonding
>> devices.)
>>
>> Using specific queues of the virtual bonding device may interfere with the
>> logic of these devices, like bonding modes or RSS of the underlying devices. I
>> can see the feature focuses on a very specific use case, but I am not sure all
>> possible side effects have been taken into consideration.
>>
>>
>> And although the feature is only relevant to virtual bonding devices, core
>> ethdev structures are updated for this. Most use cases won't need these, so is
>> there a way to reduce the scope of the changes to virtual bonding devices?
>>
>>
>> There are a few very core ethdev APIs, like:
>> rte_eth_dev_configure()
>> rte_eth_tx_queue_setup()
>> rte_eth_rx_queue_setup()
>> rte_eth_dev_start()
>> rte_eth_dev_info_get()
>>
>> Almost every user of ethdev uses these APIs; since these are so fundamental I
>> am for being a little more conservative on these APIs.
>>
>> Every eccentric feature targets these APIs first, because they are
>> common and extending them gives an easy solution, but in the long run it makes
>> these APIs more complex, harder to maintain and harder for PMDs to support
>> them correctly. So I am for not updating them unless it is a generic use case.
>>
>>
>> Also, as we talked about PMDs supporting them, I assume your upcoming PMD
>> patch will be implementing 'tx_phy_affinity' config option only for mlx drivers.
>> What will happen for other NICs? Will they silently ignore the config option
>> from the user? So this is a problem for DPDK application portability.
>>
>>
>>
>> As far as I understand, the target is the application controlling which sub-device
>> is used under the virtual bonding device. Can you please give more information on why
>> this is required, perhaps it can help to provide a better/different solution.
>> Like adding the ability to use both bonding device and sub-device for data path,
>> this way application can use whichever it wants. (this is just first solution I
>> come with, I am not suggesting as replacement solution, but if you can describe
>> the problem more I am sure other people can come with better solutions.)
>>
>> And isn't this against the application being transparent to whether the underlying
>> device is a bonding device or an actual device?
>>
>>
> 
> OK, I will send a new version with separate functions in the ethdev layer,
> to support mapping a Tx queue to a port and getting the number of ports.
> These functions work with device ops callbacks; other NICs will report
> unsupported when the ops callback is NULL.
> 

OK, thanks Jonny, at least this separates the feature into its own APIs,
which reduces the impact for applications and drivers that are not using
this feature.
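
For completeness, a minimal sketch of how an application would use the v4 API
as quoted below, i.e. the tx_phy_affinity field and nb_phy_ports (illustrative
only; these fields exist only with this patch applied, and the next revision
is expected to expose separate functions instead):

#include <errno.h>
#include <rte_ethdev.h>

/* Pin Tx queue 'qid' of 'port_id' to physical port 'affinity' (1-based);
 * 0 keeps the default "any connected port" behaviour. */
static int
setup_txq_with_affinity(uint16_t port_id, uint16_t qid, uint8_t affinity,
		uint16_t nb_desc)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	if (affinity > dev_info.nb_phy_ports)
		return -EINVAL;

	txconf = dev_info.default_txconf;
	txconf.tx_phy_affinity = affinity;

	return rte_eth_tx_queue_setup(port_id, qid, nb_desc,
			rte_eth_dev_socket_id(port_id), &txconf);
}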


>>> This patch maps a DPDK Tx queue with a physical port, by adding
>>> tx_phy_affinity setting in Tx queue.
>>> The affinity number is the physical port ID where packets will be
>>> sent.
>>> Value 0 means no affinity and traffic could be routed to any connected
>>> physical ports, this is the default current behavior.
>>>
>>> The number of physical ports is reported with rte_eth_dev_info_get().
>>>
>>> The new tx_phy_affinity field is added into the padding hole of
>>> rte_eth_txconf structure, the size of rte_eth_txconf keeps the same.
>>> An ABI check rule needs to be added to avoid false warning.
>>>
>>> Add the testpmd command line:
>>> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
>>>
>>> For example, there're two physical ports connected to a single DPDK
>>> port (port id 0), and phy_affinity 1 stood for the first physical port
>>> and phy_affinity 2 stood for the second physical port.
>>> Use the below commands to config tx phy affinity for per Tx Queue:
>>>         port config 0 txq 0 phy_affinity 1
>>>         port config 0 txq 1 phy_affinity 1
>>>         port config 0 txq 2 phy_affinity 2
>>>         port config 0 txq 3 phy_affinity 2
>>>
>>> These commands config the Tx Queue index 0 and Tx Queue index 1 with
>>> phy affinity 1, uses Tx Queue 0 or Tx Queue 1 send packets, these
>>> packets will be sent from the first physical port, and similar with
>>> the second physical port if sending packets with Tx Queue 2 or Tx
>>> Queue 3.
>>>
>>> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
>>> ---
>>>  app/test-pmd/cmdline.c                      | 100 ++++++++++++++++++++
>>>  app/test-pmd/config.c                       |   1 +
>>>  devtools/libabigail.abignore                |   5 +
>>>  doc/guides/rel_notes/release_23_03.rst      |   4 +
>>>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
>>>  lib/ethdev/rte_ethdev.h                     |  10 ++
>>>  6 files changed, 133 insertions(+)
>>>
>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
>>> cb8c174020..f771fcf8ac 100644
>>> --- a/app/test-pmd/cmdline.c
>>> +++ b/app/test-pmd/cmdline.c
>>> @@ -776,6 +776,10 @@ static void cmd_help_long_parsed(void
>>> *parsed_result,
>>>
>>>  			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
>>>  			"    Cleanup txq mbufs for a specific Tx queue\n\n"
>>> +
>>> +			"port config (port_id) txq (queue_id) phy_affinity
>> (value)\n"
>>> +			"    Set the physical affinity value "
>>> +			"on a specific Tx queue\n\n"
>>>  		);
>>>  	}
>>>
>>> @@ -12633,6 +12637,101 @@ static cmdline_parse_inst_t
>> cmd_show_port_flow_transfer_proxy = {
>>>  	}
>>>  };
>>>
>>> +/* *** configure port txq phy_affinity value *** */ struct
>>> +cmd_config_tx_phy_affinity {
>>> +	cmdline_fixed_string_t port;
>>> +	cmdline_fixed_string_t config;
>>> +	portid_t portid;
>>> +	cmdline_fixed_string_t txq;
>>> +	uint16_t qid;
>>> +	cmdline_fixed_string_t phy_affinity;
>>> +	uint8_t value;
>>> +};
>>> +
>>> +static void
>>> +cmd_config_tx_phy_affinity_parsed(void *parsed_result,
>>> +				  __rte_unused struct cmdline *cl,
>>> +				  __rte_unused void *data)
>>> +{
>>> +	struct cmd_config_tx_phy_affinity *res = parsed_result;
>>> +	struct rte_eth_dev_info dev_info;
>>> +	struct rte_port *port;
>>> +	int ret;
>>> +
>>> +	if (port_id_is_invalid(res->portid, ENABLED_WARN))
>>> +		return;
>>> +
>>> +	if (res->portid == (portid_t)RTE_PORT_ALL) {
>>> +		printf("Invalid port id\n");
>>> +		return;
>>> +	}
>>> +
>>> +	port = &ports[res->portid];
>>> +
>>> +	if (strcmp(res->txq, "txq")) {
>>> +		printf("Unknown parameter\n");
>>> +		return;
>>> +	}
>>> +	if (tx_queue_id_is_invalid(res->qid))
>>> +		return;
>>> +
>>> +	ret = eth_dev_info_get_print_err(res->portid, &dev_info);
>>> +	if (ret != 0)
>>> +		return;
>>> +
>>> +	if (dev_info.nb_phy_ports == 0) {
>>> +		printf("Number of physical ports is 0 which is invalid for PHY
>> Affinity\n");
>>> +		return;
>>> +	}
>>> +	printf("The number of physical ports is %u\n", dev_info.nb_phy_ports);
>>> +	if (dev_info.nb_phy_ports < res->value) {
>>> +		printf("The PHY affinity value %u is Invalid, exceeds the "
>>> +		       "number of physical ports\n", res->value);
>>> +		return;
>>> +	}
>>> +	port->txq[res->qid].conf.tx_phy_affinity = res->value;
>>> +
>>> +	cmd_reconfig_device_queue(res->portid, 0, 1); }
>>> +
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_port =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 port, "port");
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_config =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 config, "config");
>>> +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_portid =
>>> +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 portid, RTE_UINT16);
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_txq =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 txq, "txq");
>>> +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_qid =
>>> +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +			      qid, RTE_UINT16);
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_hwport =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 phy_affinity, "phy_affinity");
>>> +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_value =
>>> +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +			      value, RTE_UINT8);
>>> +
>>> +static cmdline_parse_inst_t cmd_config_tx_phy_affinity = {
>>> +	.f = cmd_config_tx_phy_affinity_parsed,
>>> +	.data = (void *)0,
>>> +	.help_str = "port config <port_id> txq <queue_id> phy_affinity <value>",
>>> +	.tokens = {
>>> +		(void *)&cmd_config_tx_phy_affinity_port,
>>> +		(void *)&cmd_config_tx_phy_affinity_config,
>>> +		(void *)&cmd_config_tx_phy_affinity_portid,
>>> +		(void *)&cmd_config_tx_phy_affinity_txq,
>>> +		(void *)&cmd_config_tx_phy_affinity_qid,
>>> +		(void *)&cmd_config_tx_phy_affinity_hwport,
>>> +		(void *)&cmd_config_tx_phy_affinity_value,
>>> +		NULL,
>>> +	},
>>> +};
>>> +
>>>  /*
>>>
>> ****************************************************************
>> ******
>>> ********** */
>>>
>>>  /* list of instructions */
>>> @@ -12866,6 +12965,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
>>>  	(cmdline_parse_inst_t *)&cmd_show_port_cman_capa,
>>>  	(cmdline_parse_inst_t *)&cmd_show_port_cman_config,
>>>  	(cmdline_parse_inst_t *)&cmd_set_port_cman_config,
>>> +	(cmdline_parse_inst_t *)&cmd_config_tx_phy_affinity,
>>>  	NULL,
>>>  };
>>>
>>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
>>> acccb6b035..b83fb17cfa 100644
>>> --- a/app/test-pmd/config.c
>>> +++ b/app/test-pmd/config.c
>>> @@ -936,6 +936,7 @@ port_infos_display(portid_t port_id)
>>>  		printf("unknown\n");
>>>  		break;
>>>  	}
>>> +	printf("Current number of physical ports: %u\n",
>>> +dev_info.nb_phy_ports);
>>>  }
>>>
>>>  void
>>> diff --git a/devtools/libabigail.abignore
>>> b/devtools/libabigail.abignore index 7a93de3ba1..ac7d3fb2da 100644
>>> --- a/devtools/libabigail.abignore
>>> +++ b/devtools/libabigail.abignore
>>> @@ -34,3 +34,8 @@
>>>  ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>>>  ; Temporary exceptions till next major ABI version ;
>>> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>>> +
>>> +; Ignore fields inserted in padding hole of rte_eth_txconf
>>> +[suppress_type]
>>> +        name = rte_eth_txconf
>>> +        has_data_member_inserted_between = {offset_of(tx_deferred_start), offset_of(offloads)}
>>> diff --git a/doc/guides/rel_notes/release_23_03.rst
>>> b/doc/guides/rel_notes/release_23_03.rst
>>> index 73f5d94e14..e99bd2dcb6 100644
>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>> @@ -55,6 +55,10 @@ New Features
>>>       Also, make sure to start the actual text at the margin.
>>>       =======================================================
>>>
>>> +* **Added affinity for multiple physical ports connected to a single DPDK port.**
>>> +
>>> +  * Added Tx affinity in queue setup to map a physical port.
>>> +
>>>  * **Updated AMD axgbe driver.**
>>>
>>>    * Added multi-process support.
>>> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>>> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>>> index 79a1fa9cb7..5c716f7679 100644
>>> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>>> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>>> @@ -1605,6 +1605,19 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
>>>
>>>  This command should be run when the port is stopped, or else it will fail.
>>>
>>> +config per queue Tx physical affinity
>>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Configure a per queue physical affinity value only on a specific Tx queue::
>>> +
>>> +   testpmd> port (port_id) txq (queue_id) phy_affinity (value)
>>> +
>>> +* ``phy_affinity``: physical port to use for sending,
>>> +                    when multiple physical ports are connected to
>>> +                    a single DPDK port.
>>> +
>>> +This command should be run when the port is stopped, otherwise it fails.
>>> +
>>>  Config VXLAN Encap outer layers
>>>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
>>> c129ca1eaf..2fd971b7b5 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -1138,6 +1138,14 @@ struct rte_eth_txconf {
>>>  				      less free descriptors than this value. */
>>>
>>>  	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
>>> +	/**
>>> +	 * Affinity with one of the multiple physical ports connected to the DPDK port.
>>> +	 * Value 0 means no affinity and traffic could be routed to any connected
>>> +	 * physical port.
>>> +	 * The first physical port is number 1 and so on.
>>> +	 * Number of physical ports is reported by nb_phy_ports in rte_eth_dev_info.
>>> +	 */
>>> +	uint8_t tx_phy_affinity;
>>>  	/**
>>>  	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
>>>  	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
>>> @@ -1744,6 +1752,8 @@ struct rte_eth_dev_info {
>>>  	/** Device redirection table size, the total number of entries. */
>>>  	uint16_t reta_size;
>>>  	uint8_t hash_key_size; /**< Hash key size in bytes */
>>> +	/** Number of physical ports connected with DPDK port. */
>>> +	uint8_t nb_phy_ports;
>>>  	/** Bit mask of RSS offloads, the bit offset also means flow type */
>>>  	uint64_t flow_type_rss_offloads;
>>>  	struct rte_eth_rxconf default_rxconf; /**< Default Rx configuration */
> 


^ permalink raw reply	[relevance 0%]

* RE: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue API
  @ 2023-02-14  9:38  0%       ` Jiawei(Jonny) Wang
  2023-02-14 10:01  0%         ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Jiawei(Jonny) Wang @ 2023-02-14  9:38 UTC (permalink / raw)
  To: Ferruh Yigit, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	andrew.rybchenko, Aman Singh, Yuying Zhang
  Cc: dev, Raslan Darawsheh

Hi,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Friday, February 10, 2023 3:45 AM
> To: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
> andrew.rybchenko@oktetlabs.ru; Aman Singh <aman.deep.singh@intel.com>;
> Yuying Zhang <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: Re: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue
> API
> 
> On 2/3/2023 1:33 PM, Jiawei Wang wrote:
> > When multiple physical ports are connected to a single DPDK port,
> > (example: kernel bonding, DPDK bonding, failsafe, etc.), we want to
> > know which physical port is used for Rx and Tx.
> >
> 
> I assume "kernel bonding" is out of context, but this patch concerns DPDK
> bonding, failsafe or softnic. (I will refer them as virtual bonding
> device.)
> 
> To use specific queues of the virtual bonding device may interfere with the
> logic of these devices, like bonding modes or RSS of the underlying devices. I
> can see feature focuses on a very specific use case, but not sure if all possible
> side effects taken into consideration.
> 
> 
> And although the feature is only relevant to virtual bonding device, core
> ethdev structures are updated for this. Most use cases won't need these, so is
> there a way to reduce the scope of the changes to virtual bonding devices?
> 
> 
> There are a few very core ethdev APIs, like:
> rte_eth_dev_configure()
> rte_eth_tx_queue_setup()
> rte_eth_rx_queue_setup()
> rte_eth_dev_start()
> rte_eth_dev_info_get()
> 
> Almost every user of ethdev uses these APIs, since these are so fundamental I
> am for being a little more conservative on these APIs.
> 
> Every eccentric feature targets these APIs first because they are
> common and extending them gives an easy solution, but in long run making
> these APIs more complex, harder to maintain and harder for PMDs to support
> them correctly. So I am for not updating them unless it is a generic use case.
> 
> 
> Also as we talked about PMDs supporting them, I assume your coming PMD
> patch will be implementing 'tx_phy_affinity' config option only for mlx drivers.
> What will happen for other NICs? Will they silently ignore the config option
> from user? So this is a problem for DPDK application portability.
> 
> 
> 
> As far as I understand target is application controlling which sub-device is used
> under the virtual bonding device, can you please give more information why
> this is required, perhaps it can help to provide a better/different solution.
> Like adding the ability to use both bonding device and sub-device for data path,
> this way application can use whichever it wants. (this is just first solution I
> come with, I am not suggesting as replacement solution, but if you can describe
> the problem more I am sure other people can come with better solutions.)
> 
> And isn't this against the application being transparent to whether the
> underneath device is a bonding device or an actual device?
> 
> 

OK, I will send a new version with separate functions in the ethdev layer
to support mapping a Tx queue to a physical port and getting the number of
physical ports. These functions will work through device ops callbacks;
other NICs that leave the callback NULL will report the operation as unsupported.
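
For illustration, a rough sketch of that callback pattern (the function and
ops names below are hypothetical placeholders, not the final API; it assumes
the code lives inside lib/ethdev where rte_eth_devices and the dev_ops
structure are visible via the driver-internal headers):

    int
    rte_eth_dev_tx_queue_map_phy_port(uint16_t port_id, uint16_t queue_id,
                                      uint8_t phy_port)
    {
            struct rte_eth_dev *dev;

            if (!rte_eth_dev_is_valid_port(port_id))
                    return -ENODEV;
            dev = &rte_eth_devices[port_id];

            /* A PMD that leaves the callback NULL reports it as unsupported. */
            if (dev->dev_ops->tx_queue_map_phy_port == NULL)
                    return -ENOTSUP;

            return dev->dev_ops->tx_queue_map_phy_port(dev, queue_id, phy_port);
    }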

> > This patch maps a DPDK Tx queue with a physical port, by adding
> > tx_phy_affinity setting in Tx queue.
> > The affinity number is the physical port ID where packets will be
> > sent.
> > Value 0 means no affinity and traffic could be routed to any connected
> > physical ports, this is the default current behavior.
> >
> > The number of physical ports is reported with rte_eth_dev_info_get().
> >
> > The new tx_phy_affinity field is added into the padding hole of
> > rte_eth_txconf structure, the size of rte_eth_txconf keeps the same.
> > An ABI check rule needs to be added to avoid false warning.
> >
> > Add the testpmd command line:
> > testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> >
> > For example, there're two physical ports connected to a single DPDK
> > port (port id 0), and phy_affinity 1 stood for the first physical port
> > and phy_affinity 2 stood for the second physical port.
> > Use the below commands to config tx phy affinity for per Tx Queue:
> >         port config 0 txq 0 phy_affinity 1
> >         port config 0 txq 1 phy_affinity 1
> >         port config 0 txq 2 phy_affinity 2
> >         port config 0 txq 3 phy_affinity 2
> >
> > These commands config the Tx Queue index 0 and Tx Queue index 1 with
> > phy affinity 1, uses Tx Queue 0 or Tx Queue 1 send packets, these
> > packets will be sent from the first physical port, and similar with
> > the second physical port if sending packets with Tx Queue 2 or Tx
> > Queue 3.
> >
> > Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> > ---
> >  app/test-pmd/cmdline.c                      | 100 ++++++++++++++++++++
> >  app/test-pmd/config.c                       |   1 +
> >  devtools/libabigail.abignore                |   5 +
> >  doc/guides/rel_notes/release_23_03.rst      |   4 +
> >  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
> >  lib/ethdev/rte_ethdev.h                     |  10 ++
> >  6 files changed, 133 insertions(+)
> >
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> > cb8c174020..f771fcf8ac 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > @@ -776,6 +776,10 @@ static void cmd_help_long_parsed(void *parsed_result,
> >
> >  			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
> >  			"    Cleanup txq mbufs for a specific Tx queue\n\n"
> > +
> > +			"port config (port_id) txq (queue_id) phy_affinity (value)\n"
> > +			"    Set the physical affinity value "
> > +			"on a specific Tx queue\n\n"
> >  		);
> >  	}
> >
> > @@ -12633,6 +12637,101 @@ static cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
> >  	}
> >  };
> >
> > +/* *** configure port txq phy_affinity value *** */ struct
> > +cmd_config_tx_phy_affinity {
> > +	cmdline_fixed_string_t port;
> > +	cmdline_fixed_string_t config;
> > +	portid_t portid;
> > +	cmdline_fixed_string_t txq;
> > +	uint16_t qid;
> > +	cmdline_fixed_string_t phy_affinity;
> > +	uint8_t value;
> > +};
> > +
> > +static void
> > +cmd_config_tx_phy_affinity_parsed(void *parsed_result,
> > +				  __rte_unused struct cmdline *cl,
> > +				  __rte_unused void *data)
> > +{
> > +	struct cmd_config_tx_phy_affinity *res = parsed_result;
> > +	struct rte_eth_dev_info dev_info;
> > +	struct rte_port *port;
> > +	int ret;
> > +
> > +	if (port_id_is_invalid(res->portid, ENABLED_WARN))
> > +		return;
> > +
> > +	if (res->portid == (portid_t)RTE_PORT_ALL) {
> > +		printf("Invalid port id\n");
> > +		return;
> > +	}
> > +
> > +	port = &ports[res->portid];
> > +
> > +	if (strcmp(res->txq, "txq")) {
> > +		printf("Unknown parameter\n");
> > +		return;
> > +	}
> > +	if (tx_queue_id_is_invalid(res->qid))
> > +		return;
> > +
> > +	ret = eth_dev_info_get_print_err(res->portid, &dev_info);
> > +	if (ret != 0)
> > +		return;
> > +
> > +	if (dev_info.nb_phy_ports == 0) {
> > +		printf("Number of physical ports is 0 which is invalid for PHY Affinity\n");
> > +		return;
> > +	}
> > +	printf("The number of physical ports is %u\n", dev_info.nb_phy_ports);
> > +	if (dev_info.nb_phy_ports < res->value) {
> > +		printf("The PHY affinity value %u is Invalid, exceeds the "
> > +		       "number of physical ports\n", res->value);
> > +		return;
> > +	}
> > +	port->txq[res->qid].conf.tx_phy_affinity = res->value;
> > +
> > +	cmd_reconfig_device_queue(res->portid, 0, 1);
> > +}
> > +
> > +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_port =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
> > +				 port, "port");
> > +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_config =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
> > +				 config, "config");
> > +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_portid =
> > +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
> > +				 portid, RTE_UINT16);
> > +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_txq =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
> > +				 txq, "txq");
> > +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_qid =
> > +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
> > +			      qid, RTE_UINT16);
> > +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_hwport =
> > +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
> > +				 phy_affinity, "phy_affinity");
> > +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_value =
> > +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
> > +			      value, RTE_UINT8);
> > +
> > +static cmdline_parse_inst_t cmd_config_tx_phy_affinity = {
> > +	.f = cmd_config_tx_phy_affinity_parsed,
> > +	.data = (void *)0,
> > +	.help_str = "port config <port_id> txq <queue_id> phy_affinity <value>",
> > +	.tokens = {
> > +		(void *)&cmd_config_tx_phy_affinity_port,
> > +		(void *)&cmd_config_tx_phy_affinity_config,
> > +		(void *)&cmd_config_tx_phy_affinity_portid,
> > +		(void *)&cmd_config_tx_phy_affinity_txq,
> > +		(void *)&cmd_config_tx_phy_affinity_qid,
> > +		(void *)&cmd_config_tx_phy_affinity_hwport,
> > +		(void *)&cmd_config_tx_phy_affinity_value,
> > +		NULL,
> > +	},
> > +};
> > +
> >  /* ******************************************************************************** */
> >
> >  /* list of instructions */
> > @@ -12866,6 +12965,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
> >  	(cmdline_parse_inst_t *)&cmd_show_port_cman_capa,
> >  	(cmdline_parse_inst_t *)&cmd_show_port_cman_config,
> >  	(cmdline_parse_inst_t *)&cmd_set_port_cman_config,
> > +	(cmdline_parse_inst_t *)&cmd_config_tx_phy_affinity,
> >  	NULL,
> >  };
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index
> > acccb6b035..b83fb17cfa 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -936,6 +936,7 @@ port_infos_display(portid_t port_id)
> >  		printf("unknown\n");
> >  		break;
> >  	}
> > +	printf("Current number of physical ports: %u\n", dev_info.nb_phy_ports);
> >  }
> >
> >  void
> > diff --git a/devtools/libabigail.abignore
> > b/devtools/libabigail.abignore index 7a93de3ba1..ac7d3fb2da 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -34,3 +34,8 @@
> >  ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> >  ; Temporary exceptions till next major ABI version ;
> > ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> > +
> > +; Ignore fields inserted in padding hole of rte_eth_txconf
> > +[suppress_type]
> > +        name = rte_eth_txconf
> > +        has_data_member_inserted_between = {offset_of(tx_deferred_start), offset_of(offloads)}
> > diff --git a/doc/guides/rel_notes/release_23_03.rst
> > b/doc/guides/rel_notes/release_23_03.rst
> > index 73f5d94e14..e99bd2dcb6 100644
> > --- a/doc/guides/rel_notes/release_23_03.rst
> > +++ b/doc/guides/rel_notes/release_23_03.rst
> > @@ -55,6 +55,10 @@ New Features
> >       Also, make sure to start the actual text at the margin.
> >       =======================================================
> >
> > +* **Added affinity for multiple physical ports connected to a single DPDK port.**
> > +
> > +  * Added Tx affinity in queue setup to map a physical port.
> > +
> >  * **Updated AMD axgbe driver.**
> >
> >    * Added multi-process support.
> > diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > index 79a1fa9cb7..5c716f7679 100644
> > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > @@ -1605,6 +1605,19 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
> >
> >  This command should be run when the port is stopped, or else it will fail.
> >
> > +config per queue Tx physical affinity
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Configure a per queue physical affinity value only on a specific Tx queue::
> > +
> > +   testpmd> port (port_id) txq (queue_id) phy_affinity (value)
> > +
> > +* ``phy_affinity``: physical port to use for sending,
> > +                    when multiple physical ports are connected to
> > +                    a single DPDK port.
> > +
> > +This command should be run when the port is stopped, otherwise it fails.
> > +
> >  Config VXLAN Encap outer layers
> >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > c129ca1eaf..2fd971b7b5 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1138,6 +1138,14 @@ struct rte_eth_txconf {
> >  				      less free descriptors than this value. */
> >
> >  	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > +	/**
> > +	 * Affinity with one of the multiple physical ports connected to the DPDK port.
> > +	 * Value 0 means no affinity and traffic could be routed to any connected
> > +	 * physical port.
> > +	 * The first physical port is number 1 and so on.
> > +	 * Number of physical ports is reported by nb_phy_ports in rte_eth_dev_info.
> > +	 */
> > +	uint8_t tx_phy_affinity;
> >  	/**
> >  	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
> >  	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
> > @@ -1744,6 +1752,8 @@ struct rte_eth_dev_info {
> >  	/** Device redirection table size, the total number of entries. */
> >  	uint16_t reta_size;
> >  	uint8_t hash_key_size; /**< Hash key size in bytes */
> > +	/** Number of physical ports connected with DPDK port. */
> > +	uint8_t nb_phy_ports;
> >  	/** Bit mask of RSS offloads, the bit offset also means flow type */
> >  	uint64_t flow_type_rss_offloads;
> >  	struct rte_eth_rxconf default_rxconf; /**< Default Rx configuration */


^ permalink raw reply	[relevance 0%]

* [PATCH v5 21/22] hash: move rte_hash_set_alg out header
  2023-02-14  2:18  3% ` [PATCH v5 00/22] Replace us of static logtypes Stephen Hemminger
@ 2023-02-14  2:19  3%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-02-14  2:19 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin

The code for setting the CRC hash algorithm is not at all performance
sensitive, and doing it inline has a couple of problems. First, it means
that if multiple files include the header, the initialization gets done
multiple times. It also makes it harder to fix the usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code, so both old and new code behave
the same.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build    |  1 +
 lib/hash/rte_hash_crc.c | 63 +++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h | 46 ++----------------------------
 lib/hash/version.map    |  1 +
 4 files changed, 67 insertions(+), 44 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..c59eebccb1eb
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		crc32_alg = CRC32_SSE42;
+	else
+		crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		crc32_alg = CRC32_ARM64;
+#endif
+
+	if (crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e4acd99a0c81 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..a1d81835399c 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* [PATCH v5 00/22] Replace us of static logtypes
    2023-02-13 19:55  3% ` [PATCH v4 00/19] Replace use of static logtypes Stephen Hemminger
@ 2023-02-14  2:18  3% ` Stephen Hemminger
  2023-02-14  2:19  3%   ` [PATCH v5 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-14 22:47  3% ` [PATCH v6 00/22] Replace use of static logtypes in libraries Stephen Hemminger
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-14  2:18 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPE's in DPDK
libraries. It starts with the easy one and goes on to the more complex ones.
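
The per-library conversion follows roughly the same pattern; a minimal
sketch using the mbuf library as an example (exact variable and macro
names vary from patch to patch):

    /* in the library's source (or a small private log header) */
    #include <rte_log.h>

    RTE_LOG_REGISTER_DEFAULT(mbuf_logtype, INFO);
    #define RTE_LOGTYPE_MBUF mbuf_logtype

    /* existing call sites such as
     *     RTE_LOG(ERR, MBUF, "cannot create mbuf pool\n");
     * keep compiling unchanged, but now resolve to the dynamically
     * registered logtype instead of the static one.
     */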

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.

v5 - fix use of LOGTYPE PORT and POWER in examples

v4 - use simpler/shorter method for setting local LOGTYPE
     split up steps of some of the changes

Stephen Hemminger (22):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  examples/power: replace use of RTE_LOGTYPE_POWER
  examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  examples/ipsecgw: replace RTE_LOGTYPE_PORT
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++----------
 examples/distributor/main.c       |  2 +-
 examples/ipsec-secgw/sa.c         |  6 +--
 examples/l3fwd-power/main.c       | 15 +++----
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  3 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/hash/meson.build              |  9 ++++-
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  3 ++
 lib/hash/rte_hash_crc.c           | 66 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 46 +--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 46 +++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 28 +++----------
 lib/hash/version.map              |  5 +++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  3 ++
 lib/mempool/rte_mempool_log.h     |  4 ++
 lib/mempool/rte_mempool_ops.c     |  1 +
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/power/rte_power_empty_poll.c  |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 73 files changed, 375 insertions(+), 169 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/mempool/rte_mempool_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH] eal: introduce atomics abstraction
  2023-02-13  5:04  0%                     ` Honnappa Nagarahalli
  2023-02-13 15:28  0%                       ` Ben Magistro
@ 2023-02-13 23:18  0%                       ` Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-02-13 23:18 UTC (permalink / raw)
  To: Honnappa Nagarahalli
  Cc: Morten Brørup, thomas, dev, bruce.richardson,
	david.marchand, jerinj, konstantin.ananyev, ferruh.yigit, nd,
	techboard

On Mon, Feb 13, 2023 at 05:04:49AM +0000, Honnappa Nagarahalli wrote:
> Hi Tyler,
> 	Few more comments inline. Let us continue to make progress, I will add this topic for Techboard discussion for 22nd Feb.
> 
> > -----Original Message-----
> > From: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > Sent: Friday, February 10, 2023 2:30 PM
> > To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > Cc: Morten Brørup <mb@smartsharesystems.com>; thomas@monjalon.net;
> > dev@dpdk.org; bruce.richardson@intel.com; david.marchand@redhat.com;
> > jerinj@marvell.com; konstantin.ananyev@huawei.com;
> > ferruh.yigit@amd.com; nd <nd@arm.com>; techboard@dpdk.org
> > Subject: Re: [PATCH] eal: introduce atomics abstraction
> > 
> > On Fri, Feb 10, 2023 at 05:30:00AM +0000, Honnappa Nagarahalli wrote:
> > > <snip>
> > >
> > > > On Thu, Feb 09, 2023 at 12:16:38AM +0000, Honnappa Nagarahalli wrote:
> > > > > <snip>
> > > > >
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > For environments where stdatomics are not supported,
> > > > > > > > > > > we could
> > > > > > > > have a
> > > > > > > > > > stdatomic.h in DPDK implementing the same APIs (we have
> > > > > > > > > > to support
> > > > > > > > only
> > > > > > > > > > _explicit APIs). This allows the code to use stdatomics
> > > > > > > > > > APIs and
> > > > > > > > when we move
> > > > > > > > > > to minimum supported standard C11, we just need to get
> > > > > > > > > > rid of the
> > > > > > > > file in DPDK
> > > > > > > > > > repo.
> > > > > > > > > >
> > > > > > > > > > my concern with this is that if we provide a stdatomic.h
> > > > > > > > > > or
> > > > > > > > introduce names
> > > > > > > > > > from stdatomic.h it's a violation of the C standard.
> > > > > > > > > >
> > > > > > > > > > references:
> > > > > > > > > >  * ISO/IEC 9899:2011 sections 7.1.2, 7.1.3.
> > > > > > > > > >  * GNU libc manual
> > > > > > > > > >
> > > > > > > > > > https://www.gnu.org/software/libc/manual/html_node/Reser
> > > > > > > > > > ved-
> > > > > > > > > > Names.html
> > > > > > > > > >
> > > > > > > > > > in effect the header, the names and in some instances
> > > > > > > > > > namespaces
> > > > > > > > introduced
> > > > > > > > > > are reserved by the implementation. there are several
> > > > > > > > > > reasons in
> > > > > > > > the GNU libc
> > > > > > > > > Wouldn't this apply only after the particular APIs were
> > introduced?
> > > > > > > > i.e. it should not apply if the compiler does not support stdatomics.
> > > > > > > >
> > > > > > > > yeah, i agree they're being a bit wishy washy in the
> > > > > > > > wording, but i'm not convinced glibc folks are documenting
> > > > > > > > this as permissive guidance against.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > manual that explain the justification for these
> > > > > > > > > > reservations and if
> > > > > > > > if we think
> > > > > > > > > > about ODR and ABI compatibility we can conceive of others.
> > > > > > > > > >
> > > > > > > > > > i'll also remark that the inter-mingling of names from
> > > > > > > > > > the POSIX
> > > > > > > > standard
> > > > > > > > > > implicitly exposed as a part of the EAL public API has
> > > > > > > > > > been
> > > > > > > > problematic for
> > > > > > > > > > portability.
> > > > > > > > > These should be exposed as EAL APIs only when compiled
> > > > > > > > > with a
> > > > > > > > compiler that does not support stdatomics.
> > > > > > > >
> > > > > > > > you don't necessarily compile dpdk, the application or its
> > > > > > > > other dynamically linked dependencies with the same compiler
> > > > > > > > at the same time.
> > > > > > > > i.e. basically the model of any dpdk-dev package on any
> > > > > > > > linux distribution.
> > > > > > > >
> > > > > > > > if dpdk is built without real stdatomic types but the
> > > > > > > > application has to interoperate with a different kit or
> > > > > > > > library that does they would be forced to dance around dpdk
> > > > > > > > with their own version of a shim to hide our faked up stdatomics.
> > > > > > > >
> > > > > > >
> > > > > > > So basically, if we want a binary DPDK distribution to be
> > > > > > > compatible with a
> > > > > > separate application build environment, they both have to
> > > > > > implement atomics the same way, i.e. agree on the ABI for atomics.
> > > > > > >
> > > > > > > Summing up, this leaves us with only two realistic options:
> > > > > > >
> > > > > > > 1. Go all in on C11 stdatomics, also requiring the application
> > > > > > > build
> > > > > > environment to support C11 stdatomics.
> > > > > > > 2. Provide our own DPDK atomics library.
> > > > > > >
> > > > > > > (As mentioned by Tyler, the third option - using C11
> > > > > > > stdatomics inside DPDK, and requiring a build environment
> > > > > > > without C11 stdatomics to implement a shim - is not
> > > > > > > realistic!)
> > > > > > >
> > > > > > > I strongly want atomics to be available for use across inline
> > > > > > > and compiled
> > > > > > code; i.e. it must be possible for both compiled DPDK functions
> > > > > > and inline functions to perform atomic transactions on the same
> > atomic variable.
> > > > > >
> > > > > > i consider it a mandatory requirement. i don't see practically
> > > > > > how we could withdraw existing use and even if we had clean way
> > > > > > i don't see why we would want to. so this item is defintely
> > > > > > settled if you were
> > > > concerned.
> > > > > I think I agree here.
> > > > >
> > > > > >
> > > > > > >
> > > > > > > So either we upgrade the DPDK build requirements to support
> > > > > > > C11 (including
> > > > > > the optional stdatomics), or we provide our own DPDK atomics.
> > > > > >
> > > > > > i think the issue of requiring a toolchain conformant to a
> > > > > > specific standard is a separate matter because any adoption of
> > > > > > C11 standard atomics is a potential abi break from the current use of
> > intrinsics.
> > > > > I am not sure why you are calling it as ABI break. Referring to
> > > > > [1], I just see
> > > > wrappers around intrinsics (though [2] does not use the intrinsics).
> > > > >
> > > > > [1]
> > > > > https://github.com/gcc-mirror/gcc/blob/master/gcc/ginclude/stdatom
> > > > > ic.h
> > > > > [2]
> > > > > https://github.com/llvm-mirror/clang/blob/master/lib/Headers/stdat
> > > > > omic
> > > > > .h
> > > >
> > > > it's a potential abi break because atomic types are not the same
> > > > types as their corresponding integer types etc.. (or at least are
> > > > not guaranteed to be by all implementations of c as an abstract language).
> > > >
> > > >     ISO/IEC 9899:2011
> > > >
> > > >     6.2.5 (27)
> > > >     Further, there is the _Atomic qualifier. The presence of the _Atomic
> > > >     qualifier designates an atomic type. The size, representation, and
> > alignment
> > > >     of an atomic type need not be the same as those of the corresponding
> > > >     unqualified type.
> > > >
> > > >     7.17.6 (3)
> > > >     NOTE The representation of atomic integer types need not have the
> > same size
> > > >     as their corresponding regular types. They should have the same
> > > > size whenever
> > > >     possible, as it eases effort required to port existing code.
> > > >
> > > > i use the term `potential abi break' with intent because for me to
> > > > assert in absolute terms i would have to evaluate the implementation
> > > > of every current and potential future compilers atomic vs non-atomic
> > > > types. this as i'm sure you understand is not practical, it would
> > > > also defeat the purpose of moving to a standard. therefore i rely on
> > > > the specification prescribed by the standard not the detail of a specific
> > implementation.
> > > Can we say that the platforms 'supported' by DPDK today do not have this
> > problem? Any future platforms that will come to DPDK have to evaluate this.
> > 
> > sadly i don't think we can. i believe in an earlier post i linked a bug filed on
> > gcc that shows that clang / gcc were producing different layout than the
> > equivalent non-atomic type.
> I looked at that bug again, it is to do with structure.

just to be clear, you're saying you aren't concerned because we don't
have in our public api struct objects to which we apply atomic
operations?

if that guarantee is absolute and stays true in our public api then i am
satisfied and we can drop the issue.

hypothetically if we make this assumption are you proposing that all
platform/toolchain combinations that support std=c11 and optional
stdatomic should adopt them as default on?

there are other implications to doing this, let's dig into the details
at the next technical board meeting.
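
to illustrate the struct concern, a small sketch of the standard's point
(not a claim about any particular compiler's behaviour):

    #include <stdint.h>
    #include <stdatomic.h>

    struct foo {
            uint8_t a;
            uint64_t b;
    };

    struct foo plain;
    _Atomic struct foo shared;

    /* per C11 6.2.5 (27) the atomic-qualified type need not have the same
     * size, representation or alignment as the unqualified type, so
     * sizeof(shared) == sizeof(plain) is not guaranteed by the standard.
     */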

> 
> > 
> > >
> > > >
> > > >
> > > > > > the abstraction (whatever namespace it resides) allows the
> > > > > > existing toolchain/platform combinations to maintain
> > > > > > compatibility by defaulting to current non-standard intrinsics.
> > > > > How about using the intrinsics (__atomic_xxx) name space for
> > abstraction?
> > > > This covers the GCC and Clang compilers.
> > 
> > i haven't investigated fully but there are usages of these intrinsics that
> > indicate there may be undesirable difference between clang and gcc versions.
> > the hint is there seems to be conditionally compiled code under __clang__
> > when using some __atomic's.
> I sent an RFC to address this [1]. I think the size specific intrinsics are not necessary.
> 
> [1] http://patches.dpdk.org/project/dpdk/patch/20230211015622.408487-1-honnappa.nagarahalli@arm.com/

yep, looks good to me. i acked the change.

thank you.

> 
> > 
> > for the purpose of this discussion clang just tries to look like gcc so i don't
> > regard them as being different compilers for the purpose of this discussion.
> > 
> > > >
> > > > the namespace starting with `__` is also reserved for the implementation.
> > > > this is why compilers gcc/clang/msvc place name their intrinsic and
> > > > builtin functions starting with __ to explicitly avoid collision
> > > > with the application namespace.
> > 
> > > Agreed. But, here we are considering '__atomic_' specifically (i.e.
> > > not just '__')
> > 
> > i don't understand the confusion __atomic is within the __ namespace that is
> > reserved.
> What I mean is, we are not formulating a policy/rule to allow for any name space that starts with '__'.

understood, but we appear to be trying to formulate a policy allowing a name
within that space, which is reserved by the standard for the implementation
and already claimed by gcc.

anyway, let's discuss further at the meeting.

> 
> > 
> > let me ask this another way, what benefit do you see to trying to overlap with
> > the standard namespace? the only benefit i can see is that at some point in
> > the future it avoids having to perform a mechanical change to eventually
> > retire the abstraction once all platform/toolchains support standard atomics.
> > i.e. basically s/rte_atomic/atomic/g
> > 
> > is there another benefit i'm missing?
> The abstraction you have proposed solves the problem for the long term. The proposed abstraction stops us from thinking about moving to stdatomics.

i think this is where you've got me a bit confused. i'd like to
understand how it stops us thinking about moving to stdatomics.

> IMO, the problem is short term. Using the __atomic_ name space does not have any practical issues with the platforms DPDK supports (unless msvc has a problem with this, more questions below).

oh, sorry for not answering this previously. msvc (and as it happens
clang) both use __c11_atomic_xxx as a namespace. i'm only aware of gcc
and potentially compilers that try to look like gcc using __atomic_xxxx.

so if you're asking if that selection would interfere with msvc, it
wouldn't. i'm only concerned with mingling in a namespace that gcc has
claimed.
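
for illustration only, a rough sketch (not the actual proposal; the build
macro name here is made up) of how an rte_-prefixed macro could sit over
either backend:

    #ifdef RTE_USE_C11_STDATOMIC            /* hypothetical build option */
    #include <stdatomic.h>
    #define rte_atomic_load_explicit(ptr, mo) atomic_load_explicit(ptr, mo)
    #else                                   /* gcc/clang __atomic builtins */
    #define rte_atomic_load_explicit(ptr, mo) __atomic_load_n(ptr, mo)
    #endif

    /* the memory-order constants (memory_order_* vs __ATOMIC_*) and the
     * atomic type specifiers would need the same kind of mapping.
     */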

> 
> > 
> > >
> > > >
> > > >     ISO/IEC 9899:2011
> > > >
> > > >     7.1.3 (1)
> > > >     All identifiers that begin with an underscore and either an uppercase
> > > >     letter or another underscore are always reserved for any use.
> > > >
> > > >     ...
> > > >
> > > > > If there is another platform that uses the same name space for
> > > > > something
> > > > else, I think DPDK should not be supporting that platform.
> > > >
> > > > that's effectively a statement excluding windows platform and all
> > > > non-gcc compilers from ever supporting dpdk.
> > > Apologies, I did not understand your comment on windows platform. Do
> > you mean to say a compiler for windows platform uses '__atomic_xxx' name
> > space to provide some other functionality (and hence it would get excluded)?
> > 
> > i mean dpdk can never fully be supported without msvc except for statically
> > linked builds which are niche and limit it too severely for many consumers to
> > practically use dpdk. there are also many application developers who would
> > like to integrate dpdk but can't and telling them their only choice is to re-port
> > their entire application to clang isn't feasible.
> > 
> > i can see no technical reason why we should be excluding a major compiler in
> > broad use if it is capable of building dpdk. msvc arguably has some of the
> > most sophisticated security features in the industry and the use of those
> > features is mandated by many of the customers who might deploy dpdk
> > applications on windows.
> I did not mean DPDK should not support msvc (may be my sentence below was misunderstood).
> Does msvc provide '__atomic_xxx' intrinsics?

msvc provides stdatomic (behind stdatomic there are intrinsics)

> 
> > 
> > > Clang supports these intrinsics. I am not sure about the merit of supporting
> > other non-gcc compilers. May be a topic Techboard discussion.
> > >
> > > >
> > > > > What problems do you see?
> > > >
> > > > i'm fairly certain at least one other compiler uses the __atomic
> > > > namespace but
> > > Do you mean __atomic namespace is used for some other purpose?
> > >
> > > > it would take me time to check, the most notable potential issue
> > > > that comes to mind is if such an intrinsic with the same name is
> > > > provided in a different implementation and has either regressive
> > > > code generation or different semantics it would be bad because it is
> > > > intrinsic you can't just hack around it with #undef __atomic to shim in a
> > semantically correct version.
> > > I do not think we should worry about regressive code generation problem. It
> > should be fixed by that compiler.
> > > Different semantics is something we need to worry about. It would be good
> > to find out more about a compiler that does this.
> > 
> > again, this is about portability it's about potential not that we can find an
> > example.
> > 
> > >
> > > >
> > > > how about this, is there another possible namespace you might
> > > > suggest that conforms or doesn't conflict with the the rules defined
> > > > in ISO/IEC 9899:2011
> > > > 7.1.3 i think if there were that would satisfy all of my concerns
> > > > related to namespaces.
> > > >
> > > > keep in mind the point of moving to a standard is to achieve
> > > > portability so if we do things that will regress us back to being
> > > > dependent on an implementation we haven't succeeded. that's all i'm
> > trying to guarantee here.
> > > Agree. We are trying to solve a problem that is temporary. I am trying to
> > keep the problem scope narrow which might help us push to adopt the
> > standard sooner.
> > 
> > i do wish we could just target the standard but unless we are willing to draw a
> > line and say no more non std=c11 and also we potentially break the abi we
> > are talking years. i don't think it is reasonable to block progress for years, so
> > i'm offering a transitional path. it's an evolution over time that we have to
> > manage.
> Apologies if I am sounding like I am blocking progress. Rest assured, we will find a way. It is just about which solution we are going to pick.

no problems, i really appreciate any help.

> Also, is there any information on how long before we move to C11?

we need to clear all long term compatibility promises for gcc/linux
platforms that don't support -std=c11 and implement stdatomic option. the
last discussion was that it was years i believe.

Bruce has a patch series and another thread going about moving to
-std=c99, which is good but of course doesn't get us to -std=c11.

> 
> > 
> > >
> > > >
> > > > i feel like we are really close on this discussion, if we can just
> > > > iron this issue out we can probably get going on the actual changes.
> > > >
> > > > thanks for the consideration.
> > > >
> > > > >
> > > > > >
> > > > > > once in place it provides an opportunity to introduce new
> > > > > > toolchain/platform combinations and enables an opt-in capability
> > > > > > to use stdatomics on existing toolchain/platform combinations
> > > > > > subject to community discussion on how/if/when.
> > > > > >
> > > > > > it would be good to get more participants into the discussion so
> > > > > > i'll cc techboard for some attention. i feel like the only area
> > > > > > that isn't decided is to do or not do this in rte_ namespace.
> > > > > >
> > > > > > i'm strongly in favor of rte_ namespace after discussion, mainly
> > > > > > due to to disadvantages of trying to overlap with the standard
> > > > > > namespace while not providing a compatible api/abi and because
> > > > > > it provides clear disambiguation of that difference in semantics
> > > > > > and compatibility with
> > > > the standard api.
> > > > > >
> > > > > > so far i've noted the following
> > > > > >
> > > > > > * we will not provide the non-explicit apis.
> > > > > +1
> > > > >
> > > > > > * we will make no attempt to support operate on struct/union atomics
> > > > > >   with our apis.
> > > > > +1
> > > > >
> > > > > > * we will mirror the standard api potentially in the rte_ namespace to
> > > > > >   - reference the standard api documentation.
> > > > > >   - assume compatible semantics (sans exceptions from first 2 points).
> > > > > >
> > > > > > my vote is to remove 'potentially' from the last point above for
> > > > > > reasons previously discussed in postings to the mail thread.
> > > > > >
> > > > > > thanks all for the discussion, i'll send up a patch removing
> > > > > > non-explicit apis for viewing.
> > > > > >
> > > > > > ty

^ permalink raw reply	[relevance 0%]

* [PATCH v4 18/19] hash: move rte_hash_set_alg out header
  2023-02-13 19:55  3% ` [PATCH v4 00/19] Replace use of static logtypes Stephen Hemminger
@ 2023-02-13 19:55  3%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-02-13 19:55 UTC (permalink / raw)
  To: dev
  Cc: Stephen Hemminger, Yipeng Wang, Sameh Gobriel, Bruce Richardson,
	Vladimir Medvedkin

The code for setting the CRC hash algorithm is not at all performance
sensitive, and doing it inline has a couple of problems. First, it means
that if multiple files include the header, the initialization gets done
multiple times. It also makes it harder to fix the usage of RTE_LOG().

Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code, so both old and new code behave
the same.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/hash/meson.build    |  1 +
 lib/hash/rte_hash_crc.c | 63 +++++++++++++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h | 46 ++----------------------------
 lib/hash/version.map    |  1 +
 4 files changed, 67 insertions(+), 44 deletions(-)
 create mode 100644 lib/hash/rte_hash_crc.c

diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
 
 sources = files(
     'rte_cuckoo_hash.c',
+    'rte_hash_crc.c',
     'rte_fbk_hash.c',
     'rte_thash.c',
     'rte_thash_gfni.c'
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..c59eebccb1eb
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ *   An OR of following flags:
+ *   - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ *   - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ *   - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+	crc32_alg = CRC32_SW;
+
+	if (alg == CRC32_SW)
+		return;
+
+#if defined RTE_ARCH_X86
+	if (!(alg & CRC32_SSE42_x64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+		crc32_alg = CRC32_SSE42;
+	else
+		crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+	if (!(alg & CRC32_ARM64))
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+		crc32_alg = CRC32_ARM64;
+#endif
+
+	if (crc32_alg == CRC32_SW)
+		RTE_LOG(WARNING, HASH,
+			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+	rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+	rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+	rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 0249ad16c5b6..e4acd99a0c81 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
 
 #include "rte_crc_sw.h"
 
@@ -53,48 +51,8 @@ static uint8_t crc32_alg = CRC32_SW;
  *   - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
  *
  */
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
-	crc32_alg = CRC32_SW;
-
-	if (alg == CRC32_SW)
-		return;
-
-#if defined RTE_ARCH_X86
-	if (!(alg & CRC32_SSE42_x64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
-	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
-		crc32_alg = CRC32_SSE42;
-	else
-		crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
-	if (!(alg & CRC32_ARM64))
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
-	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
-		crc32_alg = CRC32_ARM64;
-#endif
-
-	if (crc32_alg == CRC32_SW)
-		RTE_LOG(WARNING, HASH,
-			"Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
-	rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
-	rte_hash_crc_set_alg(CRC32_ARM64);
-#else
-	rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
 
 #ifdef __DOXYGEN__
 
diff --git a/lib/hash/version.map b/lib/hash/version.map
index f03b047b2eec..a1d81835399c 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_23 {
 	rte_hash_add_key_with_hash;
 	rte_hash_add_key_with_hash_data;
 	rte_hash_count;
+	rte_hash_crc_set_alg;
 	rte_hash_create;
 	rte_hash_del_key;
 	rte_hash_del_key_with_hash;
-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* [PATCH v4 00/19] Replace use of static logtypes
  @ 2023-02-13 19:55  3% ` Stephen Hemminger
  2023-02-13 19:55  3%   ` [PATCH v4 18/19] hash: move rte_hash_set_alg out header Stephen Hemminger
  2023-02-14  2:18  3% ` [PATCH v5 00/22] Replace us of static logtypes Stephen Hemminger
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-02-13 19:55 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This patchset removes the main uses of static LOGTYPE's in DPDK
libraries. It starts with the easy one and goes on to the more complex ones.

Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.

v4 - use simpler/shorter method for setting local LOGTYPE
     split up steps of some of the changes

Stephen Hemminger (19):
  gso: don't log message on non TCP/UDP
  eal: drop no longer used GSO logtype
  log: drop unused RTE_LOGTYPE_TIMER
  efd: replace RTE_LOGTYPE_EFD with dynamic type
  mbuf: replace RTE_LOGTYPE_MBUF with dynamic type
  acl: replace LOGTYPE_ACL with dynamic type
  power: replace RTE_LOGTYPE_POWER with dynamic type
  ring: replace RTE_LOGTYPE_RING with dynamic type
  mempool: replace RTE_LOGTYPE_MEMPOOL with dynamic type
  lpm: replace RTE_LOGTYPE_LPM with dynamic types
  kni: replace RTE_LOGTYPE_KNI with dynamic type
  sched: replace RTE_LOGTYPE_SCHED with dynamic type
  port: replace RTE_LOGTYPE_PORT with dynamic type
  table: convert RTE_LOGTYPE_TABLE to dynamic logtype
  app/test: remove use of RTE_LOGTYPE_PIPELINE
  pipeline: replace RTE_LOGTYPE_PIPELINE with dynamic type
  hash: move rte_thash_gfni stubs out of header file
  hash: move rte_hash_set_alg out header
  hash: convert RTE_LOGTYPE_HASH to dynamic type

 app/test/test_acl.c               |  3 +-
 app/test/test_table_acl.c         | 50 +++++++++++------------
 app/test/test_table_pipeline.c    | 40 +++++++++----------
 lib/acl/acl_bld.c                 |  1 +
 lib/acl/acl_gen.c                 |  1 +
 lib/acl/acl_log.h                 |  4 ++
 lib/acl/rte_acl.c                 |  4 ++
 lib/acl/tb_mem.c                  |  3 +-
 lib/eal/common/eal_common_log.c   | 17 --------
 lib/eal/include/rte_log.h         | 34 ++++++++--------
 lib/efd/rte_efd.c                 |  3 ++
 lib/fib/fib_log.h                 |  4 ++
 lib/fib/rte_fib.c                 |  3 ++
 lib/fib/rte_fib6.c                |  2 +
 lib/gso/rte_gso.c                 |  5 +--
 lib/hash/meson.build              |  9 ++++-
 lib/hash/rte_cuckoo_hash.c        |  5 +++
 lib/hash/rte_fbk_hash.c           |  3 ++
 lib/hash/rte_hash_crc.c           | 66 +++++++++++++++++++++++++++++++
 lib/hash/rte_hash_crc.h           | 46 +--------------------
 lib/hash/rte_thash.c              |  3 ++
 lib/hash/rte_thash_gfni.c         | 46 +++++++++++++++++++++
 lib/hash/rte_thash_gfni.h         | 28 +++----------
 lib/hash/version.map              |  5 +++
 lib/kni/rte_kni.c                 |  3 ++
 lib/lpm/lpm_log.h                 |  4 ++
 lib/lpm/rte_lpm.c                 |  3 ++
 lib/lpm/rte_lpm6.c                |  1 +
 lib/mbuf/mbuf_log.h               |  4 ++
 lib/mbuf/rte_mbuf.c               |  4 ++
 lib/mbuf/rte_mbuf_dyn.c           |  2 +
 lib/mbuf/rte_mbuf_pool_ops.c      |  2 +
 lib/mempool/rte_mempool.c         |  3 ++
 lib/mempool/rte_mempool_log.h     |  4 ++
 lib/mempool/rte_mempool_ops.c     |  1 +
 lib/pipeline/rte_pipeline.c       |  3 ++
 lib/port/rte_port_ethdev.c        |  3 ++
 lib/port/rte_port_eventdev.c      |  4 ++
 lib/port/rte_port_fd.c            |  3 ++
 lib/port/rte_port_frag.c          |  3 ++
 lib/port/rte_port_kni.c           |  3 ++
 lib/port/rte_port_ras.c           |  3 ++
 lib/port/rte_port_ring.c          |  3 ++
 lib/port/rte_port_sched.c         |  3 ++
 lib/port/rte_port_source_sink.c   |  3 ++
 lib/port/rte_port_sym_crypto.c    |  3 ++
 lib/power/guest_channel.c         |  3 +-
 lib/power/power_common.c          |  2 +
 lib/power/power_common.h          |  3 +-
 lib/power/power_kvm_vm.c          |  1 +
 lib/power/rte_power.c             |  1 +
 lib/power/rte_power_empty_poll.c  |  1 +
 lib/rib/rib_log.h                 |  4 ++
 lib/rib/rte_rib.c                 |  3 ++
 lib/rib/rte_rib6.c                |  3 ++
 lib/ring/rte_ring.c               |  3 ++
 lib/sched/rte_pie.c               |  1 +
 lib/sched/rte_sched.c             |  5 +++
 lib/sched/rte_sched_log.h         |  4 ++
 lib/table/rte_table_acl.c         |  3 ++
 lib/table/rte_table_array.c       |  3 ++
 lib/table/rte_table_hash_cuckoo.c |  3 ++
 lib/table/rte_table_hash_ext.c    |  3 ++
 lib/table/rte_table_hash_key16.c  |  3 ++
 lib/table/rte_table_hash_key32.c  |  5 ++-
 lib/table/rte_table_hash_key8.c   |  5 ++-
 lib/table/rte_table_hash_lru.c    |  3 ++
 lib/table/rte_table_lpm.c         |  3 ++
 lib/table/rte_table_lpm_ipv6.c    |  3 ++
 lib/table/rte_table_stub.c        |  3 ++
 70 files changed, 363 insertions(+), 158 deletions(-)
 create mode 100644 lib/acl/acl_log.h
 create mode 100644 lib/fib/fib_log.h
 create mode 100644 lib/hash/rte_hash_crc.c
 create mode 100644 lib/hash/rte_thash_gfni.c
 create mode 100644 lib/lpm/lpm_log.h
 create mode 100644 lib/mbuf/mbuf_log.h
 create mode 100644 lib/mempool/rte_mempool_log.h
 create mode 100644 lib/rib/rib_log.h
 create mode 100644 lib/sched/rte_sched_log.h

-- 
2.39.1


^ permalink raw reply	[relevance 3%]

* Re: [PATCH] eal: introduce atomics abstraction
  2023-02-13  5:04  0%                     ` Honnappa Nagarahalli
@ 2023-02-13 15:28  0%                       ` Ben Magistro
  2023-02-13 23:18  0%                       ` Tyler Retzlaff
  1 sibling, 0 replies; 200+ results
From: Ben Magistro @ 2023-02-13 15:28 UTC (permalink / raw)
  To: Honnappa Nagarahalli
  Cc: Tyler Retzlaff, Morten Brørup, thomas, dev,
	bruce.richardson, david.marchand, jerinj, konstantin.ananyev,
	ferruh.yigit, nd, techboard


There is a thread discussing a change to the standard [1], but I have not
seen anything explicit yet about moving to C11. I am personally in favor
of making the jump to C11 now as part of the 23.x branch and have provided
my thoughts in the linked thread (what other projects using DPDK have as
minimum compiler requirements, CentOS 7 EOL dates).

Is the long-term plan to backport this change set to the existing LTS
release, or is this meant to be something introduced for use in 23.x and
going forward? I think I was (probably naively) assuming this would be a
new feature in 23.x and onward only.

[1] http://mails.dpdk.org/archives/dev/2023-February/262188.html

On Mon, Feb 13, 2023 at 12:05 AM Honnappa Nagarahalli <
Honnappa.Nagarahalli@arm.com> wrote:

> Hi Tyler,
>         Few more comments inline. Let us continue to make progress, I will
> add this topic for Techboard discussion for 22nd Feb.
>
> > -----Original Message-----
> > From: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > Sent: Friday, February 10, 2023 2:30 PM
> > To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> > Cc: Morten Brørup <mb@smartsharesystems.com>; thomas@monjalon.net;
> > dev@dpdk.org; bruce.richardson@intel.com; david.marchand@redhat.com;
> > jerinj@marvell.com; konstantin.ananyev@huawei.com;
> > ferruh.yigit@amd.com; nd <nd@arm.com>; techboard@dpdk.org
> > Subject: Re: [PATCH] eal: introduce atomics abstraction
> >
> > On Fri, Feb 10, 2023 at 05:30:00AM +0000, Honnappa Nagarahalli wrote:
> > > <snip>
> > >
> > > > On Thu, Feb 09, 2023 at 12:16:38AM +0000, Honnappa Nagarahalli wrote:
> > > > > <snip>
> > > > >
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > For environments where stdatomics are not supported,
> > > > > > > > > > > we could
> > > > > > > > have a
> > > > > > > > > > stdatomic.h in DPDK implementing the same APIs (we have
> > > > > > > > > > to support
> > > > > > > > only
> > > > > > > > > > _explicit APIs). This allows the code to use stdatomics
> > > > > > > > > > APIs and
> > > > > > > > when we move
> > > > > > > > > > to minimum supported standard C11, we just need to get
> > > > > > > > > > rid of the
> > > > > > > > file in DPDK
> > > > > > > > > > repo.
> > > > > > > > > >
> > > > > > > > > > my concern with this is that if we provide a stdatomic.h
> > > > > > > > > > or
> > > > > > > > introduce names
> > > > > > > > > > from stdatomic.h it's a violation of the C standard.
> > > > > > > > > >
> > > > > > > > > > references:
> > > > > > > > > >  * ISO/IEC 9899:2011 sections 7.1.2, 7.1.3.
> > > > > > > > > >  * GNU libc manual
> > > > > > > > > >
> > > > > > > > > > https://www.gnu.org/software/libc/manual/html_node/Reser
> > > > > > > > > > ved-
> > > > > > > > > > Names.html
> > > > > > > > > >
> > > > > > > > > > in effect the header, the names and in some instances
> > > > > > > > > > namespaces
> > > > > > > > introduced
> > > > > > > > > > are reserved by the implementation. there are several
> > > > > > > > > > reasons in
> > > > > > > > the GNU libc
> > > > > > > > > Wouldn't this apply only after the particular APIs were
> > introduced?
> > > > > > > > i.e. it should not apply if the compiler does not support
> stdatomics.
> > > > > > > >
> > > > > > > > yeah, i agree they're being a bit wishy washy in the
> > > > > > > > wording, but i'm not convinced glibc folks are documenting
> > > > > > > > this as permissive guidance against.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > manual that explain the justification for these
> > > > > > > > > > reservations and if
> > > > > > > > if we think
> > > > > > > > > > about ODR and ABI compatibility we can conceive of
> others.
> > > > > > > > > >
> > > > > > > > > > i'll also remark that the inter-mingling of names from
> > > > > > > > > > the POSIX
> > > > > > > > standard
> > > > > > > > > > implicitly exposed as a part of the EAL public API has
> > > > > > > > > > been
> > > > > > > > problematic for
> > > > > > > > > > portability.
> > > > > > > > > These should be exposed as EAL APIs only when compiled
> > > > > > > > > with a
> > > > > > > > compiler that does not support stdatomics.
> > > > > > > >
> > > > > > > > you don't necessarily compile dpdk, the application or its
> > > > > > > > other dynamically linked dependencies with the same compiler
> > > > > > > > at the same time.
> > > > > > > > i.e. basically the model of any dpdk-dev package on any
> > > > > > > > linux distribution.
> > > > > > > >
> > > > > > > > if dpdk is built without real stdatomic types but the
> > > > > > > > application has to interoperate with a different kit or
> > > > > > > > library that does they would be forced to dance around dpdk
> > > > > > > > with their own version of a shim to hide our faked up
> stdatomics.
> > > > > > > >
> > > > > > >
> > > > > > > So basically, if we want a binary DPDK distribution to be
> > > > > > > compatible with a
> > > > > > separate application build environment, they both have to
> > > > > > implement atomics the same way, i.e. agree on the ABI for
> atomics.
> > > > > > >
> > > > > > > Summing up, this leaves us with only two realistic options:
> > > > > > >
> > > > > > > 1. Go all in on C11 stdatomics, also requiring the application
> > > > > > > build
> > > > > > environment to support C11 stdatomics.
> > > > > > > 2. Provide our own DPDK atomics library.
> > > > > > >
> > > > > > > (As mentioned by Tyler, the third option - using C11
> > > > > > > stdatomics inside DPDK, and requiring a build environment
> > > > > > > without C11 stdatomics to implement a shim - is not
> > > > > > > realistic!)
> > > > > > >
> > > > > > > I strongly want atomics to be available for use across inline
> > > > > > > and compiled
> > > > > > code; i.e. it must be possible for both compiled DPDK functions
> > > > > > and inline functions to perform atomic transactions on the same
> > atomic variable.
> > > > > >
> > > > > > i consider it a mandatory requirement. i don't see practically
> > > > > > how we could withdraw existing use and even if we had clean way
> > > > > > i don't see why we would want to. so this item is defintely
> > > > > > settled if you were
> > > > concerned.
> > > > > I think I agree here.
> > > > >
> > > > > >
> > > > > > >
> > > > > > > So either we upgrade the DPDK build requirements to support
> > > > > > > C11 (including
> > > > > > the optional stdatomics), or we provide our own DPDK atomics.
> > > > > >
> > > > > > i think the issue of requiring a toolchain conformant to a
> > > > > > specific standard is a separate matter because any adoption of
> > > > > > C11 standard atomics is a potential abi break from the current
> use of
> > intrinsics.
> > > > > I am not sure why you are calling it as ABI break. Referring to
> > > > > [1], I just see
> > > > wrappers around intrinsics (though [2] does not use the intrinsics).
> > > > >
> > > > > [1]
> > > > > https://github.com/gcc-mirror/gcc/blob/master/gcc/ginclude/stdatom
> > > > > ic.h
> > > > > [2]
> > > > > https://github.com/llvm-mirror/clang/blob/master/lib/Headers/stdat
> > > > > omic
> > > > > .h
> > > >
> > > > it's a potential abi break because atomic types are not the same
> > > > types as their corresponding integer types etc.. (or at least are
> > > > not guaranteed to be by all implementations of c as an abstract
> language).
> > > >
> > > >     ISO/IEC 9899:2011
> > > >
> > > >     6.2.5 (27)
> > > >     Further, there is the _Atomic qualifier. The presence of the
> _Atomic
> > > >     qualifier designates an atomic type. The size, representation,
> and
> > alignment
> > > >     of an atomic type need not be the same as those of the
> corresponding
> > > >     unqualified type.
> > > >
> > > >     7.17.6 (3)
> > > >     NOTE The representation of atomic integer types need not have the
> > same size
> > > >     as their corresponding regular types. They should have the same
> > > > size whenever
> > > >     possible, as it eases effort required to port existing code.
> > > >
> > > > i use the term `potential abi break' with intent because for me to
> > > > assert in absolute terms i would have to evaluate the implementation
> > > > of every current and potential future compilers atomic vs non-atomic
> > > > types. this as i'm sure you understand is not practical, it would
> > > > also defeat the purpose of moving to a standard. therefore i rely on
> > > > the specification prescribed by the standard not the detail of a
> specific
> > implementation.
> > > Can we say that the platforms 'supported' by DPDK today do not have
> this
> > problem? Any future platforms that will come to DPDK have to evaluate
> this.
> >
> > sadly i don't think we can. i believe in an earlier post i linked a bug
> filed on
> > gcc that shows that clang / gcc were producing different layout than the
> > equivalent non-atomic type.
> I looked at that bug again, it is to do with structure.
>
> >
> > >
> > > >
> > > >
> > > > > > the abstraction (whatever namespace it resides) allows the
> > > > > > existing toolchain/platform combinations to maintain
> > > > > > compatibility by defaulting to current non-standard intrinsics.
> > > > > How about using the intrinsics (__atomic_xxx) name space for
> > abstraction?
> > > > This covers the GCC and Clang compilers.
> >
> > i haven't investigated fully but there are usages of these intrinsics
> that
> > indicate there may be undesirable difference between clang and gcc
> versions.
> > the hint is there seems to be conditionally compiled code under __clang__
> > when using some __atomic's.
> I sent an RFC to address this [1]. I think the size specific intrinsics
> are not necessary.
>
> [1]
> http://patches.dpdk.org/project/dpdk/patch/20230211015622.408487-1-honnappa.nagarahalli@arm.com/
>
> >
> > for the purpose of this discussion clang just tries to look like gcc so
> i don't
> > regard them as being different compilers for the purpose of this
> discussion.
> >
> > > >
> > > > the namespace starting with `__` is also reserved for the
> implementation.
> > > > this is why compilers gcc/clang/msvc place name their intrinsic and
> > > > builtin functions starting with __ to explicitly avoid collision
> > > > with the application namespace.
> >
> > > Agreed. But, here we are considering '__atomic_' specifically (i.e.
> > > not just '__')
> >
> > i don't understand the confusion __atomic is within the __ namespace
> that is
> > reserved.
> What I mean is, we are not formulating a policy/rule to allow for any name
> space that starts with '__'.
>
> >
> > let me ask this another way, what benefit do you see to trying to
> overlap with
> > the standard namespace? the only benefit i can see is that at some point
> in
> > the future it avoids having to perform a mechanical change to eventually
> > retire the abstraction once all platform/toolchains support standard
> atomics.
> > i.e. basically s/rte_atomic/atomic/g
> >
> > is there another benefit i'm missing?
> The abstraction you have proposed solves the problem for the long term.
> The proposed abstraction stops us from thinking about moving to stdatomics.
> IMO, the problem is short term. Using the __atomic_ name space does not
> have any practical issues with the platforms DPDK supports (unless msvc has
> a problem with this, more questions below).
>
> >
> > >
> > > >
> > > >     ISO/IEC 9899:2011
> > > >
> > > >     7.1.3 (1)
> > > >     All identifiers that begin with an underscore and either an
> uppercase
> > > >     letter or another underscore are always reserved for any use.
> > > >
> > > >     ...
> > > >
> > > > > If there is another platform that uses the same name space for
> > > > > something
> > > > else, I think DPDK should not be supporting that platform.
> > > >
> > > > that's effectively a statement excluding windows platform and all
> > > > non-gcc compilers from ever supporting dpdk.
> > > Apologies, I did not understand your comment on windows platform. Do
> > you mean to say a compiler for windows platform uses '__atomic_xxx' name
> > space to provide some other functionality (and hence it would get
> excluded)?
> >
> > i mean dpdk can never fully be supported without msvc except for
> statically
> > linked builds which are niche and limit it too severely for many
> consumers to
> > practically use dpdk. there are also many application developers who
> would
> > like to integrate dpdk but can't and telling them their only choice is
> to re-port
> > their entire application to clang isn't feasible.
> >
> > i can see no technical reason why we should be excluding a major
> compiler in
> > broad use if it is capable of building dpdk. msvc arguably has some of
> the
> > most sophisticated security features in the industry and the use of those
> > features is mandated by many of the customers who might deploy dpdk
> > applications on windows.
> I did not mean DPDK should not support msvc (may be my sentence below was
> misunderstood).
> Does msvc provide '__atomic_xxx' intrinsics?
>
> >
> > > Clang supports these intrinsics. I am not sure about the merit of
> supporting
> > other non-gcc compilers. May be a topic Techboard discussion.
> > >
> > > >
> > > > > What problems do you see?
> > > >
> > > > i'm fairly certain at least one other compiler uses the __atomic
> > > > namespace but
> > > Do you mean __atomic namespace is used for some other purpose?
> > >
> > > > it would take me time to check, the most notable potential issue
> > > > that comes to mind is if such an intrinsic with the same name is
> > > > provided in a different implementation and has either regressive
> > > > code generation or different semantics it would be bad because it is
> > > > intrinsic you can't just hack around it with #undef __atomic to shim
> in a
> > semantically correct version.
> > > I do not think we should worry about regressive code generation
> problem. It
> > should be fixed by that compiler.
> > > Different semantics is something we need to worry about. It would be
> good
> > to find out more about a compiler that does this.
> >
> > again, this is about portability it's about potential not that we can
> find an
> > example.
> >
> > >
> > > >
> > > > how about this, is there another possible namespace you might
> > > > suggest that conforms or doesn't conflict with the the rules defined
> > > > in ISO/IEC 9899:2011
> > > > 7.1.3 i think if there were that would satisfy all of my concerns
> > > > related to namespaces.
> > > >
> > > > keep in mind the point of moving to a standard is to achieve
> > > > portability so if we do things that will regress us back to being
> > > > dependent on an implementation we haven't succeeded. that's all i'm
> > trying to guarantee here.
> > > Agree. We are trying to solve a problem that is temporary. I am trying
> to
> > keep the problem scope narrow which might help us push to adopt the
> > standard sooner.
> >
> > i do wish we could just target the standard but unless we are willing to
> draw a
> > line and say no more non std=c11 and also we potentially break the abi we
> > are talking years. i don't think it is reasonable to block progress for
> years, so
> > i'm offering a transitional path. it's an evolution over time that we
> have to
> > manage.
> Apologies if I am sounding like I am blocking progress. Rest assured, we
> will find a way. It is just about which solution we are going to pick.
> Also, is there are any information on how long before we move to C11?
>
> >
> > >
> > > >
> > > > i feel like we are really close on this discussion, if we can just
> > > > iron this issue out we can probably get going on the actual changes.
> > > >
> > > > thanks for the consideration.
> > > >
> > > > >
> > > > > >
> > > > > > once in place it provides an opportunity to introduce new
> > > > > > toolchain/platform combinations and enables an opt-in capability
> > > > > > to use stdatomics on existing toolchain/platform combinations
> > > > > > subject to community discussion on how/if/when.
> > > > > >
> > > > > > it would be good to get more participants into the discussion so
> > > > > > i'll cc techboard for some attention. i feel like the only area
> > > > > > that isn't decided is to do or not do this in rte_ namespace.
> > > > > >
> > > > > > i'm strongly in favor of rte_ namespace after discussion, mainly
> > > > > > due to to disadvantages of trying to overlap with the standard
> > > > > > namespace while not providing a compatible api/abi and because
> > > > > > it provides clear disambiguation of that difference in semantics
> > > > > > and compatibility with
> > > > the standard api.
> > > > > >
> > > > > > so far i've noted the following
> > > > > >
> > > > > > * we will not provide the non-explicit apis.
> > > > > +1
> > > > >
> > > > > > * we will make no attempt to support operate on struct/union
> atomics
> > > > > >   with our apis.
> > > > > +1
> > > > >
> > > > > > * we will mirror the standard api potentially in the rte_
> namespace to
> > > > > >   - reference the standard api documentation.
> > > > > >   - assume compatible semantics (sans exceptions from first 2
> points).
> > > > > >
> > > > > > my vote is to remove 'potentially' from the last point above for
> > > > > > reasons previously discussed in postings to the mail thread.
> > > > > >
> > > > > > thanks all for the discussion, i'll send up a patch removing
> > > > > > non-explicit apis for viewing.
> > > > > >
> > > > > > ty
>


^ permalink raw reply	[relevance 0%]

* RE: [PATCH] eal: introduce atomics abstraction
  @ 2023-02-13  5:04  0%                     ` Honnappa Nagarahalli
  2023-02-13 15:28  0%                       ` Ben Magistro
  2023-02-13 23:18  0%                       ` Tyler Retzlaff
  0 siblings, 2 replies; 200+ results
From: Honnappa Nagarahalli @ 2023-02-13  5:04 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: Morten Brørup, thomas, dev, bruce.richardson,
	david.marchand, jerinj, konstantin.ananyev, ferruh.yigit, nd,
	techboard, nd

Hi Tyler,
	Few more comments inline. Let us continue to make progress; I will add this topic to the Techboard discussion on 22nd Feb.

> -----Original Message-----
> From: Tyler Retzlaff <roretzla@linux.microsoft.com>
> Sent: Friday, February 10, 2023 2:30 PM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Cc: Morten Brørup <mb@smartsharesystems.com>; thomas@monjalon.net;
> dev@dpdk.org; bruce.richardson@intel.com; david.marchand@redhat.com;
> jerinj@marvell.com; konstantin.ananyev@huawei.com;
> ferruh.yigit@amd.com; nd <nd@arm.com>; techboard@dpdk.org
> Subject: Re: [PATCH] eal: introduce atomics abstraction
> 
> On Fri, Feb 10, 2023 at 05:30:00AM +0000, Honnappa Nagarahalli wrote:
> > <snip>
> >
> > > On Thu, Feb 09, 2023 at 12:16:38AM +0000, Honnappa Nagarahalli wrote:
> > > > <snip>
> > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > For environments where stdatomics are not supported,
> > > > > > > > > > we could
> > > > > > > have a
> > > > > > > > > stdatomic.h in DPDK implementing the same APIs (we have
> > > > > > > > > to support
> > > > > > > only
> > > > > > > > > _explicit APIs). This allows the code to use stdatomics
> > > > > > > > > APIs and
> > > > > > > when we move
> > > > > > > > > to minimum supported standard C11, we just need to get
> > > > > > > > > rid of the
> > > > > > > file in DPDK
> > > > > > > > > repo.
> > > > > > > > >
> > > > > > > > > my concern with this is that if we provide a stdatomic.h
> > > > > > > > > or
> > > > > > > introduce names
> > > > > > > > > from stdatomic.h it's a violation of the C standard.
> > > > > > > > >
> > > > > > > > > references:
> > > > > > > > >  * ISO/IEC 9899:2011 sections 7.1.2, 7.1.3.
> > > > > > > > >  * GNU libc manual
> > > > > > > > >
> > > > > > > > > https://www.gnu.org/software/libc/manual/html_node/Reser
> > > > > > > > > ved-
> > > > > > > > > Names.html
> > > > > > > > >
> > > > > > > > > in effect the header, the names and in some instances
> > > > > > > > > namespaces
> > > > > > > introduced
> > > > > > > > > are reserved by the implementation. there are several
> > > > > > > > > reasons in
> > > > > > > the GNU libc
> > > > > > > > Wouldn't this apply only after the particular APIs were
> introduced?
> > > > > > > i.e. it should not apply if the compiler does not support stdatomics.
> > > > > > >
> > > > > > > yeah, i agree they're being a bit wishy washy in the
> > > > > > > wording, but i'm not convinced glibc folks are documenting
> > > > > > > this as permissive guidance against.
> > > > > > >
> > > > > > > >
> > > > > > > > > manual that explain the justification for these
> > > > > > > > > reservations and if
> > > > > > > if we think
> > > > > > > > > about ODR and ABI compatibility we can conceive of others.
> > > > > > > > >
> > > > > > > > > i'll also remark that the inter-mingling of names from
> > > > > > > > > the POSIX
> > > > > > > standard
> > > > > > > > > implicitly exposed as a part of the EAL public API has
> > > > > > > > > been
> > > > > > > problematic for
> > > > > > > > > portability.
> > > > > > > > These should be exposed as EAL APIs only when compiled
> > > > > > > > with a
> > > > > > > compiler that does not support stdatomics.
> > > > > > >
> > > > > > > you don't necessarily compile dpdk, the application or its
> > > > > > > other dynamically linked dependencies with the same compiler
> > > > > > > at the same time.
> > > > > > > i.e. basically the model of any dpdk-dev package on any
> > > > > > > linux distribution.
> > > > > > >
> > > > > > > if dpdk is built without real stdatomic types but the
> > > > > > > application has to interoperate with a different kit or
> > > > > > > library that does they would be forced to dance around dpdk
> > > > > > > with their own version of a shim to hide our faked up stdatomics.
> > > > > > >
> > > > > >
> > > > > > So basically, if we want a binary DPDK distribution to be
> > > > > > compatible with a
> > > > > separate application build environment, they both have to
> > > > > implement atomics the same way, i.e. agree on the ABI for atomics.
> > > > > >
> > > > > > Summing up, this leaves us with only two realistic options:
> > > > > >
> > > > > > 1. Go all in on C11 stdatomics, also requiring the application
> > > > > > build
> > > > > environment to support C11 stdatomics.
> > > > > > 2. Provide our own DPDK atomics library.
> > > > > >
> > > > > > (As mentioned by Tyler, the third option - using C11
> > > > > > stdatomics inside DPDK, and requiring a build environment
> > > > > > without C11 stdatomics to implement a shim - is not
> > > > > > realistic!)
> > > > > >
> > > > > > I strongly want atomics to be available for use across inline
> > > > > > and compiled
> > > > > code; i.e. it must be possible for both compiled DPDK functions
> > > > > and inline functions to perform atomic transactions on the same
> atomic variable.
> > > > >
> > > > > i consider it a mandatory requirement. i don't see practically
> > > > > how we could withdraw existing use and even if we had clean way
> > > > > i don't see why we would want to. so this item is defintely
> > > > > settled if you were
> > > concerned.
> > > > I think I agree here.
> > > >
> > > > >
> > > > > >
> > > > > > So either we upgrade the DPDK build requirements to support
> > > > > > C11 (including
> > > > > the optional stdatomics), or we provide our own DPDK atomics.
> > > > >
> > > > > i think the issue of requiring a toolchain conformant to a
> > > > > specific standard is a separate matter because any adoption of
> > > > > C11 standard atomics is a potential abi break from the current use of
> intrinsics.
> > > > I am not sure why you are calling it as ABI break. Referring to
> > > > [1], I just see
> > > wrappers around intrinsics (though [2] does not use the intrinsics).
> > > >
> > > > [1]
> > > > https://github.com/gcc-mirror/gcc/blob/master/gcc/ginclude/stdatom
> > > > ic.h
> > > > [2]
> > > > https://github.com/llvm-mirror/clang/blob/master/lib/Headers/stdat
> > > > omic
> > > > .h
> > >
> > > it's a potential abi break because atomic types are not the same
> > > types as their corresponding integer types etc.. (or at least are
> > > not guaranteed to be by all implementations of c as an abstract language).
> > >
> > >     ISO/IEC 9899:2011
> > >
> > >     6.2.5 (27)
> > >     Further, there is the _Atomic qualifier. The presence of the _Atomic
> > >     qualifier designates an atomic type. The size, representation, and
> alignment
> > >     of an atomic type need not be the same as those of the corresponding
> > >     unqualified type.
> > >
> > >     7.17.6 (3)
> > >     NOTE The representation of atomic integer types need not have the
> same size
> > >     as their corresponding regular types. They should have the same
> > > size whenever
> > >     possible, as it eases effort required to port existing code.
> > >
> > > i use the term `potential abi break' with intent because for me to
> > > assert in absolute terms i would have to evaluate the implementation
> > > of every current and potential future compilers atomic vs non-atomic
> > > types. this as i'm sure you understand is not practical, it would
> > > also defeat the purpose of moving to a standard. therefore i rely on
> > > the specification prescribed by the standard not the detail of a specific
> implementation.
> > Can we say that the platforms 'supported' by DPDK today do not have this
> problem? Any future platforms that will come to DPDK have to evaluate this.
> 
> sadly i don't think we can. i believe in an earlier post i linked a bug filed on
> gcc that shows that clang / gcc were producing different layout than the
> equivalent non-atomic type.
I looked at that bug again; it has to do with structures.

> 
> >
> > >
> > >
> > > > > the abstraction (whatever namespace it resides) allows the
> > > > > existing toolchain/platform combinations to maintain
> > > > > compatibility by defaulting to current non-standard intrinsics.
> > > > How about using the intrinsics (__atomic_xxx) name space for
> abstraction?
> > > This covers the GCC and Clang compilers.
> 
> i haven't investigated fully but there are usages of these intrinsics that
> indicate there may be undesirable difference between clang and gcc versions.
> the hint is there seems to be conditionally compiled code under __clang__
> when using some __atomic's.
I sent an RFC to address this [1]. I think the size-specific intrinsics are not necessary.

[1] http://patches.dpdk.org/project/dpdk/patch/20230211015622.408487-1-honnappa.nagarahalli@arm.com/
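
For illustration only (this is not the contents of the RFC), the generic
builtins already adapt to the operand width, which is why the size-specific
variants look redundant:

  #include <stdint.h>

  static inline uint32_t
  add_relaxed_u32(uint32_t *p, uint32_t v)
  {
          return __atomic_fetch_add(p, v, __ATOMIC_RELAXED);
  }

  static inline uint64_t
  add_relaxed_u64(uint64_t *p, uint64_t v)
  {
          return __atomic_fetch_add(p, v, __ATOMIC_RELAXED);
  }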

> 
> for the purpose of this discussion clang just tries to look like gcc so i don't
> regard them as being different compilers for the purpose of this discussion.
> 
> > >
> > > the namespace starting with `__` is also reserved for the implementation.
> > > this is why compilers gcc/clang/msvc place name their intrinsic and
> > > builtin functions starting with __ to explicitly avoid collision
> > > with the application namespace.
> 
> > Agreed. But, here we are considering '__atomic_' specifically (i.e.
> > not just '__')
> 
> i don't understand the confusion __atomic is within the __ namespace that is
> reserved.
What I mean is, we are not formulating a policy/rule to allow any namespace that starts with '__'.

> 
> let me ask this another way, what benefit do you see to trying to overlap with
> the standard namespace? the only benefit i can see is that at some point in
> the future it avoids having to perform a mechanical change to eventually
> retire the abstraction once all platform/toolchains support standard atomics.
> i.e. basically s/rte_atomic/atomic/g
> 
> is there another benefit i'm missing?
The abstraction you have proposed solves the problem for the long term, but it also stops us from thinking about moving to stdatomics.
IMO, the problem is short term. Using the __atomic_ namespace does not have any practical issues on the platforms DPDK supports (unless msvc has a problem with this, more questions below).
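
As a rough sketch only (the names and the RTE_USE_C11_STDATOMICS flag below
are hypothetical, not an agreed API), the two directions being discussed
differ roughly like this:

  /* Direction 1: an rte_-namespaced wrapper that maps to whatever the
   * toolchain provides; the memory-order constants and the operand type
   * (_Atomic vs plain) would need the same treatment. */
  #ifdef RTE_USE_C11_STDATOMICS
  #include <stdatomic.h>
  #define rte_atomic_fetch_add_explicit(p, v, mo) \
          atomic_fetch_add_explicit(p, v, mo)
  #else
  #define rte_atomic_fetch_add_explicit(p, v, mo) \
          __atomic_fetch_add(p, v, mo)
  #endif

  /* Direction 2 (argued for here): keep calling the __atomic_* builtins
   * directly, since GCC and Clang both provide them today. */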

> 
> >
> > >
> > >     ISO/IEC 9899:2011
> > >
> > >     7.1.3 (1)
> > >     All identifiers that begin with an underscore and either an uppercase
> > >     letter or another underscore are always reserved for any use.
> > >
> > >     ...
> > >
> > > > If there is another platform that uses the same name space for
> > > > something
> > > else, I think DPDK should not be supporting that platform.
> > >
> > > that's effectively a statement excluding windows platform and all
> > > non-gcc compilers from ever supporting dpdk.
> > Apologies, I did not understand your comment on windows platform. Do
> you mean to say a compiler for windows platform uses '__atomic_xxx' name
> space to provide some other functionality (and hence it would get excluded)?
> 
> i mean dpdk can never fully be supported without msvc except for statically
> linked builds which are niche and limit it too severely for many consumers to
> practically use dpdk. there are also many application developers who would
> like to integrate dpdk but can't and telling them their only choice is to re-port
> their entire application to clang isn't feasible.
> 
> i can see no technical reason why we should be excluding a major compiler in
> broad use if it is capable of building dpdk. msvc arguably has some of the
> most sophisticated security features in the industry and the use of those
> features is mandated by many of the customers who might deploy dpdk
> applications on windows.
I did not mean DPDK should not support msvc (maybe my sentence below was misunderstood).
Does msvc provide '__atomic_xxx' intrinsics?
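
If the answer is no and only the _Interlocked* family is available, a
portability wrapper would have to look roughly like this (illustrative
function name, not a proposed API):

  #ifdef _MSC_VER
  #include <intrin.h>
  static inline long
  example_fetch_add(long volatile *p, long v)
  {
          /* returns the value before the add, full-barrier semantics */
          return _InterlockedExchangeAdd(p, v);
  }
  #else
  static inline long
  example_fetch_add(long volatile *p, long v)
  {
          return __atomic_fetch_add(p, v, __ATOMIC_SEQ_CST);
  }
  #endif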

> 
> > Clang supports these intrinsics. I am not sure about the merit of supporting
> other non-gcc compilers. May be a topic Techboard discussion.
> >
> > >
> > > > What problems do you see?
> > >
> > > i'm fairly certain at least one other compiler uses the __atomic
> > > namespace but
> > Do you mean __atomic namespace is used for some other purpose?
> >
> > > it would take me time to check, the most notable potential issue
> > > that comes to mind is if such an intrinsic with the same name is
> > > provided in a different implementation and has either regressive
> > > code generation or different semantics it would be bad because it is
> > > intrinsic you can't just hack around it with #undef __atomic to shim in a
> semantically correct version.
> > I do not think we should worry about regressive code generation problem. It
> should be fixed by that compiler.
> > Different semantics is something we need to worry about. It would be good
> to find out more about a compiler that does this.
> 
> again, this is about portability it's about potential not that we can find an
> example.
> 
> >
> > >
> > > how about this, is there another possible namespace you might
> > > suggest that conforms or doesn't conflict with the the rules defined
> > > in ISO/IEC 9899:2011
> > > 7.1.3 i think if there were that would satisfy all of my concerns
> > > related to namespaces.
> > >
> > > keep in mind the point of moving to a standard is to achieve
> > > portability so if we do things that will regress us back to being
> > > dependent on an implementation we haven't succeeded. that's all i'm
> trying to guarantee here.
> > Agree. We are trying to solve a problem that is temporary. I am trying to
> keep the problem scope narrow which might help us push to adopt the
> standard sooner.
> 
> i do wish we could just target the standard but unless we are willing to draw a
> line and say no more non std=c11 and also we potentially break the abi we
> are talking years. i don't think it is reasonable to block progress for years, so
> i'm offering a transitional path. it's an evolution over time that we have to
> manage.
Apologies if I am sounding like I am blocking progress. Rest assured, we will find a way. It is just about which solution we are going to pick.
Also, is there any information on how long it will be before we move to C11?

> 
> >
> > >
> > > i feel like we are really close on this discussion, if we can just
> > > iron this issue out we can probably get going on the actual changes.
> > >
> > > thanks for the consideration.
> > >
> > > >
> > > > >
> > > > > once in place it provides an opportunity to introduce new
> > > > > toolchain/platform combinations and enables an opt-in capability
> > > > > to use stdatomics on existing toolchain/platform combinations
> > > > > subject to community discussion on how/if/when.
> > > > >
> > > > > it would be good to get more participants into the discussion so
> > > > > i'll cc techboard for some attention. i feel like the only area
> > > > > that isn't decided is to do or not do this in rte_ namespace.
> > > > >
> > > > > i'm strongly in favor of rte_ namespace after discussion, mainly
> > > > > due to to disadvantages of trying to overlap with the standard
> > > > > namespace while not providing a compatible api/abi and because
> > > > > it provides clear disambiguation of that difference in semantics
> > > > > and compatibility with
> > > the standard api.
> > > > >
> > > > > so far i've noted the following
> > > > >
> > > > > * we will not provide the non-explicit apis.
> > > > +1
> > > >
> > > > > * we will make no attempt to support operate on struct/union atomics
> > > > >   with our apis.
> > > > +1
> > > >
> > > > > * we will mirror the standard api potentially in the rte_ namespace to
> > > > >   - reference the standard api documentation.
> > > > >   - assume compatible semantics (sans exceptions from first 2 points).
> > > > >
> > > > > my vote is to remove 'potentially' from the last point above for
> > > > > reasons previously discussed in postings to the mail thread.
> > > > >
> > > > > thanks all for the discussion, i'll send up a patch removing
> > > > > non-explicit apis for viewing.
> > > > >
> > > > > ty

^ permalink raw reply	[relevance 0%]

* Re: [PATCH v6 1/3] ethdev: skip congestion management configuration
  @ 2023-02-11  5:16  0%     ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-02-11  5:16 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Rakesh Kudurumalla, Ori Kam, Thomas Monjalon, Andrew Rybchenko,
	jerinj, ndabilpuram, dev, David Marchand

On Sat, Feb 11, 2023 at 6:05 AM Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 2/10/2023 8:26 AM, Rakesh Kudurumalla wrote:
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index b60987db4b..f4eb4232d4 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -2203,6 +2203,17 @@ enum rte_flow_action_type {
> >        */
> >       RTE_FLOW_ACTION_TYPE_DROP,
> >
> > +     /**
> > +      * Skip congestion management configuration
> > +      *
> > +      * Using rte_eth_cman_config_set() API the application
> > +      * can configure ethdev Rx queue's congestion mechanism.
> > +      * Introducing RTE_FLOW_ACTION_TYPE_SKIP_CMAN flow action to skip the
> > +      * congestion configuration applied to the given ethdev Rx queue.
> > +      *
> > +      */
> > +     RTE_FLOW_ACTION_TYPE_SKIP_CMAN,
> > +
>
> Inserting new enum item in to the middle of the enum upsets the ABI
> checks [1], can it go to the end?

Yes.

>
>
>
>
> [1]
> 1 function with some indirect sub-type change:
>
>   [C] 'function size_t rte_flow_copy(rte_flow_desc*, size_t, const
> rte_flow_attr*, const rte_flow_item*, const rte_flow_action*)' at
> rte_flow.c:1092:1 has some indirect sub-type changes:
>     parameter 1 of type 'rte_flow_desc*' has sub-type changes:
>       in pointed to type 'struct rte_flow_desc' at rte_flow.h:4326:1:
>         type size hasn't changed
>         1 data member changes (1 filtered):
>           type of 'rte_flow_action* actions' changed:
>             in pointed to type 'struct rte_flow_action' at
> rte_flow.h:3775:1:
>               type size hasn't changed
>               1 data member change:
>                 type of 'rte_flow_action_type type' changed:
>                   type size hasn't changed
>                   1 enumerator insertion:
>
> 'rte_flow_action_type::RTE_FLOW_ACTION_TYPE_SKIP_CMAN' value '8'
>                   50 enumerator changes:
>                     'rte_flow_action_type::RTE_FLOW_ACTION_TYPE_COUNT'
> from value '8' to '9' at rte_flow.h:2216:1
>                     ...
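
A minimal illustration (hypothetical enumerators) of why the checker flags
this: inserting mid-enum renumbers every later action, while appending at
the end keeps the existing values stable:

  /* before: DROP == 7, COUNT == 8 */
  enum action_before { ACT_DROP = 7, ACT_COUNT };

  /* insertion in the middle silently moves COUNT from 8 to 9 */
  enum action_insert { ACT_DROP_I = 7, ACT_SKIP_CMAN_I, ACT_COUNT_I };

  /* appending leaves all existing enumerator values unchanged */
  enum action_append { ACT_DROP_A = 7, ACT_COUNT_A, ACT_SKIP_CMAN_A };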

^ permalink raw reply	[relevance 0%]

-- links below jump to the message on this page --
2021-09-13  8:45     [dpdk-dev] Questions about rte_eth_link_speed_to_str API Min Hu (Connor)
2021-09-16  2:56     ` [dpdk-dev] [RFC] ethdev: improve link speed to string Min Hu (Connor)
2021-09-16  6:22       ` Andrew Rybchenko
2021-09-16  8:16         ` Min Hu (Connor)
2021-09-16  8:21           ` Andrew Rybchenko
2021-09-17  0:43             ` Min Hu (Connor)
2023-01-19 11:41               ` Ferruh Yigit
2023-01-19 16:45                 ` Stephen Hemminger
2023-02-10 14:41                   ` Ferruh Yigit
2023-03-23 14:40  3%                 ` Ferruh Yigit
2022-04-20  8:16     [PATCH v1 0/5] Direct re-arming of buffers on receive side Feifei Wang
2023-01-04  7:30     ` [PATCH v3 0/3] " Feifei Wang
2023-01-04  7:30       ` [PATCH v3 1/3] ethdev: enable direct rearm with separate API Feifei Wang
2023-01-04  8:21         ` Morten Brørup
2023-01-04  8:51           ` 回复: " Feifei Wang
2023-01-04 10:11             ` Morten Brørup
2023-02-24  8:55  0%           ` 回复: " Feifei Wang
2022-09-28 12:45     [PATCH 4/5] test/security: add inline MACsec cases Akhil Goyal
2023-05-23 19:49     ` [PATCH 00/13] Add MACsec unit test cases Akhil Goyal
2023-05-23 19:49       ` [PATCH 02/13] security: add MACsec packet number threshold Akhil Goyal
2023-05-23 21:29  3%     ` Stephen Hemminger
2023-05-24  7:12  0%       ` [EXT] " Akhil Goyal
2023-05-24  8:09  3%         ` Akhil Goyal
2022-10-20  9:31     [PATCH V5] ethdev: fix one address occupies two indexes in MAC addrs Huisong Li
2023-02-02 12:36     ` [PATCH V8] ethdev: fix one address occupies two entries " Huisong Li
2023-05-16 11:47  0%   ` lihuisong (C)
2023-05-16 14:13  0%     ` Ferruh Yigit
2023-05-17  7:45  0%       ` lihuisong (C)
2023-05-17  8:53  0%         ` Ferruh Yigit
2023-05-19  3:00  4% ` [PATCH V9] " Huisong Li
2023-05-19  9:31  4% ` [PATCH V10] " Huisong Li
2022-11-03 15:47     [PATCH 0/2] ABI check updates David Marchand
2023-03-23 17:15  9% ` [PATCH v2 " David Marchand
2023-03-23 17:15 21%   ` [PATCH v2 1/2] devtools: unify configuration for ABI check David Marchand
2023-03-23 17:15 41%   ` [PATCH v2 2/2] devtools: stop depending on libabigail xml format David Marchand
2023-03-28 18:38  4%   ` [PATCH v2 0/2] ABI check updates Thomas Monjalon
2022-11-17  5:09     [PATCH v1 00/13] graph enhancement for multi-core dispatch Zhirun Yan
2022-11-17  5:09     ` [PATCH v1 04/13] graph: add get/set graph worker model APIs Zhirun Yan
2023-02-20 13:50  3%   ` Jerin Jacob
2023-02-24  6:31  0%     ` Yan, Zhirun
2023-02-26 22:23  0%       ` Jerin Jacob
2023-03-02  8:38  0%         ` Yan, Zhirun
2023-03-02 13:58  0%           ` Jerin Jacob
2023-03-07  8:26  0%             ` Yan, Zhirun
2023-01-16 15:37     [PATCH 0/5] dma/ioat: fix issues with stopping and restarting device Bruce Richardson
2023-01-16 17:37     ` [PATCH v2 0/6] " Bruce Richardson
2023-01-16 17:37       ` [PATCH v2 6/6] test/dmadev: add tests for stopping and restarting dev Bruce Richardson
2023-02-14 16:04  0%     ` Kevin Laatz
2023-02-15  1:59  3%     ` fengchengwen
2023-02-15 11:57  3%       ` Bruce Richardson
2023-02-16  1:24  0%         ` fengchengwen
2023-02-16  9:24  0%           ` Bruce Richardson
2023-02-16 11:09     ` [PATCH v3 0/6] dma/ioat: fix issues with stopping and restarting device Bruce Richardson
2023-02-16 11:09  3%   ` [PATCH v3 6/6] test/dmadev: add tests for stopping and restarting dev Bruce Richardson
2023-02-16 11:42  0%     ` fengchengwen
2023-01-31 22:42     [PATCH] eal: introduce atomics abstraction Thomas Monjalon
2023-02-01  1:07     ` Honnappa Nagarahalli
2023-02-01 21:41       ` Tyler Retzlaff
2023-02-07 23:34         ` Honnappa Nagarahalli
2023-02-08  1:20           ` Tyler Retzlaff
2023-02-08  8:31             ` Morten Brørup
2023-02-08 16:35               ` Tyler Retzlaff
2023-02-09  0:16                 ` Honnappa Nagarahalli
2023-02-09 17:30                   ` Tyler Retzlaff
2023-02-10  5:30                     ` Honnappa Nagarahalli
2023-02-10 20:30                       ` Tyler Retzlaff
2023-02-13  5:04  0%                     ` Honnappa Nagarahalli
2023-02-13 15:28  0%                       ` Ben Magistro
2023-02-13 23:18  0%                       ` Tyler Retzlaff
2023-02-03  5:07     [PATCH v3 0/2] add new PHY affinity in the flow item and Tx queue API Jiawei Wang
2023-02-03 13:33     ` [PATCH v4 " Jiawei Wang
2023-02-03 13:33       ` [PATCH v4 1/2] ethdev: introduce the PHY affinity field in " Jiawei Wang
2023-02-09 19:44         ` Ferruh Yigit
2023-02-14  9:38  0%       ` Jiawei(Jonny) Wang
2023-02-14 10:01  0%         ` Ferruh Yigit
2023-02-03  8:08     [PATCH] doc: update NFP documentation with Corigine information Chaoyong He
2023-02-15 13:37  0% ` Ferruh Yigit
2023-02-15 17:58  0%   ` Niklas Söderlund
2023-02-20  8:41     ` [PATCH v2 0/3] update NFP documentation Chaoyong He
2023-02-20  8:41  8%   ` [PATCH v2 3/3] doc: add Corigine information to nfp documentation Chaoyong He
2023-02-07 20:41     [RFC 00/13] Replace static logtypes with static Stephen Hemminger
2023-02-13 19:55  3% ` [PATCH v4 00/19] Replace use of static logtypes Stephen Hemminger
2023-02-13 19:55  3%   ` [PATCH v4 18/19] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-14  2:18  3% ` [PATCH v5 00/22] Replace us of static logtypes Stephen Hemminger
2023-02-14  2:19  3%   ` [PATCH v5 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-14 22:47  3% ` [PATCH v6 00/22] Replace use of static logtypes in libraries Stephen Hemminger
2023-02-14 22:47       ` [PATCH v6 01/22] gso: don't log message on non TCP/UDP Stephen Hemminger
2023-02-15  7:26  3%     ` Hu, Jiayu
2023-02-15 17:12  0%       ` Stephen Hemminger
2023-02-14 22:47  3%   ` [PATCH v6 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-15 17:23  3% ` [PATCH v7 00/22] Replace use of static logtypes in libraries Stephen Hemminger
2023-02-15 17:23  3%   ` [PATCH v7 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-20 23:35  3% ` [PATCH v8 00/22] Convert static logtypes in libraries Stephen Hemminger
2023-02-20 23:35  3%   ` [PATCH v8 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-21 15:02  0%     ` David Marchand
2023-02-21 19:01  2% ` [PATCH v9 00/22] Convert static logtypes in libraries Stephen Hemminger
2023-02-21 19:02  2%   ` [PATCH v9 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-22 16:07  2% ` [PATCH v10 00/22] Convert static log type values in libraries Stephen Hemminger
2023-02-22 16:08  2%   ` [PATCH v10 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-22 21:55  2% ` [PATCH v11 00/22] Convert static log type values in libraries Stephen Hemminger
2023-02-22 21:55  2%   ` [PATCH v11 21/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-02-23  7:11  0%     ` Ruifeng Wang
2023-02-23  7:27  0%       ` Ruifeng Wang
2023-02-24  9:45  0%     ` Ruifeng Wang
2023-02-09  3:03     [PATCH] mem: fix displaying heap ID failed for heap info command Huisong Li
2023-02-22  7:49  4% ` [PATCH v2] " Huisong Li
2023-02-10  2:48     [PATCH v4 0/3] add telemetry cmds for ring Jie Hai
2023-05-09  1:29  3% ` [PATCH v5 " Jie Hai
2023-05-09  1:29  3%   ` [PATCH v5 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-05-09  6:23  0%     ` Ruifeng Wang
2023-05-09  8:15  0%       ` Jie Hai
2023-05-09  9:24  3%   ` [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
2023-05-09  9:24  3%     ` [PATCH v6 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-02-10  8:14     [PATCH v5 1/3] ethdev: skip congestion management configuration Rakesh Kudurumalla
2023-02-10  8:26     ` [PATCH v6 " Rakesh Kudurumalla
2023-02-11  0:35       ` Ferruh Yigit
2023-02-11  5:16  0%     ` Jerin Jacob
2023-02-13  2:19     [PATCH v6 00/21] add support for cpfl PMD in DPDK Mingxia Liu
2023-02-16  0:29     ` [PATCH v7 01/21] net/cpfl: support device initialization Mingxia Liu
2023-02-27 13:46       ` Ferruh Yigit
2023-02-27 15:45         ` Thomas Monjalon
2023-02-27 23:38  3%       ` Ferruh Yigit
2023-02-13 11:31     [PATCH v10 0/4] add support for self monitoring Tomasz Duszynski
2023-02-16 17:54     ` [PATCH v11 " Tomasz Duszynski
2023-02-16 17:54       ` [PATCH v11 1/4] lib: add generic support for reading PMU events Tomasz Duszynski
2023-02-16 23:50         ` Konstantin Ananyev
2023-02-17  8:49           ` [EXT] " Tomasz Duszynski
2023-02-17 10:14             ` Konstantin Ananyev
2023-02-19 14:23               ` Tomasz Duszynski
2023-02-20 14:31                 ` Konstantin Ananyev
2023-02-20 16:59                   ` Tomasz Duszynski
2023-02-20 17:21                     ` Konstantin Ananyev
2023-02-20 20:42                       ` Tomasz Duszynski
2023-02-21  0:48  3%                     ` Konstantin Ananyev
2023-02-27  8:12  0%                       ` Tomasz Duszynski
2023-02-19 11:55     [PATCH] drivers: skip build of sub-libs not supporting IOVA mode Thomas Monjalon
2023-03-06 16:13     ` [PATCH v2 0/2] refactor diasbling IOVA as PA Thomas Monjalon
2023-03-06 16:13  2%   ` [PATCH v2 1/2] build: clarify configuration without IOVA field in mbuf Thomas Monjalon
2023-03-09  1:43  0%     ` fengchengwen
2023-03-09  7:29  0%       ` Thomas Monjalon
2023-03-09 11:23  0%         ` fengchengwen
2023-03-09 12:12  0%           ` Thomas Monjalon
2023-03-09 13:10  0%             ` Bruce Richardson
2023-03-13 15:51  0%               ` Thomas Monjalon
2023-02-21  3:10     [PATCH 0/2] configure RSS and handle metadata correctly Chaoyong He
2023-02-21  3:10  3% ` [PATCH 2/2] net/nfp: modify RSS's processing logic Chaoyong He
2023-02-21  3:29     ` [PATCH v2 0/2] configure RSS and handle metadata correctly Chaoyong He
2023-02-21  3:29  3%   ` [PATCH v2 2/2] net/nfp: modify RSS's processing logic Chaoyong He
2023-02-21  3:55       ` [PATCH v2 0/2] configure RSS and handle metadata correctly Chaoyong He
2023-02-21  3:55  3%     ` [PATCH v2 2/2] net/nfp: modify RSS's processing logic Chaoyong He
2023-02-22 21:43     [PATCH] vhost: fix madvise arguments alignment Mike Pattrick
2023-02-23  4:35     ` [PATCH v2] " Mike Pattrick
2023-02-23 16:12  3%   ` Maxime Coquelin
2023-02-23 16:57  0%     ` Mike Pattrick
2023-02-24 15:05  4%       ` Patrick Robb
2023-02-23 16:04     [RFC PATCH] drivers/net: fix RSS multi-queue mode check Ferruh Yigit
2023-02-27  1:34     ` lihuisong (C)
2023-02-27  9:57       ` Ferruh Yigit
2023-02-28  1:24         ` lihuisong (C)
2023-02-28  8:23  3%       ` Ferruh Yigit
2023-02-28  9:39  3% [RFC 0/2] Add high-performance timer facility Mattias Rönnblom
2023-02-28 16:01  0% ` Morten Brørup
2023-03-01 11:18  0%   ` Mattias Rönnblom
2023-03-01 13:31  3%     ` Morten Brørup
2023-03-01 15:50  3%       ` Mattias Rönnblom
2023-03-01 17:06  0%         ` Morten Brørup
2023-03-15 17:03  3% ` [RFC v2 " Mattias Rönnblom
2023-03-09  8:56  4% [RFC 1/2] security: introduce out of place support for inline ingress Nithin Dabilpuram
2023-04-11 10:04  4% ` [PATCH 1/3] " Nithin Dabilpuram
2023-04-11 18:05  3%   ` Stephen Hemminger
2023-04-18  8:33  4%     ` Jerin Jacob
2023-04-24 22:41  3%       ` Thomas Monjalon
2023-05-19  8:07  4%         ` Jerin Jacob
2023-03-13  7:26     [PATCH] lib/hash: new feature adding existing key Abdullah Ömer Yamaç
2023-03-13  7:35     ` Abdullah Ömer Yamaç
2023-03-13 15:48  3%   ` Stephen Hemminger
2023-03-13  9:34     [PATCH] reorder: fix registration of dynamic field in mbuf Volodymyr Fialko
2023-03-13 10:19  3% ` David Marchand
2023-03-14 12:48     [PATCH 0/5] fix segment fault when parse args Chengwen Feng
2023-03-16 18:18     ` Ferruh Yigit
2023-03-17  2:43  3%   ` fengchengwen
2023-03-21 13:50  0%     ` Ferruh Yigit
2023-03-22  1:15  0%       ` fengchengwen
2023-03-22  8:53  0%         ` Ferruh Yigit
2023-03-22 13:49  0%           ` Thomas Monjalon
2023-03-23 11:58  3%             ` fengchengwen
2023-03-23 12:51  3%               ` Thomas Monjalon
2023-03-15 11:00     [PATCH 0/5] support setting and querying RSS algorithms Dongdong Liu
2023-03-15 11:00 10% ` [PATCH 1/5] ethdev: support setting and querying rss algorithm Dongdong Liu
2023-03-15 11:28  0%   ` Ivan Malov
2023-03-16 13:10  3%     ` Dongdong Liu
2023-03-16 14:31  0%       ` Ivan Malov
2023-03-15 13:43  3%   ` Thomas Monjalon
2023-03-16 13:16  3%     ` Dongdong Liu
2023-03-20 10:26     [PATCH 1/2] app/mldev: fix build with debug David Marchand
2023-03-20 10:26  5% ` [PATCH 2/2] ci: test compilation " David Marchand
2023-03-20 12:18     ` [PATCH v2 1/2] app/mldev: fix build " David Marchand
2023-03-20 12:18 19%   ` [PATCH v2 2/2] ci: test compilation with debug in GHA David Marchand
2023-03-24  2:16     [PATCH v2 00/15] graph enhancement for multi-core dispatch Zhirun Yan
2023-03-29  6:43     ` [PATCH v3 " Zhirun Yan
2023-03-29  6:43       ` [PATCH v3 03/15] graph: move node process into inline function Zhirun Yan
2023-03-29 15:34  3%     ` Stephen Hemminger
2023-03-29 15:41  0%       ` Jerin Jacob
2023-03-29 23:40  2% [PATCH v12 00/22] Covert static log types in libraries to dynamic Stephen Hemminger
2023-03-29 23:40  2% ` [PATCH v12 18/22] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-03-31  4:02     [PATCH v5 00/15] graph enhancement for multi-core dispatch Zhirun Yan
2023-05-09  6:03     ` [PATCH v6 " Zhirun Yan
2023-05-09  6:03       ` [PATCH v6 04/15] graph: add get/set graph worker model APIs Zhirun Yan
2023-05-24  6:08  3%     ` Jerin Jacob
2023-03-31 17:17  3% DPDK 23.03 released Thomas Monjalon
2023-03-31 20:08     [PATCH] devtools: add script to check for non inclusive naming Stephen Hemminger
2023-04-03 14:47 14% ` [PATCH v2] " Stephen Hemminger
2023-04-03  6:59 10% [PATCH] version: 23.07-rc0 David Marchand
2023-04-03  9:37 10% ` [PATCH v2] " David Marchand
2023-04-06  7:44  0%   ` David Marchand
2023-04-03 21:52     [PATCH 0/9] msvc integration changes Tyler Retzlaff
2023-04-03 21:52  6% ` [PATCH 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
2023-04-03 21:52  3% ` [PATCH 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
2023-04-04 20:07     ` [PATCH v2 0/9] msvc integration changes Tyler Retzlaff
2023-04-04 20:07  6%   ` [PATCH v2 6/9] eal: expand most macros to empty when using msvc Tyler Retzlaff
2023-04-04 20:07  3%   ` [PATCH v2 9/9] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
2023-04-05 10:56  0%     ` Bruce Richardson
2023-04-05 16:02  0%       ` Tyler Retzlaff
2023-04-05 16:17  0%         ` Bruce Richardson
2023-04-06  0:45     ` [PATCH v3 00/11] msvc integration changes Tyler Retzlaff
2023-04-06  0:45  6%   ` [PATCH v3 08/11] eal: expand most macros to empty when using msvc Tyler Retzlaff
2023-04-06  0:45  3%   ` [PATCH v3 11/11] telemetry: avoid expanding versioned symbol macros on msvc Tyler Retzlaff
2023-04-11 10:24  0%     ` Bruce Richardson
2023-04-11 20:34  0%       ` Tyler Retzlaff
2023-04-12  8:50  0%         ` Bruce Richardson
2023-04-11 21:12     ` [PATCH v4 00/14] msvc integration changes Tyler Retzlaff
2023-04-11 21:12  6%   ` [PATCH v4 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-11 21:12  3%   ` [PATCH v4 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-13 21:25     ` [PATCH v5 00/14] msvc integration changes Tyler Retzlaff
2023-04-13 21:26  6%   ` [PATCH v5 11/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-14  6:45         ` Morten Brørup
2023-04-14 17:02  4%       ` Tyler Retzlaff
2023-04-15  7:16  3%         ` Morten Brørup
2023-04-15 20:52  4%           ` Tyler Retzlaff
2023-04-15 22:41  4%             ` Morten Brørup
2023-04-13 21:26  3%   ` [PATCH v5 13/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-15  1:15     ` [PATCH v6 00/15] msvc integration changes Tyler Retzlaff
2023-04-15  1:15  5%   ` [PATCH v6 11/15] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-15  1:15  3%   ` [PATCH v6 13/15] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-17 16:10     ` [PATCH v7 00/14] msvc integration changes Tyler Retzlaff
2023-04-17 16:10  5%   ` [PATCH v7 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-17 16:10  3%   ` [PATCH v7 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-05-02  3:15     ` [PATCH v8 00/14] msvc integration changes Tyler Retzlaff
2023-05-02  3:15  5%   ` [PATCH v8 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-05-02  3:15  3%   ` [PATCH v8 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-04-05 12:40  3% [PATCH v2 0/3] vhost: add device op to offload the interrupt kick Eelco Chaudron
2023-04-05 12:41     ` [PATCH v2 3/3] " Eelco Chaudron
2023-05-10 11:44       ` David Marchand
2023-05-16  8:53         ` Eelco Chaudron
2023-05-16 10:12  3%       ` David Marchand
2023-05-16 11:36  0%         ` Eelco Chaudron
2023-05-16 11:45  0%           ` Maxime Coquelin
2023-05-16 12:07  0%             ` Eelco Chaudron
2023-05-17  9:18  0%           ` Eelco Chaudron
2023-05-08 13:58  0% ` [PATCH v2 0/3] " Eelco Chaudron
2023-04-05 23:12 17% [PATCH] MAINTAINERS: sort file entries Stephen Hemminger
2023-04-13 11:53     [PATCH v2 1/3] eal: add x86 cpuid support for monitorx Sivaprasad Tummala
2023-04-13 11:53  3% ` [PATCH v2 2/3] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
2023-04-17  4:31  3%   ` [PATCH v3 1/4] " Sivaprasad Tummala
2023-04-18  8:25         ` [PATCH v4 0/4] power: monitor support for AMD EPYC processors Sivaprasad Tummala
2023-04-18  8:25  3%       ` [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
2023-04-18  8:52  3%         ` Ferruh Yigit
2023-04-18  9:22  3%           ` Bruce Richardson
2023-04-14  8:43     [PATCH] reorder: improve buffer structure layout Volodymyr Fialko
2023-04-14 14:52  3% ` Stephen Hemminger
2023-04-14 14:54  3%   ` Bruce Richardson
2023-04-14 15:30  0%     ` Stephen Hemminger
2023-04-18  5:30     [RFC 0/4] Support VFIO sparse mmap in PCI bus Chenbo Xia
2023-04-18  7:46  3% ` David Marchand
2023-04-18  9:27  0%   ` Xia, Chenbo
2023-04-18  9:33  0%   ` Xia, Chenbo
2023-04-18 10:45     [PATCH] eventdev: fix alignment padding Sivaprasad Tummala
2023-04-18 11:06  4% ` Morten Brørup
2023-04-18 12:40  3%   ` Mattias Rönnblom
2023-04-18 12:30     ` Mattias Rönnblom
2023-04-18 14:07       ` Morten Brørup
2023-04-18 15:16         ` Mattias Rönnblom
2023-05-17 13:20           ` Jerin Jacob
2023-05-17 13:35  3%         ` Morten Brørup
2023-05-23 15:15  3%           ` Jerin Jacob
2023-04-19  8:36     [RFC] lib: set/get max memzone segments Ophir Munk
2023-04-20  7:43     ` Thomas Monjalon
2023-04-20 18:20       ` Tyler Retzlaff
2023-04-21  8:34  4%     ` Thomas Monjalon
2023-04-28 10:31     [dpdk-dev] [PATCH] net/liquidio: removed LiquidIO ethdev driver jerinj
2023-05-02 14:18  5% ` Ferruh Yigit
2023-05-08 13:44  1% ` [dpdk-dev] [PATCH v2] net/liquidio: remove " jerinj
2023-05-17 15:47  0%   ` Jerin Jacob
     [not found]     <20230125075636.363cafaf@hermes.local>
     [not found]     ` <3688057.uBEoKPz9u1@thomas>
     [not found]       ` <DS0PR11MB73090EC350B82E0730D0D9A197CE9@DS0PR11MB7309.namprd11.prod.outlook.com>
2023-05-05 15:05  3%     ` Minutes of Technical Board Meeting, 2023-01-11 Stephen Hemminger
2023-05-11  8:16     [PATCH v2] eventdev: avoid non-burst shortcut for variable-size bursts Mattias Rönnblom
2023-05-11  8:24     ` [PATCH v3] " Mattias Rönnblom
2023-05-12 11:59       ` Jerin Jacob
2023-05-12 13:15         ` Mattias Rönnblom
2023-05-15 12:38           ` Jerin Jacob
2023-05-15 20:52  3%         ` Mattias Rönnblom
2023-05-16 13:08  0%           ` Jerin Jacob
2023-05-17  7:16  3%             ` Mattias Rönnblom
2023-05-17 12:28  0%               ` Jerin Jacob
2023-05-16  6:37     [PATCH v1 0/7] ethdev: modify field API for multiple headers Michael Baum
2023-05-16  6:37  3% ` [PATCH v1 5/7] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-18 17:40     ` [PATCH v2 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-18 17:40  3%   ` [PATCH v2 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-22 19:27       ` [PATCH v3 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-22 19:28  3%     ` [PATCH v3 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-22 19:28  3%     ` [PATCH v3 5/5] ethdev: add MPLS header " Michael Baum
2023-05-23 12:48         ` [PATCH v4 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-23 12:48  3%       ` [PATCH v4 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-23 12:48  2%       ` [PATCH v4 5/5] ethdev: add MPLS header " Michael Baum
2023-05-23 21:31           ` [PATCH v5 0/5] ethdev: modify field API for multiple headers Michael Baum
2023-05-23 21:31  3%         ` [PATCH v5 4/5] ethdev: add GENEVE TLV option modification support Michael Baum
2023-05-23 21:31  2%         ` [PATCH v5 5/5] ethdev: add MPLS header " Michael Baum
     [not found]     <20220825024425.10534-1-lihuisong@huawei.com>
2023-01-31  3:33     ` [PATCH V5 0/5] app/testpmd: support multiple process attach and detach port Huisong Li
2023-05-16 11:27  0%   ` lihuisong (C)
2023-05-23  0:46  0%   ` fengchengwen
2023-05-17  6:59     [PATCH] net/bonding: replace master/slave to main/member Chaoyong He
2023-05-17 14:52  1% ` Stephen Hemminger
2023-05-18  6:32  1% ` [PATCH v2] " Chaoyong He
2023-05-18  7:01  1%   ` [PATCH v3] " Chaoyong He
2023-05-18  8:44  1%     ` [PATCH v4] " Chaoyong He
2023-05-18 15:39  3%       ` Stephen Hemminger
2023-05-17  9:08  4% [PATCH v3 0/4] vhost: add device op to offload the interrupt kick Eelco Chaudron
2023-05-17 16:15     [PATCH 00/20] Replace use of term sanity-check Stephen Hemminger
2023-05-18 16:45     ` [PATCH v2 00/19] Replace use of the " Stephen Hemminger
2023-05-18 16:45  2%   ` [PATCH v2 01/19] mbuf: replace term sanity check Stephen Hemminger
2023-05-19 17:45     ` [PATCH v3 00/19] Replace use "sanity check" Stephen Hemminger
2023-05-19 17:45  2%   ` [PATCH v3 01/19] mbuf: replace term sanity check Stephen Hemminger
2023-05-22 11:40     [PATCH 0/2] add support of showing firmware version Chaoyong He
2023-05-22 11:40  6% ` [PATCH 1/2] net/nfp: align reading of version info with kernel driver Chaoyong He
