DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: kkanas@marvell.com, dev@dpdk.org
Cc: declan.doherty@intel.com, Chas Williams <chas3@att.com>
Subject: Re: [dpdk-dev] [PATCH] net/bonding: fix test bonding MAC assignment
Date: Mon, 29 Apr 2019 15:56:37 +0100	[thread overview]
Message-ID: <124dad47-edb3-843c-e83d-359d408f050c@intel.com> (raw)
In-Reply-To: <20190426223029.23677-1-kkanas@marvell.com>

On 4/26/2019 11:30 PM, kkanas@marvell.com wrote:
> From: Krzysztof Kanas <kkanas@marvell.com>
> 
> Fix test_set_bonded_port_initialization_mac_assignment so that it works
> after re-running test_link_bonding.
> 
> Fixes: f2ef6f21ee2e ("bond: fix mac assignment to slaves")
> Cc: declan.doherty@intel.com
> 
> Signed-off-by: Krzysztof Kanas <kkanas@marvell.com>

cc'ed maintainer, Chas.

> ---
>  app/test/test_link_bonding.c | 53 +++++++++++++++++++++---------------
>  1 file changed, 31 insertions(+), 22 deletions(-)
> 
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 0fe1d78eb0f5..c00ec6c445bd 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -201,6 +201,7 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
>  }
>  
>  static int slaves_initialized;
> +static int mac_slaves_initialized;
>  
>  static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
>  static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
> @@ -873,10 +874,11 @@ test_set_explicit_bonded_mac(void)
>  static int
>  test_set_bonded_port_initialization_mac_assignment(void)
>  {
> -	int i, slave_count, bonded_port_id;
> +	int i, slave_count;
>  
>  	uint16_t slaves[RTE_MAX_ETHPORTS];
> -	int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
> +	static int bonded_port_id = -1;
> +	static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
>  
>  	struct ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
>  
> @@ -887,42 +889,49 @@ test_set_bonded_port_initialization_mac_assignment(void)
>  	/*
>  	 * 1. a - Create / configure  bonded / slave ethdevs
>  	 */
> -	bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
> -			BONDING_MODE_ACTIVE_BACKUP, rte_socket_id());
> -	TEST_ASSERT(bonded_port_id > 0, "failed to create bonded device");
> +	if (bonded_port_id == -1) {
> +		bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
> +				BONDING_MODE_ACTIVE_BACKUP, rte_socket_id());
> +		TEST_ASSERT(bonded_port_id > 0, "failed to create bonded device");
>  
> -	TEST_ASSERT_SUCCESS(configure_ethdev(bonded_port_id, 0, 0),
> -				"Failed to configure bonded ethdev");
> +		TEST_ASSERT_SUCCESS(configure_ethdev(bonded_port_id, 0, 0),
> +					"Failed to configure bonded ethdev");
> +	}
>  
> -	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
> -		char pmd_name[RTE_ETH_NAME_MAX_LEN];
> +	if (!mac_slaves_initialized) {
>  
> -		slave_mac_addr.addr_bytes[ETHER_ADDR_LEN-1] = i + 100;
> +		for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
> +			char pmd_name[RTE_ETH_NAME_MAX_LEN];
>  
> -		snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_slave_%d", i);
> +			slave_mac_addr.addr_bytes[ETHER_ADDR_LEN-1] = i + 100;
>  
> -		slave_port_ids[i] = virtual_ethdev_create(pmd_name,
> -				&slave_mac_addr, rte_socket_id(), 1);
> +			snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
> +				"eth_slave_%d", i);
>  
> -		TEST_ASSERT(slave_port_ids[i] >= 0,
> -				"Failed to create slave ethdev %s", pmd_name);
> +			slave_port_ids[i] = virtual_ethdev_create(pmd_name,
> +					&slave_mac_addr, rte_socket_id(), 1);
>  
> -		TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
> -				"Failed to configure virtual ethdev %s",
> -				pmd_name);
> -	}
> +			TEST_ASSERT(slave_port_ids[i] >= 0,
> +					"Failed to create slave ethdev %s",
> +					pmd_name);
>  
> +			TEST_ASSERT_SUCCESS(configure_ethdev(
> +						slave_port_ids[i], 1, 0),
> +					"Failed to configure virtual ethdev %s",
> +					pmd_name);
> +		}
>  
> +		mac_slaves_initialized = 1;
> +	}
>  	/*
> -	 * 2. Add slave ethdevs to bonded device
> -	 */
> +	* 2. Add slave ethdevs to bonded device
> +	*/
>  	for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
>  		TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
>  				slave_port_ids[i]),
>  				"Failed to add slave (%d) to bonded port (%d).",
>  				slave_port_ids[i], bonded_port_id);
>  	}
> -
>  	slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
>  			RTE_MAX_ETHPORTS);
>  	TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
> 


Thread overview: 8+ messages
2019-04-26 22:30 kkanas
2019-04-29 14:56 ` Ferruh Yigit [this message]
2019-04-29 16:45   ` Chas Williams
2019-04-30 11:25     ` Ferruh Yigit
