* [dpdk-dev] [PATCH v2 1/6] bonding: add spinlock to rx and tx queues
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bernard Iremonger
@ 2016-05-26 16:38 ` Bernard Iremonger
2016-06-10 18:12 ` Ananyev, Konstantin
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 0/4] bonding: locks Bernard Iremonger
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
` (5 subsequent siblings)
6 siblings, 2 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 4 ++++
drivers/net/bonding/rte_eth_bond_private.h | 4 +++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 129f04b..2e624bb 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1676,6 +1676,8 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
if (bd_rx_q == NULL)
return -1;
+ rte_spinlock_init(&bd_rx_q->lock);
+
bd_rx_q->queue_id = rx_queue_id;
bd_rx_q->dev_private = dev->data->dev_private;
@@ -1701,6 +1703,8 @@ bond_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
if (bd_tx_q == NULL)
return -1;
+ rte_spinlock_init(&bd_tx_q->lock);
+
bd_tx_q->queue_id = tx_queue_id;
bd_tx_q->dev_private = dev->data->dev_private;
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 8312397..b6abcba 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@
/*-
* BSD LICENSE
*
- * Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -76,6 +76,7 @@ struct bond_rx_queue {
/**< Copy of RX configuration structure for queue */
struct rte_mempool *mb_pool;
/**< Reference to mbuf pool to use for RX queue */
+ rte_spinlock_t lock;
};
struct bond_tx_queue {
@@ -87,6 +88,7 @@ struct bond_tx_queue {
/**< Number of TX descriptors available for the queue */
struct rte_eth_txconf tx_conf;
/**< Copy of TX configuration structure for queue */
+ rte_spinlock_t lock;
};
/** Bonded slave devices structure */
--
2.6.3
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/6] bonding: add spinlock to rx and tx queues
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
@ 2016-06-10 18:12 ` Ananyev, Konstantin
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 0/4] bonding: locks Bernard Iremonger
1 sibling, 0 replies; 42+ messages in thread
From: Ananyev, Konstantin @ 2016-06-10 18:12 UTC (permalink / raw)
To: Iremonger, Bernard, dev; +Cc: Doherty, Declan
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> ---
> drivers/net/bonding/rte_eth_bond_pmd.c | 4 ++++
> drivers/net/bonding/rte_eth_bond_private.h | 4 +++-
> 2 files changed, 7 insertions(+), 1 deletion(-)
>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
^ permalink raw reply [flat|nested] 42+ messages in thread
* [dpdk-dev] [PATCH v3 0/4] bonding: locks
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
2016-06-10 18:12 ` Ananyev, Konstantin
@ 2016-06-12 17:11 ` Bernard Iremonger
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 1/4] bonding: add spinlock to rx and tx queues Bernard Iremonger
` (3 more replies)
1 sibling, 4 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Add spinlock to bonding rx and tx queues.
Take spinlock in rx and tx burst functions.
Take all spinlocks in slave add and remove functions.
With spinlocks in place remove memcpy of slaves.
Changes in v3:
Rebase to latest master.
Drop patches 4 and 5 from v2 patchset.
Update commit messages on patches.
Changes in v2:
Replace patch 1.
Add patch 2 and reorder patches.
Add spinlock to bonding rx and tx queues.
Take all spinlocks in slave add and remove functions.
Replace readlocks with spinlocks.
Bernard Iremonger (4):
bonding: add spinlock to rx and tx queues
bonding: grab queue spinlocks in slave add and remove
bonding: take queue spinlock in rx/tx burst functions
bonding: remove memcpy from burst functions
drivers/net/bonding/rte_eth_bond_api.c | 52 +++++++-
drivers/net/bonding/rte_eth_bond_pmd.c | 189 ++++++++++++++++++-----------
drivers/net/bonding/rte_eth_bond_private.h | 4 +-
3 files changed, 167 insertions(+), 78 deletions(-)
--
2.6.3
^ permalink raw reply [flat|nested] 42+ messages in thread
* [dpdk-dev] [PATCH v3 1/4] bonding: add spinlock to rx and tx queues
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 0/4] bonding: locks Bernard Iremonger
@ 2016-06-12 17:11 ` Bernard Iremonger
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
` (2 subsequent siblings)
3 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
At present it is possible to add and remove slave devices from the
bonding device while traffic is running. This can result in
segmentation faults occurring in the rx and tx burst functions.
To resolve this issue, spinlocks have been added to the rx and tx
queues.
Now, when a slave is added or removed, the rx and tx queue spinlocks
must be held.
Fixes: 2efb58cbab6e ("bond: new link bonding library")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 4 ++++
drivers/net/bonding/rte_eth_bond_private.h | 4 +++-
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 129f04b..2e624bb 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1676,6 +1676,8 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
if (bd_rx_q == NULL)
return -1;
+ rte_spinlock_init(&bd_rx_q->lock);
+
bd_rx_q->queue_id = rx_queue_id;
bd_rx_q->dev_private = dev->data->dev_private;
@@ -1701,6 +1703,8 @@ bond_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
if (bd_tx_q == NULL)
return -1;
+ rte_spinlock_init(&bd_tx_q->lock);
+
bd_tx_q->queue_id = tx_queue_id;
bd_tx_q->dev_private = dev->data->dev_private;
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 8312397..b6abcba 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@
/*-
* BSD LICENSE
*
- * Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -76,6 +76,7 @@ struct bond_rx_queue {
/**< Copy of RX configuration structure for queue */
struct rte_mempool *mb_pool;
/**< Reference to mbuf pool to use for RX queue */
+ rte_spinlock_t lock;
};
struct bond_tx_queue {
@@ -87,6 +88,7 @@ struct bond_tx_queue {
/**< Number of TX descriptors available for the queue */
struct rte_eth_txconf tx_conf;
/**< Copy of TX configuration structure for queue */
+ rte_spinlock_t lock;
};
/** Bonded slave devices structure */
--
2.6.3
^ permalink raw reply [flat|nested] 42+ messages in thread
* [dpdk-dev] [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 0/4] bonding: locks Bernard Iremonger
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 1/4] bonding: add spinlock to rx and tx queues Bernard Iremonger
@ 2016-06-12 17:11 ` Bernard Iremonger
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 4/4] bonding: remove memcpy from " Bernard Iremonger
3 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
When adding or removing a slave device from the bonding device
the rx and tx queue spinlocks should be held.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/bonding/rte_eth_bond_api.c | 52 ++++++++++++++++++++++++++++++++--
1 file changed, 49 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 53df9fe..006c901 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -437,8 +437,10 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
-
+ struct bond_tx_queue *bd_tx_q;
+ struct bond_rx_queue *bd_rx_q;
int retval;
+ uint16_t i;
/* Verify that port id's are valid bonded and slave ports */
if (valid_bonded_port_id(bonded_port_id) != 0)
@@ -448,11 +450,30 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
internals = bonded_eth_dev->data->dev_private;
rte_spinlock_lock(&internals->lock);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_lock(&bd_rx_q->lock);
+ }
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_lock(&bd_tx_q->lock);
+ }
+ }
retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_unlock(&bd_rx_q->lock);
+ }
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ }
+ }
rte_spinlock_unlock(&internals->lock);
-
return retval;
}
@@ -541,7 +562,10 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
+ struct bond_tx_queue *bd_tx_q;
+ struct bond_rx_queue *bd_rx_q;
int retval;
+ uint16_t i;
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
@@ -550,11 +574,33 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
internals = bonded_eth_dev->data->dev_private;
rte_spinlock_lock(&internals->lock);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_lock(&bd_rx_q->lock);
+ }
+
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_lock(&bd_tx_q->lock);
+ }
+ }
retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
- rte_spinlock_unlock(&internals->lock);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ }
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_unlock(&bd_rx_q->lock);
+ }
+ }
+ rte_spinlock_unlock(&internals->lock);
return retval;
}
--
2.6.3
^ permalink raw reply [flat|nested] 42+ messages in thread
* [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 0/4] bonding: locks Bernard Iremonger
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 1/4] bonding: add spinlock to rx and tx queues Bernard Iremonger
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
@ 2016-06-12 17:11 ` Bernard Iremonger
2016-06-13 9:18 ` Bruce Richardson
2016-09-09 11:29 ` Ferruh Yigit
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 4/4] bonding: remove memcpy from " Bernard Iremonger
3 siblings, 2 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Use rte_spinlock_trylock() in the rx/tx burst functions to
take the queue spinlock.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 116 ++++++++++++++++++++++++---------
1 file changed, 84 insertions(+), 32 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2e624bb..93043ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
/*-
* BSD LICENSE
*
- * Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -92,16 +92,22 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
internals = bd_rx_q->dev_private;
-
- for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
- /* Offset of pointer to *bufs increases as packets are received
- * from other slaves */
- num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
- bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
- if (num_rx_slave) {
- num_rx_total += num_rx_slave;
- nb_pkts -= num_rx_slave;
+ if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+ for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
+ /* Offset of pointer to *bufs increases as packets
+ * are received from other slaves
+ */
+ num_rx_slave = rte_eth_rx_burst(
+ internals->active_slaves[i],
+ bd_rx_q->queue_id,
+ bufs + num_rx_total,
+ nb_pkts);
+ if (num_rx_slave) {
+ num_rx_total += num_rx_slave;
+ nb_pkts -= num_rx_slave;
+ }
}
+ rte_spinlock_unlock(&bd_rx_q->lock);
}
return num_rx_total;
@@ -112,14 +118,19 @@ bond_ethdev_rx_burst_active_backup(void *queue, struct rte_mbuf **bufs,
uint16_t nb_pkts)
{
struct bond_dev_private *internals;
+ uint16_t ret = 0;
/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
internals = bd_rx_q->dev_private;
- return rte_eth_rx_burst(internals->current_primary_port,
- bd_rx_q->queue_id, bufs, nb_pkts);
+ if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+ ret = rte_eth_rx_burst(internals->current_primary_port,
+ bd_rx_q->queue_id, bufs, nb_pkts);
+ rte_spinlock_unlock(&bd_rx_q->lock);
+ }
+ return ret;
}
static uint16_t
@@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
uint8_t i, j, k;
rte_eth_macaddr_get(internals->port_id, &bond_mac);
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
+
+ if (rte_spinlock_trylock(&bd_rx_q->lock) == 0)
+ return num_rx_total;
+
slave_count = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * slave_count);
@@ -190,7 +203,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
j++;
}
}
-
+ rte_spinlock_unlock(&bd_rx_q->lock);
return num_rx_total;
}
@@ -406,14 +419,19 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * num_of_slaves);
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
+ }
/* Populate slaves mbuf with which packets are to be sent on it */
for (i = 0; i < nb_pkts; i++) {
@@ -444,7 +462,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
num_tx_total += num_tx_slave;
}
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -454,15 +472,23 @@ bond_ethdev_tx_burst_active_backup(void *queue,
{
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
+ uint16_t ret = 0;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- if (internals->active_slave_count < 1)
- return 0;
+ if (rte_spinlock_trylock(&bd_tx_q->lock)) {
+ if (internals->active_slave_count < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ return 0;
+ }
- return rte_eth_tx_burst(internals->current_primary_port, bd_tx_q->queue_id,
- bufs, nb_pkts);
+ ret = rte_eth_tx_burst(internals->current_primary_port,
+ bd_tx_q->queue_id,
+ bufs, nb_pkts);
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ }
+ return ret;
}
static inline uint16_t
@@ -694,20 +720,25 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_tx_total = 0;
uint8_t i, j;
- uint8_t num_of_slaves = internals->active_slave_count;
+ uint8_t num_of_slaves;
uint8_t slaves[RTE_MAX_ETHPORTS];
struct ether_hdr *ether_hdr;
struct ether_addr primary_slave_addr;
struct ether_addr active_slave_addr;
- if (num_of_slaves < 1)
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
+ num_of_slaves = internals->active_slave_count;
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ return num_tx_total;
+ }
+
memcpy(slaves, internals->tlb_slaves_order,
sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
-
ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
if (nb_pkts > 3) {
@@ -735,7 +766,7 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (num_tx_total == nb_pkts)
break;
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -785,6 +816,9 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
int i, j;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Search tx buffer for ARP packets and forward them to alb */
for (i = 0; i < nb_pkts; i++) {
eth_h = rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
@@ -875,6 +909,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
}
}
+ rte_spinlock_unlock(&bd_tx_q->lock);
/* Send non-ARP packets using tlb policy */
if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
@@ -914,14 +949,19 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * num_of_slaves);
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
+ }
/* Populate slaves mbuf with the packets which are to be sent on it */
for (i = 0; i < nb_pkts; i++) {
@@ -951,7 +991,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
num_tx_total += num_tx_slave;
}
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -984,17 +1024,24 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
+ }
memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
distributing_count = 0;
for (i = 0; i < num_of_slaves; i++) {
- struct port *port = &mode_8023ad_ports[slaves[i]];
+ struct port *port;
+
+ port = &mode_8023ad_ports[internals->active_slaves[i]];
slave_slow_nb_pkts[i] = rte_ring_dequeue_burst(port->tx_ring,
slow_pkts, BOND_MODE_8023AX_SLAVE_TX_PKTS);
@@ -1043,7 +1090,7 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
bufs[j] = slave_bufs[i][num_tx_slave];
}
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -1065,14 +1112,19 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return 0;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * num_of_slaves);
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return 0;
+ }
/* Increment reference count on mbufs */
for (i = 0; i < nb_pkts; i++)
@@ -1093,6 +1145,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
most_successful_tx_slave = i;
}
}
+ rte_spinlock_unlock(&bd_tx_q->lock);
/* if slaves fail to transmit packets from burst, the calling application
* is not expected to know about multiple references to packets so we must
@@ -1819,7 +1872,6 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
bonded_eth_dev->data->dev_link.link_status = link_up;
}
-
return 0;
}
--
2.6.3
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
@ 2016-06-13 9:18 ` Bruce Richardson
2016-06-13 12:28 ` Iremonger, Bernard
2016-09-09 11:29 ` Ferruh Yigit
1 sibling, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2016-06-13 9:18 UTC (permalink / raw)
To: Bernard Iremonger; +Cc: dev, declan.doherty, konstantin.ananyev
On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> Use rte_spinlock_trylock() in the rx/tx burst functions to
> take the queue spinlock.
>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
Why does this particular PMD need spinlocks when doing RX and TX, while other
device types do not? How is adding/removing devices from a bonded device different
to other control operations that can be done on physical PMDs? Is this not
similar to say bringing down or hotplugging out a physical port just before an
RX or TX operation takes place?
For all other PMDs we rely on the app to synchronise control and data plane
operation - why not here?
/Bruce
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-13 9:18 ` Bruce Richardson
@ 2016-06-13 12:28 ` Iremonger, Bernard
2016-06-16 14:32 ` Bruce Richardson
0 siblings, 1 reply; 42+ messages in thread
From: Iremonger, Bernard @ 2016-06-13 12:28 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev, Doherty, Declan, Ananyev, Konstantin
Hi Bruce,
<snip>
> Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> burst functions
>
> On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> > Use rte_spinlock_trylock() in the rx/tx burst functions to take the
> > queue spinlock.
> >
> > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
>
> Why does this particular PMD need spinlocks when doing RX and TX, while
> other device types do not? How is adding/removing devices from a bonded
> device different to other control operations that can be done on physical
> PMDs? Is this not similar to say bringing down or hotplugging out a physical
> port just before an RX or TX operation takes place?
> For all other PMDs we rely on the app to synchronise control and data plane
> operation - why not here?
>
> /Bruce
This issue arose during VM live migration testing.
For VM live migration it is necessary (while traffic is running) to be able to remove a bonded slave device, stop it, close it and detach it.
If a slave device is removed from a bonded device while traffic is running, a segmentation fault may occur in the rx/tx burst function. The spinlock has been added to prevent this from occurring.
The bonding device already uses a spinlock to synchronise between the add and remove functionality and the slave_link_status_change_monitor code.
Previously testpmd did not allow stop, close or detach of a PMD while traffic was running. Testpmd has been modified with the following patchset
http://dpdk.org/dev/patchwork/patch/13472/
It now allows stop, close and detach of a PMD provided it is not forwarding and is not a slave of a bonded PMD.
Regards,
Bernard.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-13 12:28 ` Iremonger, Bernard
@ 2016-06-16 14:32 ` Bruce Richardson
2016-06-16 15:00 ` Thomas Monjalon
0 siblings, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2016-06-16 14:32 UTC (permalink / raw)
To: Iremonger, Bernard; +Cc: dev, Doherty, Declan, Ananyev, Konstantin
On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> Hi Bruce,
>
> <snip>
>
> > Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> > burst functions
> >
> > On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> > > Use rte_spinlock_trylock() in the rx/tx burst functions to take the
> > > queue spinlock.
> > >
> > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> >
> > Why does this particular PMD need spinlocks when doing RX and TX, while
> > other device types do not? How is adding/removing devices from a bonded
> > device different to other control operations that can be done on physical
> > PMDs? Is this not similar to say bringing down or hotplugging out a physical
> > port just before an RX or TX operation takes place?
> > For all other PMDs we rely on the app to synchronise control and data plane
> > operation - why not here?
> >
> > /Bruce
>
> This issue arose during VM live migration testing.
> For VM live migration it is necessary (while traffic is running) to be able to remove a bonded slave device, stop it, close it and detach it.
> If a slave device is removed from a bonded device while traffic is running, a segmentation fault may occur in the rx/tx burst function. The spinlock has been added to prevent this from occurring.
>
> The bonding device already uses a spinlock to synchronise between the add and remove functionality and the slave_link_status_change_monitor code.
>
> Previously testpmd did not allow stop, close or detach of a PMD while traffic was running. Testpmd has been modified with the following patchset
>
> http://dpdk.org/dev/patchwork/patch/13472/
>
> It now allows stop, close and detach of a PMD provided it is not forwarding and is not a slave of a bonded PMD.
>
I will admit to not being fully convinced, but if nobody else has any serious
objections, and since this patch has been reviewed and acked, I'm ok to merge it
in. I'll do so shortly.
/Bruce
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-16 14:32 ` Bruce Richardson
@ 2016-06-16 15:00 ` Thomas Monjalon
2016-06-16 16:41 ` Iremonger, Bernard
0 siblings, 1 reply; 42+ messages in thread
From: Thomas Monjalon @ 2016-06-16 15:00 UTC (permalink / raw)
To: Bruce Richardson, Iremonger, Bernard
Cc: dev, Doherty, Declan, Ananyev, Konstantin
2016-06-16 15:32, Bruce Richardson:
> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > Why does this particular PMD need spinlocks when doing RX and TX, while
> > > other device types do not? How is adding/removing devices from a bonded
> > > device different to other control operations that can be done on physical
> > > PMDs? Is this not similar to say bringing down or hotplugging out a physical
> > > port just before an RX or TX operation takes place?
> > > For all other PMDs we rely on the app to synchronise control and data plane
> > > operation - why not here?
> > >
> > > /Bruce
> >
> > This issue arose during VM live migration testing.
> > For VM live migration it is necessary (while traffic is running) to be able to remove a bonded slave device, stop it, close it and detach it.
> > If a slave device is removed from a bonded device while traffic is running, a segmentation fault may occur in the rx/tx burst function. The spinlock has been added to prevent this from occurring.
> >
> > The bonding device already uses a spinlock to synchronise between the add and remove functionality and the slave_link_status_change_monitor code.
> >
> > Previously testpmd did not allow stop, close or detach of a PMD while traffic was running. Testpmd has been modified with the following patchset
> >
> > http://dpdk.org/dev/patchwork/patch/13472/
> >
> > It now allows stop, close and detach of a PMD provided it is not forwarding and is not a slave of a bonded PMD.
> >
> I will admit to not being fully convinced, but if nobody else has any serious
> objections, and since this patch has been reviewed and acked, I'm ok to merge it
> in. I'll do so shortly.
Please hold on.
Seeing locks introduced in the Rx/Tx path is an alert.
We clearly need a design document to explain where locks can be used
and what are the responsibility of the control plane.
If everybody agrees in this document that DPDK can have some locks
in the fast path, then OK to merge it.
So I would say NACK for 16.07 and maybe postpone to 16.11.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-16 15:00 ` Thomas Monjalon
@ 2016-06-16 16:41 ` Iremonger, Bernard
2016-06-16 18:38 ` Thomas Monjalon
0 siblings, 1 reply; 42+ messages in thread
From: Iremonger, Bernard @ 2016-06-16 16:41 UTC (permalink / raw)
To: Thomas Monjalon, Richardson, Bruce
Cc: dev, Doherty, Declan, Ananyev, Konstantin, Mcnamara, John
Hi Thomas,
<snip>
> 2016-06-16 15:32, Bruce Richardson:
> > On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > > Why does this particular PMD need spinlocks when doing RX and TX,
> > > > while other device types do not? How is adding/removing devices
> > > > from a bonded device different to other control operations that
> > > > can be done on physical PMDs? Is this not similar to say bringing
> > > > down or hotplugging out a physical port just before an RX or TX
> operation takes place?
> > > > For all other PMDs we rely on the app to synchronise control and
> > > > data plane operation - why not here?
> > > >
> > > > /Bruce
> > >
> > > This issue arose during VM live migration testing.
> > > For VM live migration it is necessary (while traffic is running) to be able to
> remove a bonded slave device, stop it, close it and detach it.
> > > If a slave device is removed from a bonded device while traffic is running,
> a segmentation fault may occur in the rx/tx burst function. The spinlock has
> been added to prevent this from occurring.
> > >
> > > The bonding device already uses a spinlock to synchronise between the
> add and remove functionality and the slave_link_status_change_monitor
> code.
> > >
> > > Previously testpmd did not allow, stop, close or detach of PMD while
> > > traffic was running. Testpmd has been modified with the following
> > > patchset
> > >
> > > http://dpdk.org/dev/patchwork/patch/13472/
> > >
> > > It now allows stop, close and detach of a PMD provided in it is not
> forwarding and is not a slave of bonded PMD.
> > >
> > I will admit to not being fully convinced, but if nobody else has any
> > serious objections, and since this patch has been reviewed and acked,
> > I'm ok to merge it in. I'll do so shortly.
>
> Please hold on.
> Seeing locks introduced in the Rx/Tx path is a red flag.
> We clearly need a design document to explain where locks can be used and
> what the responsibilities of the control plane are.
> If everybody agrees in this document that DPDK can have some locks in the
> fast path, then OK to merge it.
>
> So I would say NACK for 16.07 and maybe postpone to 16.11.
Looking at the documentation for the bonding PMD.
http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
In section 10.2 it states the following:
Bonded devices support the dynamical addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
This patch set fixes what appears to be a bug in the bonding PMD.
Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets.
With the patches applied the following drop in performance was observed:
% drop for fwd+io: 0.16%
% drop for fwd+mac: 0.39%
This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07
Regards,
Bernard.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-16 16:41 ` Iremonger, Bernard
@ 2016-06-16 18:38 ` Thomas Monjalon
2017-02-15 18:01 ` Ferruh Yigit
0 siblings, 1 reply; 42+ messages in thread
From: Thomas Monjalon @ 2016-06-16 18:38 UTC (permalink / raw)
To: Iremonger, Bernard
Cc: Richardson, Bruce, dev, Doherty, Declan, Ananyev, Konstantin,
Mcnamara, John
2016-06-16 16:41, Iremonger, Bernard:
> Hi Thomas,
> <snip>
> > 2016-06-16 15:32, Bruce Richardson:
> > > On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > > > Why does this particular PMD need spinlocks when doing RX and TX,
> > > > > while other device types do not? How is adding/removing devices
> > > > > from a bonded device different to other control operations that
> > > > > can be done on physical PMDs? Is this not similar to say bringing
> > > > > down or hotplugging out a physical port just before an RX or TX
> > operation takes place?
> > > > > For all other PMDs we rely on the app to synchronise control and
> > > > > data plane operation - why not here?
> > > > >
> > > > > /Bruce
> > > >
> > > > This issue arose during VM live migration testing.
> > > > For VM live migration it is necessary (while traffic is running) to be able to
> > remove a bonded slave device, stop it, close it and detach it.
> > > > If a slave device is removed from a bonded device while traffic is running,
> > a segmentation fault may occur in the rx/tx burst function. The spinlock has
> > been added to prevent this from occurring.
> > > >
> > > > The bonding device already uses a spinlock to synchronise between the
> > add and remove functionality and the slave_link_status_change_monitor
> > code.
> > > >
> > > > Previously testpmd did not allow stop, close or detach of a PMD while
> > > > traffic was running. Testpmd has been modified with the following
> > > > patchset
> > > >
> > > > http://dpdk.org/dev/patchwork/patch/13472/
> > > >
> > > > It now allows stop, close and detach of a PMD provided it is not
> > forwarding and is not a slave of a bonded PMD.
> > > >
> > > I will admit to not being fully convinced, but if nobody else has any
> > > serious objections, and since this patch has been reviewed and acked,
> > > I'm ok to merge it in. I'll do so shortly.
> >
> > Please hold on.
> > Seeing locks introduced in the Rx/Tx path is a red flag.
> > We clearly need a design document to explain where locks can be used and
> > what the responsibilities of the control plane are.
> > If everybody agrees in this document that DPDK can have some locks in the
> > fast path, then OK to merge it.
> >
> > So I would say NACK for 16.07 and maybe postpone to 16.11.
>
> Looking at the documentation for the bonding PMD.
>
> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
>
> In section 10.2 it states the following:
>
> Bonded devices support the dynamical addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
>
> If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
>
> This patch set fixes what appears to be a bug in the bonding PMD.
It can be fixed by removing this statement in the doc.
One of the design principles of DPDK is to avoid locks.
> Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets.
>
> With the patches applied the following drop in performance was observed:
>
> % drop for fwd+io: 0.16%
> % drop for fwd+mac: 0.39%
>
> This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07
I understand your point of view and I gave mine.
Now we need more opinions from others.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-16 18:38 ` Thomas Monjalon
@ 2017-02-15 18:01 ` Ferruh Yigit
2017-02-16 9:13 ` Bruce Richardson
0 siblings, 1 reply; 42+ messages in thread
From: Ferruh Yigit @ 2017-02-15 18:01 UTC (permalink / raw)
To: Thomas Monjalon, Bernard Iremonger, Bruce Richardson, Ananyev,
Konstantin, Declan Doherty
Cc: DPDK
On 6/16/2016 7:38 PM, thomas.monjalon@6wind.com (Thomas Monjalon) wrote:
> 2016-06-16 16:41, Iremonger, Bernard:
>> Hi Thomas,
>> <snip>
>>> 2016-06-16 15:32, Bruce Richardson:
>>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
>>>>>> Why does this particular PMD need spinlocks when doing RX and TX,
>>>>>> while other device types do not? How is adding/removing devices
>>>>>> from a bonded device different to other control operations that
>>>>>> can be done on physical PMDs? Is this not similar to say bringing
>>>>>> down or hotplugging out a physical port just before an RX or TX
>>> operation takes place?
>>>>>> For all other PMDs we rely on the app to synchronise control and
>>>>>> data plane operation - why not here?
>>>>>>
>>>>>> /Bruce
>>>>>
>>>>> This issue arose during VM live migration testing.
>>>>> For VM live migration it is necessary (while traffic is running) to be able to
>>> remove a bonded slave device, stop it, close it and detach it.
>>>>> If a slave device is removed from a bonded device while traffic is running,
>>> a segmentation fault may occur in the rx/tx burst function. The spinlock has
>>> been added to prevent this from occurring.
>>>>>
>>>>> The bonding device already uses a spinlock to synchronise between the
>>> add and remove functionality and the slave_link_status_change_monitor
>>> code.
>>>>>
>>>>> Previously testpmd did not allow stop, close or detach of a PMD while
>>>>> traffic was running. Testpmd has been modified with the following
>>>>> patchset
>>>>>
>>>>> http://dpdk.org/dev/patchwork/patch/13472/
>>>>>
>>>>> It now allows stop, close and detach of a PMD provided it is not
>>> forwarding and is not a slave of a bonded PMD.
>>>>>
>>>> I will admit to not being fully convinced, but if nobody else has any
>>>> serious objections, and since this patch has been reviewed and acked,
>>>> I'm ok to merge it in. I'll do so shortly.
>>>
>>> Please hold on.
>>> Seeing locks introduced in the Rx/Tx path is a red flag.
>>> We clearly need a design document to explain where locks can be used and
>>> what the responsibilities of the control plane are.
>>> If everybody agrees in this document that DPDK can have some locks in the
>>> fast path, then OK to merge it.
>>>
>>> So I would say NACK for 16.07 and maybe postpone to 16.11.
>>
>> Looking at the documentation for the bonding PMD.
>>
>> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
>>
>> In section 10.2 it states the following:
>>
>> Bonded devices support the dynamical addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
>>
>> If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
>>
>> This patch set fixes what appears to be a bug in the bonding PMD.
>
> It can be fixed by removing this statement in the doc.
>
> One of the design principles of DPDK is to avoid locks.
>
>> Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets.
>>
>> With the patches applied the following drop in performance was observed:
>>
>> % drop for fwd+io: 0.16%
>> % drop for fwd+mac: 0.39%
>>
>> This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07
>
> I understand your point of view and I gave mine.
> Now we need more opinions from others.
>
Hi,
These patches have been sitting in patchwork for a long time. The discussion
never concluded and the patches kept being deferred each release.
I think we should make a decision about them:
1- We can merge them in this release, they are fixing a valid problem,
and patches are already acked.
2- We can reject them; if not having them for more than six months has not
caused a problem, perhaps they are not really that required. And if
somebody needs them in the future, we can resurrect them from patchwork.
I vote for option 2, any comments?
Thanks,
ferruh
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2017-02-15 18:01 ` Ferruh Yigit
@ 2017-02-16 9:13 ` Bruce Richardson
2017-02-16 11:39 ` Iremonger, Bernard
0 siblings, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2017-02-16 9:13 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Thomas Monjalon, Bernard Iremonger, Ananyev, Konstantin,
Declan Doherty, DPDK
On Wed, Feb 15, 2017 at 06:01:45PM +0000, Ferruh Yigit wrote:
> On 6/16/2016 7:38 PM, thomas.monjalon@6wind.com (Thomas Monjalon) wrote:
> > 2016-06-16 16:41, Iremonger, Bernard:
> >> Hi Thomas,
> >> <snip>
> >>> 2016-06-16 15:32, Bruce Richardson:
> >>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> >>>>>> Why does this particular PMD need spinlocks when doing RX and TX,
> >>>>>> while other device types do not? How is adding/removing devices
> >>>>>> from a bonded device different to other control operations that
> >>>>>> can be done on physical PMDs? Is this not similar to say bringing
> >>>>>> down or hotplugging out a physical port just before an RX or TX
> >>> operation takes place?
> >>>>>> For all other PMDs we rely on the app to synchronise control and
> >>>>>> data plane operation - why not here?
> >>>>>>
> >>>>>> /Bruce
> >>>>>
> >>>>> This issue arose during VM live migration testing.
> >>>>> For VM live migration it is necessary (while traffic is running) to be able to
> >>> remove a bonded slave device, stop it, close it and detach it.
> >>>>> If a slave device is removed from a bonded device while traffic is running,
> >>> a segmentation fault may occur in the rx/tx burst function. The spinlock has
> >>> been added to prevent this from occurring.
> >>>>>
> >>>>> The bonding device already uses a spinlock to synchronise between the
> >>> add and remove functionality and the slave_link_status_change_monitor
> >>> code.
> >>>>>
> >>>>> Previously testpmd did not allow stop, close or detach of a PMD while
> >>>>> traffic was running. Testpmd has been modified with the following
> >>>>> patchset
> >>>>>
> >>>>> http://dpdk.org/dev/patchwork/patch/13472/
> >>>>>
> >>>>> It now allows stop, close and detach of a PMD provided it is not
> >>> forwarding and is not a slave of a bonded PMD.
> >>>>>
> >>>> I will admit to not being fully convinced, but if nobody else has any
> >>>> serious objections, and since this patch has been reviewed and acked,
> >>>> I'm ok to merge it in. I'll do so shortly.
> >>>
> >>> Please hold on.
> >>> Seeing locks introduced in the Rx/Tx path is a red flag.
> >>> We clearly need a design document to explain where locks can be used and
> >>> what the responsibilities of the control plane are.
> >>> If everybody agrees in this document that DPDK can have some locks in the
> >>> fast path, then OK to merge it.
> >>>
> >>> So I would say NACK for 16.07 and maybe postpone to 16.11.
> >>
> >> Looking at the documentation for the bonding PMD.
> >>
> >> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
> >>
> >> In section 10.2 it states the following:
> >>
> >> Bonded devices support the dynamical addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
> >>
> >> If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
> >>
> >> This patch set fixes what appears to be a bug in the bonding PMD.
> >
> > It can be fixed by removing this statement in the doc.
> >
> > One of the design principles of DPDK is to avoid locks.
> >
> >> Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets.
> >>
> >> With the patches applied the following drop in performance was observed:
> >>
> >> % drop for fwd+io: 0.16%
> >> % drop for fwd+mac: 0.39%
> >>
> >> This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07
> >
> > I understand your point of view and I gave mine.
> > Now we need more opinions from others.
> >
>
> Hi,
>
> These patches have been sitting in patchwork for a long time. The discussion
> never concluded and the patches kept being deferred each release.
>
> I think we should make a decision about them:
>
> 1- We can merge them in this release, they are fixing a valid problem,
> and patches are already acked.
>
> 2- We can reject them; if not having them for more than six months has not
> caused a problem, perhaps they are not really that required. And if
> somebody needs them in the future, we can resurrect them from patchwork.
>
> I vote for option 2, any comments?
>
+1 on option 2. They are obviously not badly needed if nobody has asked
for them for over six months.
/Bruce
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2017-02-16 9:13 ` Bruce Richardson
@ 2017-02-16 11:39 ` Iremonger, Bernard
2017-02-20 11:15 ` Ferruh Yigit
0 siblings, 1 reply; 42+ messages in thread
From: Iremonger, Bernard @ 2017-02-16 11:39 UTC (permalink / raw)
To: Richardson, Bruce, Yigit, Ferruh
Cc: Thomas Monjalon, Ananyev, Konstantin, Doherty, Declan, DPDK
Hi Ferruh,
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, February 16, 2017 9:14 AM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Doherty, Declan
> <declan.doherty@intel.com>; DPDK <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> burst functions
>
> On Wed, Feb 15, 2017 at 06:01:45PM +0000, Ferruh Yigit wrote:
> > On 6/16/2016 7:38 PM, thomas.monjalon@6wind.com (Thomas Monjalon)
> wrote:
> > > 2016-06-16 16:41, Iremonger, Bernard:
> > >> Hi Thomas,
> > >> <snip>
> > >>> 2016-06-16 15:32, Bruce Richardson:
> > >>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard
> wrote:
> > >>>>>> Why does this particular PMD need spinlocks when doing RX and
> > >>>>>> TX, while other device types do not? How is adding/removing
> > >>>>>> devices from a bonded device different to other control
> > >>>>>> operations that can be done on physical PMDs? Is this not
> > >>>>>> similar to say bringing down or hotplugging out a physical port
> > >>>>>> just before an RX or TX
> > >>> operation takes place?
> > >>>>>> For all other PMDs we rely on the app to synchronise control
> > >>>>>> and data plane operation - why not here?
> > >>>>>>
> > >>>>>> /Bruce
> > >>>>>
> > >>>>> This issue arose during VM live migration testing.
> > >>>>> For VM live migration it is necessary (while traffic is running)
> > >>>>> to be able to
> > >>> remove a bonded slave device, stop it, close it and detach it.
> > >>>>> If a slave device is removed from a bonded device while traffic
> > >>>>> is running,
> > >>> a segmentation fault may occur in the rx/tx burst function. The
> > >>> spinlock has been added to prevent this from occurring.
> > >>>>>
> > >>>>> The bonding device already uses a spinlock to synchronise
> > >>>>> between the
> > >>> add and remove functionality and the
> > >>> slave_link_status_change_monitor code.
> > >>>>>
> > >>>>> Previously testpmd did not allow stop, close or detach of a PMD
> > >>>>> while traffic was running. Testpmd has been modified with the
> > >>>>> following patchset
> > >>>>>
> > >>>>> http://dpdk.org/dev/patchwork/patch/13472/
> > >>>>>
> > >>>>> It now allows stop, close and detach of a PMD provided it is
> > >>>>> not
> > >>> forwarding and is not a slave of a bonded PMD.
> > >>>>>
> > >>>> I will admit to not being fully convinced, but if nobody else has
> > >>>> any serious objections, and since this patch has been reviewed
> > >>>> and acked, I'm ok to merge it in. I'll do so shortly.
> > >>>
> > >>> Please hold on.
> > >>> Seeing locks introduced in the Rx/Tx path is a red flag.
> > >>> We clearly need a design document to explain where locks can be
> > >>> used and what the responsibilities of the control plane are.
> > >>> If everybody agrees in this document that DPDK can have some locks
> > >>> in the fast path, then OK to merge it.
> > >>>
> > >>> So I would say NACK for 16.07 and maybe postpone to 16.11.
> > >>
> > >> Looking at the documentation for the bonding PMD.
> > >>
> > >>
> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_li
> > >> b.html
> > >>
> > >> In section 10.2 it states the following:
> > >>
> > >> Bonded devices support the dynamical addition and removal of slave
> devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove
> APIs.
> > >>
> > >> If a slave device is added or removed while traffic is running, there is the
> possibility of a segmentation fault in the rx/tx burst functions. This is most
> likely to occur in the round robin bonding mode.
> > >>
> > >> This patch set fixes what appears to be a bug in the bonding PMD.
> > >
> > > It can be fixed by removing this statement in the doc.
> > >
> > > One of the design principles of DPDK is to avoid locks.
> > >
> > >> Performance measurements have been made with this patch set
> applied and without the patches applied using 64 byte packets.
> > >>
> > >> With the patches applied the following drop in performance was
> observed:
> > >>
> > >> % drop for fwd+io: 0.16%
> > >> % drop for fwd+mac: 0.39%
> > >>
> > >> This patch set has been reviewed and ack'ed, so I think it should
> > >> be applied in 16.07
> > >
> > > I understand your point of view and I gave mine.
> > > Now we need more opinions from others.
> > >
> >
> > Hi,
> >
> > These patches have been sitting in patchwork for a long time. The discussion
> > never concluded and the patches kept being deferred each release.
> >
> > I think we should make a decision about them:
> >
> > 1- We can merge them in this release, they are fixing a valid problem,
> > and patches are already acked.
> >
> > 2- We can reject them; if not having them for more than six months has not
> > caused a problem, perhaps they are not really that required. And if
> > somebody needs them in the future, we can resurrect them from
> patchwork.
> >
> > I vote for option 2, any comments?
> >
> +1 on option 2. They are obviously not badly needed if nobody has asked
> for them for over six months.
>
> /Bruce
I am ok with option 2, provided they can be retrieved if needed.
Regards,
Bernard.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2017-02-16 11:39 ` Iremonger, Bernard
@ 2017-02-20 11:15 ` Ferruh Yigit
0 siblings, 0 replies; 42+ messages in thread
From: Ferruh Yigit @ 2017-02-20 11:15 UTC (permalink / raw)
To: Iremonger, Bernard, Richardson, Bruce
Cc: Thomas Monjalon, Ananyev, Konstantin, Doherty, Declan, DPDK
On 2/16/2017 11:39 AM, Iremonger, Bernard wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Richardson, Bruce
>> Sent: Thursday, February 16, 2017 9:14 AM
>> To: Yigit, Ferruh <ferruh.yigit@intel.com>
>> Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Iremonger, Bernard
>> <bernard.iremonger@intel.com>; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; Doherty, Declan
>> <declan.doherty@intel.com>; DPDK <dev@dpdk.org>
>> Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
>> burst functions
>>
>> On Wed, Feb 15, 2017 at 06:01:45PM +0000, Ferruh Yigit wrote:
>>> On 6/16/2016 7:38 PM, thomas.monjalon@6wind.com (Thomas Monjalon)
>> wrote:
>>>> 2016-06-16 16:41, Iremonger, Bernard:
>>>>> Hi Thomas,
>>>>> <snip>
>>>>>> 2016-06-16 15:32, Bruce Richardson:
>>>>>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard
>> wrote:
>>>>>>>>> Why does this particular PMD need spinlocks when doing RX and
>>>>>>>>> TX, while other device types do not? How is adding/removing
>>>>>>>>> devices from a bonded device different to other control
>>>>>>>>> operations that can be done on physical PMDs? Is this not
>>>>>>>>> similar to say bringing down or hotplugging out a physical port
>>>>>>>>> just before an RX or TX
>>>>>> operation takes place?
>>>>>>>>> For all other PMDs we rely on the app to synchronise control
>>>>>>>>> and data plane operation - why not here?
>>>>>>>>>
>>>>>>>>> /Bruce
>>>>>>>>
>>>>>>>> This issue arose during VM live migration testing.
>>>>>>>> For VM live migration it is necessary (while traffic is running)
>>>>>>>> to be able to
>>>>>> remove a bonded slave device, stop it, close it and detach it.
>>>>>>>> If a slave device is removed from a bonded device while traffic
>>>>>>>> is running,
>>>>>> a segmentation fault may occur in the rx/tx burst function. The
>>>>>> spinlock has been added to prevent this from occurring.
>>>>>>>>
>>>>>>>> The bonding device already uses a spinlock to synchronise
>>>>>>>> between the
>>>>>> add and remove functionality and the
>>>>>> slave_link_status_change_monitor code.
>>>>>>>>
>>>>>>>> Previously testpmd did not allow stop, close or detach of a PMD
>>>>>>>> while traffic was running. Testpmd has been modified with the
>>>>>>>> following patchset
>>>>>>>>
>>>>>>>> http://dpdk.org/dev/patchwork/patch/13472/
>>>>>>>>
>>>>>>>> It now allows stop, close and detach of a PMD provided it is
>>>>>>>> not
>>>>>> forwarding and is not a slave of a bonded PMD.
>>>>>>>>
>>>>>>> I will admit to not being fully convinced, but if nobody else has
>>>>>>> any serious objections, and since this patch has been reviewed
>>>>>>> and acked, I'm ok to merge it in. I'll do so shortly.
>>>>>>
>>>>>> Please hold on.
>>>>>> Seeing locks introduced in the Rx/Tx path is a red flag.
>>>>>> We clearly need a design document to explain where locks can be
>>>>>> used and what the responsibilities of the control plane are.
>>>>>> If everybody agrees in this document that DPDK can have some locks
>>>>>> in the fast path, then OK to merge it.
>>>>>>
>>>>>> So I would say NACK for 16.07 and maybe postpone to 16.11.
>>>>>
>>>>> Looking at the documentation for the bonding PMD.
>>>>>
>>>>>
>> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_li
>>>>> b.html
>>>>>
>>>>> In section 10.2 it states the following:
>>>>>
>>>>> Bonded devices support the dynamical addition and removal of slave
>> devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove
>> APIs.
>>>>>
>>>>> If a slave device is added or removed while traffic is running, there is the
>> possibility of a segmentation fault in the rx/tx burst functions. This is most
>> likely to occur in the round robin bonding mode.
>>>>>
>>>>> This patch set fixes what appears to be a bug in the bonding PMD.
>>>>
>>>> It can be fixed by removing this statement in the doc.
>>>>
>>>> One of the design principles of DPDK is to avoid locks.
>>>>
>>>>> Performance measurements have been made with this patch set
>> applied and without the patches applied using 64 byte packets.
>>>>>
>>>>> With the patches applied the following drop in performance was
>> observed:
>>>>>
>>>>> % drop for fwd+io: 0.16%
>>>>> % drop for fwd+mac: 0.39%
>>>>>
>>>>> This patch set has been reviewed and ack'ed, so I think it should
>>>>> be applied in 16.07
>>>>
>>>> I understand your point of view and I gave mine.
>>>> Now we need more opinions from others.
>>>>
>>>
>>> Hi,
>>>
>>> These patches have been sitting in patchwork for a long time. The discussion
>>> never concluded and the patches kept being deferred each release.
>>>
>>> I think we should make a decision about them:
>>>
>>> 1- We can merge them in this release, they are fixing a valid problem,
>>> and patches are already acked.
>>>
>>> 2- We can reject them; if not having them for more than six months has not
>>> caused a problem, perhaps they are not really that required. And if
>>> somebody needs them in the future, we can resurrect them from
>> patchwork.
>>>
>>> I vote for option 2, any comments?
>>>
>> +1 on option 2. They are obviously not badly needed if nobody has asked
>> for them for over six months.
>>
>> /Bruce
>
> I am ok with option 2, provided they can be retrieved if needed.
Patches marked as rejected in patchwork.
For future reference, patchwork ids:
http://dpdk.org/dev/patchwork/patch/13482/
http://dpdk.org/dev/patchwork/patch/13483/
http://dpdk.org/dev/patchwork/patch/13484/
http://dpdk.org/dev/patchwork/patch/13485/
Thanks,
ferruh
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
2016-06-13 9:18 ` Bruce Richardson
@ 2016-09-09 11:29 ` Ferruh Yigit
1 sibling, 0 replies; 42+ messages in thread
From: Ferruh Yigit @ 2016-09-09 11:29 UTC (permalink / raw)
To: Bernard Iremonger, dev; +Cc: Doherty, Declan, Ananyev, Konstantin
Hi Bernard,
This is an old patch; sorry for commenting after so long.
On 6/12/2016 6:11 PM, Bernard Iremonger wrote:
> Use rte_spinlock_trylock() in the rx/tx burst functions to
> take the queue spinlock.
>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
...
> static uint16_t
> @@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
> uint8_t i, j, k;
>
> rte_eth_macaddr_get(internals->port_id, &bond_mac);
> - /* Copy slave list to protect against slave up/down changes during tx
> - * bursting */
This piece,
...
> for (i = 0; i < num_of_slaves; i++) {
> - struct port *port = &mode_8023ad_ports[slaves[i]];
> + struct port *port;
> +
> + port = &mode_8023ad_ports[internals->active_slaves[i]];
And this piece seems like it needs to be moved into the next patch in the patchset.
...
And if you send a new version of the patchset, there are a few
warnings from check-git-log.sh:
Wrong headline prefix:
bonding: remove memcpy from burst functions
bonding: take queue spinlock in rx/tx burst functions
bonding: grab queue spinlocks in slave add and remove
bonding: add spinlock to rx and tx queues
Wrong headline lowercase:
bonding: take queue spinlock in rx/tx burst functions
bonding: add spinlock to rx and tx queues
Thanks,
ferruh
^ permalink raw reply [flat|nested] 42+ messages in thread
* [dpdk-dev] [PATCH v3 4/4] bonding: remove memcpy from burst functions
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 0/4] bonding: locks Bernard Iremonger
` (2 preceding siblings ...)
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
@ 2016-06-12 17:11 ` Bernard Iremonger
2016-09-11 12:39 ` Yuanhan Liu
3 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Now that the queue spinlocks have been added to the rx and
tx burst functions, the memcpy of the slave data is no
longer necessary, so it has been removed.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 71 ++++++++++++++--------------------
1 file changed, 28 insertions(+), 43 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 93043ef..ce46450 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -146,7 +146,6 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
uint16_t num_rx_total = 0; /* Total number of received packets */
- uint8_t slaves[RTE_MAX_ETHPORTS];
uint8_t slave_count;
uint8_t collecting; /* current slave collecting status */
@@ -159,15 +158,16 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
return num_rx_total;
slave_count = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * slave_count);
for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
j = num_rx_total;
- collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+ collecting = ACTOR_STATE(
+ &mode_8023ad_ports[internals->active_slaves[i]],
+ COLLECTING);
/* Read packets from this slave */
- num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+ num_rx_total += rte_eth_rx_burst(internals->active_slaves[i],
+ bd_rx_q->queue_id,
&bufs[num_rx_total], nb_pkts - num_rx_total);
for (k = j; k < 2 && k < num_rx_total; k++)
@@ -188,7 +188,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {
if (hdr->ether_type == ether_type_slow_be) {
- bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
+ bond_mode_8023ad_handle_slow_pkt(
+ internals,
+ internals->active_slaves[i],
bufs[j]);
} else
rte_pktmbuf_free(bufs[j]);
@@ -409,8 +411,6 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
-
uint16_t num_tx_total = 0, num_tx_slave;
static int slave_idx = 0;
@@ -422,12 +422,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
-
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
@@ -446,7 +441,9 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
/* Send packet burst on each slave device */
for (i = 0; i < num_of_slaves; i++) {
if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_slave = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
slave_bufs[i], slave_nb_pkts[i]);
/* if tx burst fails move packets to end of bufs */
@@ -721,7 +718,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint8_t i, j;
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
struct ether_hdr *ether_hdr;
struct ether_addr primary_slave_addr;
@@ -736,9 +732,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
return num_tx_total;
}
- memcpy(slaves, internals->tlb_slaves_order,
- sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
-
ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
if (nb_pkts > 3) {
@@ -747,7 +740,8 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
}
for (i = 0; i < num_of_slaves; i++) {
- rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+ rte_eth_macaddr_get(internals->tlb_slaves_order[i],
+ &active_slave_addr);
for (j = num_tx_total; j < nb_pkts; j++) {
if (j + 3 < nb_pkts)
rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -760,8 +754,11 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
}
- num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- bufs + num_tx_total, nb_pkts - num_tx_total);
+ num_tx_total += rte_eth_tx_burst(
+ internals->tlb_slaves_order[i],
+ bd_tx_q->queue_id,
+ bufs + num_tx_total,
+ nb_pkts - num_tx_total);
if (num_tx_total == nb_pkts)
break;
@@ -937,7 +934,6 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q;
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
uint16_t num_tx_total = 0, num_tx_slave = 0, tx_fail_total = 0;
@@ -952,12 +948,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
-
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
@@ -975,7 +966,9 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
/* Send packet burst on each slave device */
for (i = 0; i < num_of_slaves; i++) {
if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_slave = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
slave_bufs[i], slave_nb_pkts[i]);
/* if tx burst fails move packets to end of bufs */
@@ -1003,7 +996,6 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q;
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
/* positions in slaves, not ID */
uint8_t distributing_offsets[RTE_MAX_ETHPORTS];
uint8_t distributing_count;
@@ -1027,16 +1019,12 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
- memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
-
distributing_count = 0;
for (i = 0; i < num_of_slaves; i++) {
struct port *port;
@@ -1073,7 +1061,9 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
if (slave_nb_pkts[i] == 0)
continue;
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_slave = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
slave_bufs[i], slave_nb_pkts[i]);
/* If tx burst fails drop slow packets */
@@ -1102,8 +1092,6 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q;
uint8_t tx_failed_flag = 0, num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
-
uint16_t max_nb_of_tx_pkts = 0;
int slave_tx_total[RTE_MAX_ETHPORTS];
@@ -1115,12 +1103,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return 0;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
-
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return 0;
@@ -1132,8 +1115,10 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
/* Transmit burst on each active slave */
for (i = 0; i < num_of_slaves; i++) {
- slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- bufs, nb_pkts);
+ slave_tx_total[i] = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
+ bufs, nb_pkts);
if (unlikely(slave_tx_total[i] < nb_pkts))
tx_failed_flag = 1;
--
2.6.3
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/4] bonding: remove memcpy from burst functions
2016-06-12 17:11 ` [dpdk-dev] [PATCH v3 4/4] bonding: remove memcpy from " Bernard Iremonger
@ 2016-09-11 12:39 ` Yuanhan Liu
0 siblings, 0 replies; 42+ messages in thread
From: Yuanhan Liu @ 2016-09-11 12:39 UTC (permalink / raw)
To: Bernard Iremonger
Cc: dev, declan.doherty, konstantin.ananyev, Thomas Monjalon
On Sun, Jun 12, 2016 at 06:11:29PM +0100, Bernard Iremonger wrote:
> Now that the queue spinlocks have been added to the rx and
> tx burst functions, the memcpy of the slave data is no
> longer necessary, so it has been removed.
>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Hi,
FYI, my testrobot caught some errors when this patch is applied.
--yliu
---
x86_64-native-linuxapp-clang: config-all-yes
============================================
grep: /lib/modules/4.6.0/build/include/generated/utsrelease.h: No such file or directory
grep: /lib/modules/4.6.0/build/include/generated/utsrelease.h: No such file or directory
/root/dpdk/drivers/net/bonding/rte_eth_bond_pmd.c:753:41: error: use of undeclared identifier 'slaves'
mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
^
1 error generated.
make[6]: *** [rte_eth_bond_pmd.o] Error 1
make[5]: *** [bonding] Error 2
make[5]: *** Waiting for unfinished jobs....
make[4]: *** [net] Error 2
make[3]: *** [drivers] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2
error: build failed
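
The undeclared `slaves` identifier comes from a debug-only call site that the `config-all-yes` build compiles in: patch 4/4 deletes the local `slaves` array from the TLB burst function, but the `mode6_debug()` call guarded by the ALB debug `#ifdef` still references it. A minimal follow-up fix might look like the sketch below (an illustration of the likely change, not a patch from this thread), pointing the debug call at the same slave-order array the rest of the loop now uses:

```diff
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
-			mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+			mode6_debug("TX IPv4:", ether_hdr,
+					internals->tlb_slaves_order[i],
+					&burstnumberTX);
```

Default builds leave the ALB debug option off and never compile this line, which would explain why the error only surfaced in the all-options-enabled robot build.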
* [dpdk-dev] [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bernard Iremonger
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
@ 2016-05-26 16:38 ` Bernard Iremonger
2016-06-10 18:14 ` Ananyev, Konstantin
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
` (4 subsequent siblings)
6 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
drivers/net/bonding/rte_eth_bond_api.c | 52 ++++++++++++++++++++++++++++++++--
1 file changed, 49 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 53df9fe..006c901 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -437,8 +437,10 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
-
+ struct bond_tx_queue *bd_tx_q;
+ struct bond_rx_queue *bd_rx_q;
int retval;
+ uint16_t i;
/* Verify that port id's are valid bonded and slave ports */
if (valid_bonded_port_id(bonded_port_id) != 0)
@@ -448,11 +450,30 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
internals = bonded_eth_dev->data->dev_private;
rte_spinlock_lock(&internals->lock);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_lock(&bd_rx_q->lock);
+ }
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_lock(&bd_tx_q->lock);
+ }
+ }
retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_unlock(&bd_rx_q->lock);
+ }
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ }
+ }
rte_spinlock_unlock(&internals->lock);
-
return retval;
}
@@ -541,7 +562,10 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
+ struct bond_tx_queue *bd_tx_q;
+ struct bond_rx_queue *bd_rx_q;
int retval;
+ uint16_t i;
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
@@ -550,11 +574,33 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
internals = bonded_eth_dev->data->dev_private;
rte_spinlock_lock(&internals->lock);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_lock(&bd_rx_q->lock);
+ }
+
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_lock(&bd_tx_q->lock);
+ }
+ }
retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
- rte_spinlock_unlock(&internals->lock);
+ if (bonded_eth_dev->data->dev_started) {
+ for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+ bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ }
+ for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+ bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+ rte_spinlock_unlock(&bd_rx_q->lock);
+ }
+ }
+ rte_spinlock_unlock(&internals->lock);
return retval;
}
--
2.6.3
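
The add/remove paths above always acquire locks in one global order — the device-level `internals->lock` first, then every rx queue lock, then every tx queue lock — and release them in reverse. Acquiring in a single fixed order is what prevents deadlock against any other control-path writer. The pattern can be sketched in plain C, with C11 atomics standing in for `rte_spinlock_t`; the struct layout, `slave_add()` name, and `slave_count` field are illustrative stand-ins, not the driver's real types:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NB_RX_Q 2
#define NB_TX_Q 2

/* Minimal stand-in for rte_spinlock_t; illustration only. */
typedef struct { atomic_flag locked; } spin_t;

static void spin_lock(spin_t *s)    { while (atomic_flag_test_and_set(&s->locked)) ; }
static void spin_unlock(spin_t *s)  { atomic_flag_clear(&s->locked); }
static bool spin_trylock(spin_t *s) { return !atomic_flag_test_and_set(&s->locked); }

/* Hypothetical device layout; the real bond_dev_private differs. */
struct bond_dev {
	spin_t lock;              /* plays the role of internals->lock */
	spin_t rxq_lock[NB_RX_Q]; /* one lock per rx queue             */
	spin_t txq_lock[NB_TX_Q]; /* one lock per tx queue             */
	int slave_count;
};

/* Control-path pattern from the patch: device lock first, then all rx
 * queue locks, then all tx queue locks; mutate; release in reverse. */
static void slave_add(struct bond_dev *d)
{
	int i;

	spin_lock(&d->lock);
	for (i = 0; i < NB_RX_Q; i++) spin_lock(&d->rxq_lock[i]);
	for (i = 0; i < NB_TX_Q; i++) spin_lock(&d->txq_lock[i]);

	d->slave_count++; /* stands in for __eth_bond_slave_add_lock_free() */

	for (i = NB_TX_Q - 1; i >= 0; i--) spin_unlock(&d->txq_lock[i]);
	for (i = NB_RX_Q - 1; i >= 0; i--) spin_unlock(&d->rxq_lock[i]);
	spin_unlock(&d->lock);
}
```

With spinlocks, the reverse release order is not strictly required for correctness; the deadlock-freedom comes from every writer acquiring in the same order.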
* [dpdk-dev] [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bernard Iremonger
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
@ 2016-05-26 16:38 ` Bernard Iremonger
2016-06-10 18:14 ` Ananyev, Konstantin
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 4/6] bonding: add spinlock to stop function Bernard Iremonger
` (3 subsequent siblings)
6 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 116 ++++++++++++++++++++++++---------
1 file changed, 84 insertions(+), 32 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2e624bb..93043ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
/*-
* BSD LICENSE
*
- * Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ * Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -92,16 +92,22 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
internals = bd_rx_q->dev_private;
-
- for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
- /* Offset of pointer to *bufs increases as packets are received
- * from other slaves */
- num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
- bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
- if (num_rx_slave) {
- num_rx_total += num_rx_slave;
- nb_pkts -= num_rx_slave;
+ if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+ for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
+ /* Offset of pointer to *bufs increases as packets
+ * are received from other slaves
+ */
+ num_rx_slave = rte_eth_rx_burst(
+ internals->active_slaves[i],
+ bd_rx_q->queue_id,
+ bufs + num_rx_total,
+ nb_pkts);
+ if (num_rx_slave) {
+ num_rx_total += num_rx_slave;
+ nb_pkts -= num_rx_slave;
+ }
}
+ rte_spinlock_unlock(&bd_rx_q->lock);
}
return num_rx_total;
@@ -112,14 +118,19 @@ bond_ethdev_rx_burst_active_backup(void *queue, struct rte_mbuf **bufs,
uint16_t nb_pkts)
{
struct bond_dev_private *internals;
+ uint16_t ret = 0;
/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
internals = bd_rx_q->dev_private;
- return rte_eth_rx_burst(internals->current_primary_port,
- bd_rx_q->queue_id, bufs, nb_pkts);
+ if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+ ret = rte_eth_rx_burst(internals->current_primary_port,
+ bd_rx_q->queue_id, bufs, nb_pkts);
+ rte_spinlock_unlock(&bd_rx_q->lock);
+ }
+ return ret;
}
static uint16_t
@@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
uint8_t i, j, k;
rte_eth_macaddr_get(internals->port_id, &bond_mac);
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
+
+ if (rte_spinlock_trylock(&bd_rx_q->lock) == 0)
+ return num_rx_total;
+
slave_count = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * slave_count);
@@ -190,7 +203,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
j++;
}
}
-
+ rte_spinlock_unlock(&bd_rx_q->lock);
return num_rx_total;
}
@@ -406,14 +419,19 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * num_of_slaves);
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
+ }
/* Populate slaves mbuf with which packets are to be sent on it */
for (i = 0; i < nb_pkts; i++) {
@@ -444,7 +462,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
num_tx_total += num_tx_slave;
}
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -454,15 +472,23 @@ bond_ethdev_tx_burst_active_backup(void *queue,
{
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
+ uint16_t ret = 0;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- if (internals->active_slave_count < 1)
- return 0;
+ if (rte_spinlock_trylock(&bd_tx_q->lock)) {
+ if (internals->active_slave_count < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ return 0;
+ }
- return rte_eth_tx_burst(internals->current_primary_port, bd_tx_q->queue_id,
- bufs, nb_pkts);
+ ret = rte_eth_tx_burst(internals->current_primary_port,
+ bd_tx_q->queue_id,
+ bufs, nb_pkts);
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ }
+ return ret;
}
static inline uint16_t
@@ -694,20 +720,25 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_tx_total = 0;
uint8_t i, j;
- uint8_t num_of_slaves = internals->active_slave_count;
+ uint8_t num_of_slaves;
uint8_t slaves[RTE_MAX_ETHPORTS];
struct ether_hdr *ether_hdr;
struct ether_addr primary_slave_addr;
struct ether_addr active_slave_addr;
- if (num_of_slaves < 1)
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
+ num_of_slaves = internals->active_slave_count;
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
+ return num_tx_total;
+ }
+
memcpy(slaves, internals->tlb_slaves_order,
sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
-
ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
if (nb_pkts > 3) {
@@ -735,7 +766,7 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if (num_tx_total == nb_pkts)
break;
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -785,6 +816,9 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
int i, j;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Search tx buffer for ARP packets and forward them to alb */
for (i = 0; i < nb_pkts; i++) {
eth_h = rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
@@ -875,6 +909,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
}
}
+ rte_spinlock_unlock(&bd_tx_q->lock);
/* Send non-ARP packets using tlb policy */
if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
@@ -914,14 +949,19 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * num_of_slaves);
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
+ }
/* Populate slaves mbuf with the packets which are to be sent on it */
for (i = 0; i < nb_pkts; i++) {
@@ -951,7 +991,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
num_tx_total += num_tx_slave;
}
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -984,17 +1024,24 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return num_tx_total;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
+ }
memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
distributing_count = 0;
for (i = 0; i < num_of_slaves; i++) {
- struct port *port = &mode_8023ad_ports[slaves[i]];
+ struct port *port;
+
+ port = &mode_8023ad_ports[internals->active_slaves[i]];
slave_slow_nb_pkts[i] = rte_ring_dequeue_burst(port->tx_ring,
slow_pkts, BOND_MODE_8023AX_SLAVE_TX_PKTS);
@@ -1043,7 +1090,7 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
bufs[j] = slave_bufs[i][num_tx_slave];
}
}
-
+ rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
@@ -1065,14 +1112,19 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
+ if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+ return 0;
+
/* Copy slave list to protect against slave up/down changes during tx
* bursting */
num_of_slaves = internals->active_slave_count;
memcpy(slaves, internals->active_slaves,
sizeof(internals->active_slaves[0]) * num_of_slaves);
- if (num_of_slaves < 1)
+ if (num_of_slaves < 1) {
+ rte_spinlock_unlock(&bd_tx_q->lock);
return 0;
+ }
/* Increment reference count on mbufs */
for (i = 0; i < nb_pkts; i++)
@@ -1093,6 +1145,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
most_successful_tx_slave = i;
}
}
+ rte_spinlock_unlock(&bd_tx_q->lock);
/* if slaves fail to transmit packets from burst, the calling application
* is not expected to know about multiple references to packets so we must
@@ -1819,7 +1872,6 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
bonded_eth_dev->data->dev_link.link_status = link_up;
}
-
return 0;
}
--
2.6.3
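
Note that the burst functions in this patch deliberately use `rte_spinlock_trylock()` rather than `rte_spinlock_lock()`: a poll-mode rx/tx path must never block, so when the control path holds a queue lock the burst simply reports zero packets and the application polls again. A self-contained sketch of that pattern, shaped like `bond_ethdev_rx_burst_active_backup()` (C11 atomics stand in for `rte_spinlock_t`, and `slave_rx()` is a hypothetical stand-in for `rte_eth_rx_burst()` on the primary slave):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-in for rte_spinlock_t; illustration only. */
typedef struct { atomic_flag locked; } spin_t;

static void spin_lock(spin_t *s)    { while (atomic_flag_test_and_set(&s->locked)) ; }
static void spin_unlock(spin_t *s)  { atomic_flag_clear(&s->locked); }
static bool spin_trylock(spin_t *s) { return !atomic_flag_test_and_set(&s->locked); }

struct bond_rx_queue {
	spin_t lock;
	uint16_t primary_port;
};

/* Hypothetical stand-in for rte_eth_rx_burst() on the primary slave. */
static uint16_t slave_rx(uint16_t port, uint16_t nb_pkts)
{
	(void)port;
	return nb_pkts;
}

/* Data-path pattern from the patch: never block in a burst function.
 * If the control path owns the queue lock, return 0 packets; the app
 * treats that like an empty poll and calls again on the next cycle. */
static uint16_t rx_burst_active_backup(struct bond_rx_queue *q, uint16_t nb_pkts)
{
	uint16_t ret = 0;

	if (spin_trylock(&q->lock)) {
		ret = slave_rx(q->primary_port, nb_pkts);
		spin_unlock(&q->lock);
	}
	return ret;
}
```

Returning 0 on contention is safe here because an rx/tx burst already has "no packets this poll" as a normal outcome, which is what makes trylock viable where a blocking lock would stall the datapath.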
* [dpdk-dev] [PATCH v2 4/6] bonding: add spinlock to stop function
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bernard Iremonger
` (2 preceding siblings ...)
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
@ 2016-05-26 16:38 ` Bernard Iremonger
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 5/6] bonding: add spinlock to link update function Bernard Iremonger
` (2 subsequent siblings)
6 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 93043ef..55b37a5 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1651,6 +1651,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
struct bond_dev_private *internals = eth_dev->data->dev_private;
uint8_t i;
+ rte_spinlock_lock(&internals->lock);
if (internals->mode == BONDING_MODE_8023AD) {
struct port *port;
void *pkt = NULL;
@@ -1672,7 +1673,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
}
if (internals->mode == BONDING_MODE_TLB ||
- internals->mode == BONDING_MODE_ALB) {
+ internals->mode == BONDING_MODE_ALB) {
bond_tlb_disable(internals);
for (i = 0; i < internals->active_slave_count; i++)
tlb_last_obytets[internals->active_slaves[i]] = 0;
@@ -1685,6 +1686,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
+ rte_spinlock_unlock(&internals->lock);
}
void
--
2.6.3
* [dpdk-dev] [PATCH v2 5/6] bonding: add spinlock to link update function
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bernard Iremonger
` (3 preceding siblings ...)
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 4/6] bonding: add spinlock to stop function Bernard Iremonger
@ 2016-05-26 16:38 ` Bernard Iremonger
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 6/6] bonding: remove memcpy from burst functions Bernard Iremonger
2016-06-10 14:45 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bruce Richardson
6 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 55b37a5..474bfcc 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1853,9 +1853,11 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
{
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+ rte_spinlock_lock(&internals->lock);
if (!bonded_eth_dev->data->dev_started ||
internals->active_slave_count == 0) {
bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ rte_spinlock_unlock(&internals->lock);
return 0;
} else {
struct rte_eth_dev *slave_eth_dev;
@@ -1874,6 +1876,7 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
bonded_eth_dev->data->dev_link.link_status = link_up;
}
+ rte_spinlock_unlock(&internals->lock);
return 0;
}
--
2.6.3
* [dpdk-dev] [PATCH v2 6/6] bonding: remove memcpy from burst functions
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bernard Iremonger
` (4 preceding siblings ...)
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 5/6] bonding: add spinlock to link update function Bernard Iremonger
@ 2016-05-26 16:38 ` Bernard Iremonger
2016-06-10 18:15 ` Ananyev, Konstantin
2016-06-10 14:45 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bruce Richardson
6 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
drivers/net/bonding/rte_eth_bond_pmd.c | 71 ++++++++++++++--------------------
1 file changed, 28 insertions(+), 43 deletions(-)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 474bfcc..d952658 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -146,7 +146,6 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
uint16_t num_rx_total = 0; /* Total number of received packets */
- uint8_t slaves[RTE_MAX_ETHPORTS];
uint8_t slave_count;
uint8_t collecting; /* current slave collecting status */
@@ -159,15 +158,16 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
return num_rx_total;
slave_count = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * slave_count);
for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
j = num_rx_total;
- collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+ collecting = ACTOR_STATE(
+ &mode_8023ad_ports[internals->active_slaves[i]],
+ COLLECTING);
/* Read packets from this slave */
- num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+ num_rx_total += rte_eth_rx_burst(internals->active_slaves[i],
+ bd_rx_q->queue_id,
&bufs[num_rx_total], nb_pkts - num_rx_total);
for (k = j; k < 2 && k < num_rx_total; k++)
@@ -188,7 +188,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {
if (hdr->ether_type == ether_type_slow_be) {
- bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
+ bond_mode_8023ad_handle_slow_pkt(
+ internals,
+ internals->active_slaves[i],
bufs[j]);
} else
rte_pktmbuf_free(bufs[j]);
@@ -409,8 +411,6 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
-
uint16_t num_tx_total = 0, num_tx_slave;
static int slave_idx = 0;
@@ -422,12 +422,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
-
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
@@ -446,7 +441,9 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
/* Send packet burst on each slave device */
for (i = 0; i < num_of_slaves; i++) {
if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_slave = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
slave_bufs[i], slave_nb_pkts[i]);
/* if tx burst fails move packets to end of bufs */
@@ -721,7 +718,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint8_t i, j;
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
struct ether_hdr *ether_hdr;
struct ether_addr primary_slave_addr;
@@ -736,9 +732,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
return num_tx_total;
}
- memcpy(slaves, internals->tlb_slaves_order,
- sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
-
ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
if (nb_pkts > 3) {
@@ -747,7 +740,8 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
}
for (i = 0; i < num_of_slaves; i++) {
- rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+ rte_eth_macaddr_get(internals->tlb_slaves_order[i],
+ &active_slave_addr);
for (j = num_tx_total; j < nb_pkts; j++) {
if (j + 3 < nb_pkts)
rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -760,8 +754,11 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
#endif
}
- num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- bufs + num_tx_total, nb_pkts - num_tx_total);
+ num_tx_total += rte_eth_tx_burst(
+ internals->tlb_slaves_order[i],
+ bd_tx_q->queue_id,
+ bufs + num_tx_total,
+ nb_pkts - num_tx_total);
if (num_tx_total == nb_pkts)
break;
@@ -937,7 +934,6 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q;
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
uint16_t num_tx_total = 0, num_tx_slave = 0, tx_fail_total = 0;
@@ -952,12 +948,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
-
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
@@ -975,7 +966,9 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
/* Send packet burst on each slave device */
for (i = 0; i < num_of_slaves; i++) {
if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_slave = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
slave_bufs[i], slave_nb_pkts[i]);
/* if tx burst fails move packets to end of bufs */
@@ -1003,7 +996,6 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q;
uint8_t num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
/* positions in slaves, not ID */
uint8_t distributing_offsets[RTE_MAX_ETHPORTS];
uint8_t distributing_count;
@@ -1027,16 +1019,12 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return num_tx_total;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return num_tx_total;
}
- memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
-
distributing_count = 0;
for (i = 0; i < num_of_slaves; i++) {
struct port *port;
@@ -1073,7 +1061,9 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
if (slave_nb_pkts[i] == 0)
continue;
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_slave = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
slave_bufs[i], slave_nb_pkts[i]);
/* If tx burst fails drop slow packets */
@@ -1102,8 +1092,6 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q;
uint8_t tx_failed_flag = 0, num_of_slaves;
- uint8_t slaves[RTE_MAX_ETHPORTS];
-
uint16_t max_nb_of_tx_pkts = 0;
int slave_tx_total[RTE_MAX_ETHPORTS];
@@ -1115,12 +1103,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
return 0;
- /* Copy slave list to protect against slave up/down changes during tx
- * bursting */
num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
-
if (num_of_slaves < 1) {
rte_spinlock_unlock(&bd_tx_q->lock);
return 0;
@@ -1132,8 +1115,10 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
/* Transmit burst on each active slave */
for (i = 0; i < num_of_slaves; i++) {
- slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- bufs, nb_pkts);
+ slave_tx_total[i] = rte_eth_tx_burst(
+ internals->active_slaves[i],
+ bd_tx_q->queue_id,
+ bufs, nb_pkts);
if (unlikely(slave_tx_total[i] < nb_pkts))
tx_failed_flag = 1;
--
2.6.3
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/6] bonding: locks
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bernard Iremonger
` (5 preceding siblings ...)
2016-05-26 16:38 ` [dpdk-dev] [PATCH v2 6/6] bonding: remove memcpy from burst functions Bernard Iremonger
@ 2016-06-10 14:45 ` Bruce Richardson
2016-06-10 18:24 ` Iremonger, Bernard
6 siblings, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2016-06-10 14:45 UTC (permalink / raw)
To: Bernard Iremonger; +Cc: dev, declan.doherty, konstantin.ananyev
On Thu, May 26, 2016 at 05:38:41PM +0100, Bernard Iremonger wrote:
> Add spinlock to bonding rx and tx queues.
> Take spinlock in rx and tx burst functions.
> Take all spinlocks in slave add and remove functions.
> With spinlocks in place remove memcpy of slaves.
>
> Changes in v2:
> Replace patch 1.
> Add patch 2 and reorder patches.
> Add spinlock to bonding rx and tx queues.
> Take all spinlocks in slave add and remove functions.
> Replace readlocks with spinlocks.
>
> Bernard Iremonger (6):
> bonding: add spinlock to rx and tx queues
> bonding: grab queue spinlocks in slave add and remove
> bonding: take queue spinlock in rx/tx burst functions
> bonding: add spinlock to stop function
> bonding: add spinlock to link update function
> bonding: remove memcpy from burst functions
>
> drivers/net/bonding/rte_eth_bond_api.c | 52 +++++++-
> drivers/net/bonding/rte_eth_bond_pmd.c | 196 ++++++++++++++++++-----------
> drivers/net/bonding/rte_eth_bond_private.h | 4 +-
> 3 files changed, 173 insertions(+), 79 deletions(-)
>
> --
The patches in this set are missing any explanation for the reasons behind each
patch. The commit messages are empty for every patch.
I'm also concerned at the fact that this patchset is adding lock operations all
over the bonding PMD. While other PMDs need synchronization between control plane
and data plane threads so that e.g. you don't do IO on a stopped port, they
don't use locks so as to get max performance. Nowhere in the patchset is there
an explanation as to why bonding is so different that it needs locks where
other PMDs can do without them. This should also be explained in each individual
patch as to why the area covered by the patch needs locks in this PMD (again,
compared to other PMDs).
Thanks,
/Bruce
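For contrast, the lock-free approach Bruce alludes to — and which the per-burst memcpy this patchset removes was implementing — takes a local snapshot of the slave table on the fast path, so a concurrent control-plane update cannot change the set mid-loop. A minimal sketch, with invented stand-in types and a stubbed `tx_burst()` in place of the real DPDK structures:

```c
/* Sketch of the snapshot (memcpy) approach the original burst
 * functions used: no lock on the datapath, just a local copy of the
 * active-slave table taken before the burst loop. Names and types are
 * simplified stand-ins, not the bonding PMD's real definitions. */
#include <stdint.h>
#include <string.h>

#define MAX_PORTS 32

struct internals {
    volatile uint8_t active_slave_count;
    uint8_t active_slaves[MAX_PORTS];
};

/* Stub for rte_eth_tx_burst(). */
static uint16_t tx_burst(uint8_t port_id, uint16_t nb_pkts)
{
    (void)port_id;
    return nb_pkts;
}

uint16_t snapshot_tx_burst(struct internals *in, uint16_t nb_pkts)
{
    uint8_t slaves[MAX_PORTS];
    uint8_t n, i;
    uint16_t total = 0;

    /* Local copy: tolerates a concurrent slave add/remove without a
     * lock, at the cost of a memcpy on every burst. */
    n = in->active_slave_count;
    memcpy(slaves, in->active_slaves, sizeof(slaves[0]) * n);

    for (i = 0; i < n; i++)
        total += tx_burst(slaves[i], nb_pkts);

    return total;
}
```

This is the trade-off under discussion in the thread: the snapshot keeps the fast path lock-free (like other PMDs) but may transmit to a slave that was just removed, while the spinlock version avoids the copy and the stale read but adds a lock operation to every burst.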
* Re: [dpdk-dev] [PATCH v2 0/6] bonding: locks
2016-06-10 14:45 ` [dpdk-dev] [PATCH v2 0/6] bonding: locks Bruce Richardson
@ 2016-06-10 18:24 ` Iremonger, Bernard
0 siblings, 0 replies; 42+ messages in thread
From: Iremonger, Bernard @ 2016-06-10 18:24 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: dev, Doherty, Declan, Ananyev, Konstantin
Hi Bruce,
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, June 10, 2016 3:46 PM
> To: Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org; Doherty, Declan <declan.doherty@intel.com>; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 0/6] bonding: locks
>
> On Thu, May 26, 2016 at 05:38:41PM +0100, Bernard Iremonger wrote:
> > Add spinlock to bonding rx and tx queues.
> > Take spinlock in rx and tx burst functions.
> > Take all spinlocks in slave add and remove functions.
> > With spinlocks in place remove memcpy of slaves.
> >
> > Changes in v2:
> > Replace patch 1.
> > Add patch 2 and reorder patches.
> > Add spinlock to bonding rx and tx queues.
> > Take all spinlocks in slave add and remove functions.
> > Replace readlocks with spinlocks.
> >
> > Bernard Iremonger (6):
> > bonding: add spinlock to rx and tx queues
> > bonding: grab queue spinlocks in slave add and remove
> > bonding: take queue spinlock in rx/tx burst functions
> > bonding: add spinlock to stop function
> > bonding: add spinlock to link update function
> > bonding: remove memcpy from burst functions
> >
> > drivers/net/bonding/rte_eth_bond_api.c | 52 +++++++-
> > drivers/net/bonding/rte_eth_bond_pmd.c | 196
> ++++++++++++++++++-----------
> > drivers/net/bonding/rte_eth_bond_private.h | 4 +-
> > 3 files changed, 173 insertions(+), 79 deletions(-)
> >
> > --
>
> The patches in this set are missing any explanation for the reasons behind
> each patch. The commit messages are empty for every patch.
>
> I'm also concerned at the fact that this patchset is adding lock operations all
> over the bonding PMD. While other PMDs need synchronization between
> control plane and data plane threads so that e.g. you don't do IO on a
> stopped port, they don't use locks so as to get max performance. Nowhere
> in the patchset is there an explanation as to why bonding is so different that
> it needs locks where other PMDs can do without them. This should also be
> explained in each individual patch as to why the area covered by the patch
> needs locks in this PMD (again, compared to other PMDs).
>
> Thanks,
> /Bruce
I will be sending a v3 for this patchset.
The empty commit messages were an oversight on my part; this will be corrected in the v3.
I will also try to explain why the locks are needed.
Regards,
Bernard.