DPDK patches and discussions
* [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes
@ 2020-04-30 19:08 Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 1/7] net/netvsc: fix comment spelling errors Stephen Hemminger
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

These are fixes for several small issues found during review
and my testing of netvsc PMD.

Stephen Hemminger (7):
  net/netvsc: fix comment spelling errors
  bus/vmbus: fix comment spelling errors
  net/vmbus: add missing barrier
  net/netvsc: check the vmbus ring buffer more often
  net/netvsc: manage VF port under rwlock
  net/netvsc: do RSS across rx queue only
  net/netvsc: don't configure RSS if disabled

 drivers/bus/vmbus/linux/vmbus_uio.c |   2 +-
 drivers/bus/vmbus/vmbus_channel.c   |   1 +
 drivers/bus/vmbus/vmbus_common.c    |   2 +-
 drivers/net/netvsc/hn_ethdev.c      |  31 +++++----
 drivers/net/netvsc/hn_nvs.h         |   2 +-
 drivers/net/netvsc/hn_rxtx.c        |  30 +++++---
 drivers/net/netvsc/hn_var.h         |  10 +--
 drivers/net/netvsc/hn_vf.c          | 104 +++++++++++++---------------
 8 files changed, 93 insertions(+), 89 deletions(-)

-- 
2.20.1


* [dpdk-dev] [PATCH 1/7] net/netvsc: fix comment spelling errors
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
@ 2020-04-30 19:08 ` Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 2/7] bus/vmbus: " Stephen Hemminger
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

No code change here.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/netvsc/hn_ethdev.c | 2 +-
 drivers/net/netvsc/hn_nvs.h    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 05f1a25a1abc..46da5a4e8170 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -72,7 +72,7 @@ static const struct hn_xstats_name_off hn_stat_strings[] = {
 
 /* The default RSS key.
  * This value is the same as MLX5 so that flows will be
- * received on same path for both VF ans synthetic NIC.
+ * received on same path for both VF and synthetic NIC.
  */
 static const uint8_t rss_default_key[NDIS_HASH_KEYSIZE_TOEPLITZ] = {
 	0x2c, 0xc6, 0x81, 0xd1,	0x5b, 0xdb, 0xf4, 0xf7,
diff --git a/drivers/net/netvsc/hn_nvs.h b/drivers/net/netvsc/hn_nvs.h
index 2563fd8d8669..015839e3644f 100644
--- a/drivers/net/netvsc/hn_nvs.h
+++ b/drivers/net/netvsc/hn_nvs.h
@@ -37,7 +37,7 @@
 #define NVS_RNDIS_MTYPE_CTRL		1
 
 /*
- * NVS message transacion status codes.
+ * NVS message transaction status codes.
  */
 #define NVS_STATUS_OK		1
 #define NVS_STATUS_FAILED		2
-- 
2.20.1


* [dpdk-dev] [PATCH 2/7] bus/vmbus: fix comment spelling errors
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 1/7] net/netvsc: fix comment spelling errors Stephen Hemminger
@ 2020-04-30 19:08 ` Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 3/7] net/vmbus: add missing barrier Stephen Hemminger
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

No code change here.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/bus/vmbus/linux/vmbus_uio.c | 2 +-
 drivers/bus/vmbus/vmbus_common.c    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index 10e50c9b5a10..5451bfd1501d 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -165,7 +165,7 @@ vmbus_uio_map_resource_by_index(struct rte_vmbus_device *dev, int idx,
 	dev->resource[idx].addr = mapaddr;
 	vmbus_map_addr = RTE_PTR_ADD(mapaddr, size);
 
-	/* Record result of sucessful mapping for use by secondary */
+	/* Record result of successful mapping for use by secondary */
 	maps[idx].addr = mapaddr;
 	maps[idx].size = size;
 
diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c
index 48a219f73529..eaf5c292f986 100644
--- a/drivers/bus/vmbus/vmbus_common.c
+++ b/drivers/bus/vmbus/vmbus_common.c
@@ -131,7 +131,7 @@ vmbus_probe_one_driver(struct rte_vmbus_driver *dr,
 }
 
 /*
- * IF device class GUID mathces, call the probe function of
+ * If device class GUID matches, call the probe function of
  * registere drivers for the vmbus device.
  * Return -1 if initialization failed,
  * and 1 if no driver found for this device.
-- 
2.20.1


* [dpdk-dev] [PATCH 3/7] net/vmbus: add missing barrier
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 1/7] net/netvsc: fix comment spelling errors Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 2/7] bus/vmbus: " Stephen Hemminger
@ 2020-04-30 19:08 ` Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 4/7] net/netvsc: check the vmbus ring buffer more often Stephen Hemminger
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

The check for the event ring being empty needs a barrier
to avoid any overly aggressive compiler or CPU optimization.
This is the same barrier as used in the Linux kernel.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/bus/vmbus/vmbus_channel.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/bus/vmbus/vmbus_channel.c b/drivers/bus/vmbus/vmbus_channel.c
index 46b3ba3f9f9e..ff2985c25758 100644
--- a/drivers/bus/vmbus/vmbus_channel.c
+++ b/drivers/bus/vmbus/vmbus_channel.c
@@ -199,6 +199,7 @@ bool rte_vmbus_chan_rx_empty(const struct vmbus_channel *channel)
 {
 	const struct vmbus_br *br = &channel->rxbr;
 
+	rte_smp_rmb();
 	return br->vbr->rindex == br->vbr->windex;
 }
 
-- 
2.20.1


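For context, a rough, simplified sketch (hypothetical types, not the driver's real ring layout) of why the emptiness check wants a read barrier: the host updates the write index from another CPU, and without the barrier the compiler or CPU could hoist or reorder the index loads relative to earlier reads of ring data.

  #include <stdbool.h>
  #include <stdint.h>
  #include <rte_atomic.h>

  /* Simplified stand-in for the vmbus ring indices. */
  struct ring_idx {
  	volatile uint32_t rindex;	/* consumer (guest) index */
  	volatile uint32_t windex;	/* producer (host) index  */
  };

  static inline bool
  ring_rx_empty(const struct ring_idx *r)
  {
  	/* Order the index loads against earlier reads of ring data,
  	 * mirroring the barrier used in the Linux kernel driver. */
  	rte_smp_rmb();
  	return r->rindex == r->windex;
  }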

* [dpdk-dev] [PATCH 4/7] net/netvsc: check the vmbus ring buffer more often
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
                   ` (2 preceding siblings ...)
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 3/7] net/vmbus: add missing barrier Stephen Hemminger
@ 2020-04-30 19:08 ` Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 5/7] net/netvsc: manage VF port under rwlock Stephen Hemminger
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

Since VF notifications are handled as VMBUS notifications on the
primary channel (and not as hotplug events), the primary channel
should be checked before deciding to use the VF for Rx or Tx.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/netvsc/hn_rxtx.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 19f00a05285f..773ba31fcc64 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1372,25 +1372,29 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct hn_data *hv = txq->hv;
 	struct rte_eth_dev *vf_dev;
 	bool need_sig = false;
-	uint16_t nb_tx, avail;
+	uint16_t nb_tx, tx_thresh;
 	int ret;
 
 	if (unlikely(hv->closed))
 		return 0;
 
+	/*
+	 * Always check for events on the primary channel
+	 * because that is where hotplug notifications occur.
+	 */
+	tx_thresh = RTE_MAX(txq->free_thresh, nb_pkts);
+	if (txq->queue_id == 0 ||
+	    rte_mempool_avail_count(txq->txdesc_pool) < tx_thresh)
+		hn_process_events(hv, txq->queue_id, 0);
+
 	/* Transmit over VF if present and up */
 	vf_dev = hn_get_vf_dev(hv);
-
 	if (vf_dev && vf_dev->data->dev_started) {
 		void *sub_q = vf_dev->data->tx_queues[queue_id];
 
 		return (*vf_dev->tx_pkt_burst)(sub_q, tx_pkts, nb_pkts);
 	}
 
-	avail = rte_mempool_avail_count(txq->txdesc_pool);
-	if (nb_pkts > avail || avail <= txq->free_thresh)
-		hn_process_events(hv, txq->queue_id, 0);
-
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		struct rte_mbuf *m = tx_pkts[nb_tx];
 		uint32_t pkt_size = m->pkt_len + HN_RNDIS_PKT_LEN;
@@ -1487,10 +1491,7 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (unlikely(hv->closed))
 		return 0;
 
-	/* Receive from VF if present and up */
-	vf_dev = hn_get_vf_dev(hv);
-
-	/* Check for new completions */
+	/* Check for new completions (and hotplug) */
 	if (likely(rte_ring_count(rxq->rx_ring) < nb_pkts))
 		hn_process_events(hv, rxq->queue_id, 0);
 
@@ -1499,6 +1500,7 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					   (void **)rx_pkts, nb_pkts, NULL);
 
 	/* If VF is available, check that as well */
+	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->data->dev_started)
 		nb_rcv += hn_recv_vf(vf_dev->data->port_id, rxq,
 				     rx_pkts + nb_rcv, nb_pkts - nb_rcv);
-- 
2.20.1


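As a rough illustration (a simplified, hypothetical helper, not code from the driver), the poll decision this patch moves ahead of the VF check amounts to: always poll queue 0, because that is where the VF association message arrives, and poll other queues only when they run low on completions or descriptors.

  #include <stdbool.h>
  #include <stdint.h>

  /* Poll the channel if this is the primary queue (VF/hotplug messages
   * arrive there) or if available descriptors fall below the threshold
   * needed for this burst. */
  static inline bool
  should_poll_channel(uint16_t queue_id, unsigned int avail,
  		    unsigned int thresh)
  {
  	return queue_id == 0 || avail < thresh;
  }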

* [dpdk-dev] [PATCH 5/7] net/netvsc: manage VF port under rwlock
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
                   ` (3 preceding siblings ...)
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 4/7] net/netvsc: check the vmbus ring buffer more often Stephen Hemminger
@ 2020-04-30 19:08 ` Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 6/7] net/netvsc: do RSS across rx queue only Stephen Hemminger
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

With multiple channels, the primary channel may receive notification
that the VF has been added or removed while a secondary channel is in
the process of doing receive or transmit.  Resolve this race by
converting the existing vf_lock to a reader/writer lock.

Users of the lock (tx/rx/stats) acquire it for read, and actions like
add/remove acquire it for write.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/netvsc/hn_ethdev.c |   2 +-
 drivers/net/netvsc/hn_rxtx.c   |   8 ++-
 drivers/net/netvsc/hn_var.h    |  10 ++--
 drivers/net/netvsc/hn_vf.c     | 104 +++++++++++++++------------------
 4 files changed, 61 insertions(+), 63 deletions(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 46da5a4e8170..7bf5465ce5e7 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -957,7 +957,7 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
 	hv->port_id = eth_dev->data->port_id;
 	hv->latency = HN_CHAN_LATENCY_NS;
 	hv->max_queues = 1;
-	rte_spinlock_init(&hv->vf_lock);
+	rte_rwlock_init(&hv->vf_lock);
 	hv->vf_port = HN_INVALID_PORT;
 
 	err = hn_parse_args(eth_dev);
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 773ba31fcc64..31fae5597598 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1388,12 +1388,16 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		hn_process_events(hv, txq->queue_id, 0);
 
 	/* Transmit over VF if present and up */
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->data->dev_started) {
 		void *sub_q = vf_dev->data->tx_queues[queue_id];
 
-		return (*vf_dev->tx_pkt_burst)(sub_q, tx_pkts, nb_pkts);
+		nb_tx = (*vf_dev->tx_pkt_burst)(sub_q, tx_pkts, nb_pkts);
+		rte_rwlock_read_unlock(&hv->vf_lock);
+		return nb_tx;
 	}
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		struct rte_mbuf *m = tx_pkts[nb_tx];
@@ -1500,11 +1504,13 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					   (void **)rx_pkts, nb_pkts, NULL);
 
 	/* If VF is available, check that as well */
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->data->dev_started)
 		nb_rcv += hn_recv_vf(vf_dev->data->port_id, rxq,
 				     rx_pkts + nb_rcv, nb_pkts - nb_rcv);
 
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return nb_rcv;
 }
 
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index b4c61717379f..d1d38b459601 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -98,7 +98,7 @@ struct hn_rx_bufinfo {
 struct hn_data {
 	struct rte_vmbus_device *vmbus;
 	struct hn_rx_queue *primary;
-	rte_spinlock_t  vf_lock;
+	rte_rwlock_t    vf_lock;
 	uint16_t	port_id;
 	uint16_t	vf_port;
 
@@ -188,15 +188,15 @@ hn_vf_attached(const struct hn_data *hv)
 	return hv->vf_port != HN_INVALID_PORT;
 }
 
-/* Get VF device for existing netvsc device */
+/*
+ * Get VF device for existing netvsc device
+ * Assumes vf_lock is held.
+ */
 static inline struct rte_eth_dev *
 hn_get_vf_dev(const struct hn_data *hv)
 {
 	uint16_t vf_port = hv->vf_port;
 
-	/* make sure vf_port is loaded */
-	rte_smp_rmb();
-
 	if (vf_port == HN_INVALID_PORT)
 		return NULL;
 	else
diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c
index 1261b2e2ef85..b7e3ba46bf63 100644
--- a/drivers/net/netvsc/hn_vf.c
+++ b/drivers/net/netvsc/hn_vf.c
@@ -82,8 +82,6 @@ static int hn_vf_attach(struct hn_data *hv, uint16_t port_id)
 
 	PMD_DRV_LOG(DEBUG, "Attach VF device %u", port_id);
 	hv->vf_port = port_id;
-	rte_smp_wmb();
-
 	return 0;
 }
 
@@ -98,9 +96,7 @@ int hn_vf_add(struct rte_eth_dev *dev, struct hn_data *hv)
 		return port;
 	}
 
-	rte_spinlock_lock(&hv->vf_lock);
 	err = hn_vf_attach(hv, port);
-
 	if (err == 0) {
 		dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
 		hv->vf_intr = (struct rte_intr_handle) {
@@ -110,7 +106,6 @@ int hn_vf_add(struct rte_eth_dev *dev, struct hn_data *hv)
 		dev->intr_handle = &hv->vf_intr;
 		hn_nvs_set_datapath(hv, NVS_DATAPATH_VF);
 	}
-	rte_spinlock_unlock(&hv->vf_lock);
 
 	return err;
 }
@@ -119,8 +114,6 @@ int hn_vf_add(struct rte_eth_dev *dev, struct hn_data *hv)
 static void hn_vf_remove(struct hn_data *hv)
 {
 
-	rte_spinlock_lock(&hv->vf_lock);
-
 	if (!hn_vf_attached(hv)) {
 		PMD_DRV_LOG(ERR, "VF path not active");
 	} else {
@@ -129,12 +122,10 @@ static void hn_vf_remove(struct hn_data *hv)
 
 		/* Stop transmission over VF */
 		hv->vf_port = HN_INVALID_PORT;
-		rte_smp_wmb();
 
 		/* Give back ownership */
 		rte_eth_dev_owner_unset(hv->vf_port, hv->owner.id);
 	}
-	rte_spinlock_unlock(&hv->vf_lock);
 }
 
 /* Handle VF association message from host */
@@ -156,15 +147,16 @@ hn_nvs_handle_vfassoc(struct rte_eth_dev *dev,
 		    vf_assoc->allocated ? "add to" : "remove from",
 		    dev->data->port_id);
 
+	rte_rwlock_write_lock(&hv->vf_lock);
 	hv->vf_present = vf_assoc->allocated;
 
-	if (dev->state != RTE_ETH_DEV_ATTACHED)
-		return;
-
-	if (vf_assoc->allocated)
-		hn_vf_add(dev, hv);
-	else
-		hn_vf_remove(hv);
+	if (dev->state == RTE_ETH_DEV_ATTACHED) {
+		if (vf_assoc->allocated)
+			hn_vf_add(dev, hv);
+		else
+			hn_vf_remove(hv);
+	}
+	rte_rwlock_write_unlock(&hv->vf_lock);
 }
 
 static void
@@ -223,11 +215,11 @@ int hn_vf_info_get(struct hn_data *hv, struct rte_eth_dev_info *info)
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		ret = hn_vf_info_merge(vf_dev, info);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return ret;
 }
 
@@ -238,11 +230,11 @@ int hn_vf_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->dev_ops->link_update)
 		ret = (*vf_dev->dev_ops->link_update)(vf_dev, wait_to_complete);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	return ret;
 }
@@ -315,10 +307,10 @@ int hn_vf_configure(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	if (hv->vf_port != HN_INVALID_PORT)
 		ret = _hn_vf_configure(dev, hv->vf_port, dev_conf);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return ret;
 }
 
@@ -328,11 +320,11 @@ const uint32_t *hn_vf_supported_ptypes(struct rte_eth_dev *dev)
 	struct rte_eth_dev *vf_dev;
 	const uint32_t *ptypes = NULL;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->dev_ops->dev_supported_ptypes_get)
 		ptypes = (*vf_dev->dev_ops->dev_supported_ptypes_get)(vf_dev);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	return ptypes;
 }
@@ -343,11 +335,11 @@ int hn_vf_start(struct rte_eth_dev *dev)
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		ret = rte_eth_dev_start(vf_dev->data->port_id);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return ret;
 }
 
@@ -356,11 +348,11 @@ void hn_vf_stop(struct rte_eth_dev *dev)
 	struct hn_data *hv = dev->data->dev_private;
 	struct rte_eth_dev *vf_dev;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		rte_eth_dev_stop(vf_dev->data->port_id);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 }
 
 /* If VF is present, then cascade configuration down */
@@ -368,11 +360,11 @@ void hn_vf_stop(struct rte_eth_dev *dev)
 	{							\
 		struct hn_data *hv = (dev)->data->dev_private;	\
 		struct rte_eth_dev *vf_dev;			\
-		rte_spinlock_lock(&hv->vf_lock);		\
+		rte_rwlock_read_lock(&hv->vf_lock);		\
 		vf_dev = hn_get_vf_dev(hv);			\
 		if (vf_dev)					\
 			func(vf_dev->data->port_id);		\
-		rte_spinlock_unlock(&hv->vf_lock);		\
+		rte_rwlock_read_unlock(&hv->vf_lock);		\
 	}
 
 /* If VF is present, then cascade configuration down */
@@ -381,11 +373,11 @@ void hn_vf_stop(struct rte_eth_dev *dev)
 		struct hn_data *hv = (dev)->data->dev_private;	\
 		struct rte_eth_dev *vf_dev;			\
 		int ret = 0;					\
-		rte_spinlock_lock(&hv->vf_lock);		\
+		rte_rwlock_read_lock(&hv->vf_lock);		\
 		vf_dev = hn_get_vf_dev(hv);			\
 		if (vf_dev)					\
 			ret = func(vf_dev->data->port_id);	\
-		rte_spinlock_unlock(&hv->vf_lock);		\
+		rte_rwlock_read_unlock(&hv->vf_lock);		\
 		return ret;					\
 	}
 
@@ -399,13 +391,13 @@ void hn_vf_close(struct rte_eth_dev *dev)
 	struct hn_data *hv = dev->data->dev_private;
 	uint16_t vf_port;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_port = hv->vf_port;
 	if (vf_port != HN_INVALID_PORT)
 		rte_eth_dev_close(vf_port);
 
 	hv->vf_port = HN_INVALID_PORT;
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 }
 
 int hn_vf_stats_reset(struct rte_eth_dev *dev)
@@ -441,12 +433,12 @@ int hn_vf_mc_addr_list(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		ret = rte_eth_dev_set_mc_addr_list(vf_dev->data->port_id,
 						   mc_addr_set, nb_mc_addr);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return ret;
 }
 
@@ -459,13 +451,13 @@ int hn_vf_tx_queue_setup(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		ret = rte_eth_tx_queue_setup(vf_dev->data->port_id,
 					     queue_idx, nb_desc,
 					     socket_id, tx_conf);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return ret;
 }
 
@@ -473,7 +465,7 @@ void hn_vf_tx_queue_release(struct hn_data *hv, uint16_t queue_id)
 {
 	struct rte_eth_dev *vf_dev;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->dev_ops->tx_queue_release) {
 		void *subq = vf_dev->data->tx_queues[queue_id];
@@ -481,7 +473,7 @@ void hn_vf_tx_queue_release(struct hn_data *hv, uint16_t queue_id)
 		(*vf_dev->dev_ops->tx_queue_release)(subq);
 	}
 
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 }
 
 int hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
@@ -494,13 +486,13 @@ int hn_vf_rx_queue_setup(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		ret = rte_eth_rx_queue_setup(vf_dev->data->port_id,
 					     queue_idx, nb_desc,
 					     socket_id, rx_conf, mp);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return ret;
 }
 
@@ -508,14 +500,14 @@ void hn_vf_rx_queue_release(struct hn_data *hv, uint16_t queue_id)
 {
 	struct rte_eth_dev *vf_dev;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->dev_ops->rx_queue_release) {
 		void *subq = vf_dev->data->rx_queues[queue_id];
 
 		(*vf_dev->dev_ops->rx_queue_release)(subq);
 	}
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 }
 
 int hn_vf_stats_get(struct rte_eth_dev *dev,
@@ -525,11 +517,11 @@ int hn_vf_stats_get(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		ret = rte_eth_stats_get(vf_dev->data->port_id, stats);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 	return ret;
 }
 
@@ -541,12 +533,12 @@ int hn_vf_xstats_get_names(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int i, count = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		count = rte_eth_xstats_get_names(vf_dev->data->port_id,
 						 names, n);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	/* add vf_ prefix to xstat names */
 	if (names) {
@@ -570,12 +562,12 @@ int hn_vf_xstats_get(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int i, count = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		count = rte_eth_xstats_get(vf_dev->data->port_id,
 					   xstats + offset, n - offset);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	/* Offset id's for VF stats */
 	if (count > 0) {
@@ -592,13 +584,13 @@ int hn_vf_xstats_reset(struct rte_eth_dev *dev)
 	struct rte_eth_dev *vf_dev;
 	int ret;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev)
 		ret = rte_eth_xstats_reset(vf_dev->data->port_id);
 	else
 		ret = -EINVAL;
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	return ret;
 }
@@ -610,11 +602,11 @@ int hn_vf_rss_hash_update(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->dev_ops->rss_hash_update)
 		ret = vf_dev->dev_ops->rss_hash_update(vf_dev, rss_conf);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	return ret;
 }
@@ -627,12 +619,12 @@ int hn_vf_reta_hash_update(struct rte_eth_dev *dev,
 	struct rte_eth_dev *vf_dev;
 	int ret = 0;
 
-	rte_spinlock_lock(&hv->vf_lock);
+	rte_rwlock_read_lock(&hv->vf_lock);
 	vf_dev = hn_get_vf_dev(hv);
 	if (vf_dev && vf_dev->dev_ops->reta_update)
 		ret = vf_dev->dev_ops->reta_update(vf_dev,
 						   reta_conf, reta_size);
-	rte_spinlock_unlock(&hv->vf_lock);
+	rte_rwlock_read_unlock(&hv->vf_lock);
 
 	return ret;
 }
-- 
2.20.1


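A minimal sketch of the resulting locking discipline, using hypothetical names rather than the driver's real structures: data-path users hold the lock for read across their entire use of the VF port, while the host-driven add/remove path takes it for write.

  #include <stdint.h>
  #include <rte_rwlock.h>

  #define INVALID_PORT UINT16_MAX

  struct vf_state {
  	rte_rwlock_t lock;
  	uint16_t vf_port;		/* INVALID_PORT when no VF attached */
  };

  /* Data path: hold the read lock across the whole VF access so the
   * hotplug writer cannot detach the port mid-burst. */
  static int
  io_over_vf(struct vf_state *s)
  {
  	int used_vf = 0;

  	rte_rwlock_read_lock(&s->lock);
  	if (s->vf_port != INVALID_PORT) {
  		/* ... transmit or receive on s->vf_port ... */
  		used_vf = 1;
  	}
  	rte_rwlock_read_unlock(&s->lock);
  	return used_vf;
  }

  /* Hotplug path: exclusive while the VF port is swapped in or out. */
  static void
  vf_set_port(struct vf_state *s, uint16_t port)
  {
  	rte_rwlock_write_lock(&s->lock);
  	s->vf_port = port;
  	rte_rwlock_write_unlock(&s->lock);
  }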

* [dpdk-dev] [PATCH 6/7] net/netvsc: do RSS across rx queue only
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
                   ` (4 preceding siblings ...)
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 5/7] net/netvsc: manage VF port under rwlock Stephen Hemminger
@ 2020-04-30 19:08 ` Stephen Hemminger
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 7/7] net/netvsc: don't configure RSS if disabled Stephen Hemminger
  2020-05-05 18:14 ` [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Ferruh Yigit
  7 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

If the number of Tx queues is greater than the number of Rx queues,
the driver ends up allocating more channels than Rx queues.
The problem is that the RSS indirection table was programmed such
that some packets would end up on a channel that is never polled.
The fix is to limit the RSS indirection table to the number of
Rx queues, not the number of channels.

Fixes: 92d23a57cafe ("net/netvsc: support configuring RSS parameters")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/netvsc/hn_ethdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 7bf5465ce5e7..60102362e124 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -575,7 +575,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 				 dev->data->nb_tx_queues);
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++)
-		hv->rss_ind[i] = i % hv->num_queues;
+		hv->rss_ind[i] = i % dev->data->nb_rx_queues;
 
 	hn_rss_hash_init(hv, rss_conf);
 
-- 
2.20.1


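A worked example of the one-line change, assuming an illustrative 128-entry indirection table and example queue counts: with 2 Rx queues and 4 channels, indexing by the Rx queue count keeps every table entry on a queue that is actually polled.

  #include <stdint.h>
  #include <stdio.h>

  #define INDIR_ENTRIES 128	/* illustrative indirection table size */

  int main(void)
  {
  	uint16_t nb_rx_queues = 2;	/* example: 2 Rx queues, 4 channels */
  	uint16_t rss_ind[INDIR_ENTRIES];
  	unsigned int i;

  	for (i = 0; i < INDIR_ENTRIES; i++)
  		rss_ind[i] = i % nb_rx_queues;	/* only queues 0 and 1 used */

  	/* Entries alternate 0,1,0,1,... so no packet is hashed to a
  	 * channel that has no Rx queue behind it. */
  	printf("entry 0 -> q%u, entry 1 -> q%u, entry 2 -> q%u\n",
  	       (unsigned int)rss_ind[0], (unsigned int)rss_ind[1],
  	       (unsigned int)rss_ind[2]);
  	return 0;
  }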

* [dpdk-dev] [PATCH 7/7] net/netvsc: don't configure RSS if disabled
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
                   ` (5 preceding siblings ...)
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 6/7] net/netvsc: do RSS across rx queue only Stephen Hemminger
@ 2020-04-30 19:08 ` Stephen Hemminger
  2020-05-05 18:14 ` [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Ferruh Yigit
  7 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2020-04-30 19:08 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger

This fixes the problem where the driver would not start if it
had only a single Rx queue and multiple Tx queues. In that case,
RSS should stay disabled.

Fixes: 92d23a57cafe ("net/netvsc: support configuring RSS parameters")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/netvsc/hn_ethdev.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 60102362e124..55b8a63804a9 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -376,14 +376,15 @@ static int hn_rss_hash_update(struct rte_eth_dev *dev,
 
 	hn_rss_hash_init(hv, rss_conf);
 
-	err = hn_rndis_conf_rss(hv, 0);
-	if (err) {
-		PMD_DRV_LOG(NOTICE,
-			    "rss reconfig failed (RSS disabled)");
-		return err;
+	if (rss_conf->rss_hf != 0) {
+		err = hn_rndis_conf_rss(hv, 0);
+		if (err) {
+			PMD_DRV_LOG(NOTICE,
+				    "rss reconfig failed (RSS disabled)");
+			return err;
+		}
 	}
 
-
 	return hn_vf_rss_hash_update(dev, rss_conf);
 }
 
@@ -595,11 +596,13 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 			return err;
 		}
 
-		err = hn_rndis_conf_rss(hv, 0);
-		if (err) {
-			PMD_DRV_LOG(NOTICE,
-				    "initial RSS config failed");
-			return err;
+		if (rss_conf->rss_hf != 0) {
+			err = hn_rndis_conf_rss(hv, 0);
+			if (err) {
+				PMD_DRV_LOG(NOTICE,
+					    "initial RSS config failed");
+				return err;
+			}
 		}
 	}
 
-- 
2.20.1


* Re: [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes
  2020-04-30 19:08 [dpdk-dev] [PATCH 0/7] net/netvsc: more minor fixes Stephen Hemminger
                   ` (6 preceding siblings ...)
  2020-04-30 19:08 ` [dpdk-dev] [PATCH 7/7] net/netvsc: don't configure RSS if disabled Stephen Hemminger
@ 2020-05-05 18:14 ` Ferruh Yigit
  7 siblings, 0 replies; 9+ messages in thread
From: Ferruh Yigit @ 2020-05-05 18:14 UTC (permalink / raw)
  To: Stephen Hemminger, dev

On 4/30/2020 8:08 PM, Stephen Hemminger wrote:
> These are fixes for several small issues found during review
> and my testing of netvsc PMD.
> 
> Stephen Hemminger (7):
>   net/netvsc: fix comment spelling errors
>   bus/vmbus: fix comment spelling errors
>   net/vmbus: add missing barrier
>   net/netvsc: check the vmbus ring buffer more often
>   net/netvsc: manage VF port under rwlock
>   net/netvsc: do RSS across rx queue only
>   net/netvsc: don't configure RSS if disabled
> 

Series applied to dpdk-next-net/master, thanks.
