* [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes
@ 2018-07-24 21:08 Stephen Hemminger
2018-07-24 21:08 ` [dpdk-dev] [PATCH 1/4] netvsc: change rx descriptor setup and sizing Stephen Hemminger
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: Stephen Hemminger @ 2018-07-24 21:08 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
The netvsc PMD is faster than the kernel driver but is still slow
at receiving packets. These patches help.
Stephen Hemminger (4):
netvsc: change rx descriptor setup and sizing
netvsc: avoid over filling receive descriptor ring
netvsc: implement queue info get handles
netvsc/vmbus: avoid signalling host on read
drivers/bus/vmbus/rte_bus_vmbus.h | 13 ++-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 1 +
drivers/bus/vmbus/vmbus_bufring.c | 3 +
drivers/bus/vmbus/vmbus_channel.c | 45 ++++----
drivers/net/netvsc/hn_ethdev.c | 2 +
drivers/net/netvsc/hn_rxtx.c | 110 ++++++++++----------
drivers/net/netvsc/hn_var.h | 7 +-
7 files changed, 99 insertions(+), 82 deletions(-)
--
2.18.0
* [dpdk-dev] [PATCH 1/4] netvsc: change rx descriptor setup and sizing
2018-07-24 21:08 [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes Stephen Hemminger
@ 2018-07-24 21:08 ` Stephen Hemminger
2018-07-24 21:08 ` [dpdk-dev] [PATCH 2/4] netvsc: avoid over filling receive descriptor ring Stephen Hemminger
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Stephen Hemminger @ 2018-07-24 21:08 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Stephen Hemminger
Increase the size of the ring used to hold mbufs that have been
received but not yet processed. The default is now based on the size
of the receive mbuf pool rather than the number of receive buffer
sections from the host.
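
For illustration, a minimal sketch of the sizing logic (not the driver
code itself; the function and ring name are hypothetical, and the real
setup lives in hn_dev_rx_queue_setup()):

#include <rte_common.h>
#include <rte_mempool.h>
#include <rte_ring.h>

/* Size a per-queue staging ring from the mbuf pool: split the pool
 * across the RX queues, clamp the requested descriptor count to that
 * share, and round up to a power of two, which rte_ring_create()
 * requires by default.
 */
static struct rte_ring *
example_rx_staging_ring(struct rte_mempool *mp, uint16_t nb_rx_queues,
			unsigned int nb_desc, int socket_id)
{
	unsigned int count = rte_mempool_avail_count(mp) / nb_rx_queues;

	if (nb_desc == 0 || nb_desc > count)
		nb_desc = count;

	return rte_ring_create("example_rx_stage", rte_align32pow2(nb_desc),
			       socket_id, 0);
}
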
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
drivers/net/netvsc/hn_rxtx.c | 24 +++++++-----------------
1 file changed, 7 insertions(+), 17 deletions(-)
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 6d2f41c4c011..9a2dd9cb1beb 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -728,18 +728,12 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
struct rte_mempool *mp)
{
struct hn_data *hv = dev->data->dev_private;
- uint32_t qmax = hv->rxbuf_section_cnt;
char ring_name[RTE_RING_NAMESIZE];
struct hn_rx_queue *rxq;
unsigned int count;
- size_t size;
- int err = -ENOMEM;
PMD_INIT_FUNC_TRACE();
- if (nb_desc == 0 || nb_desc > qmax)
- nb_desc = qmax;
-
if (queue_idx == 0) {
rxq = hv->primary;
} else {
@@ -749,14 +743,9 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
}
rxq->mb_pool = mp;
-
- count = rte_align32pow2(nb_desc);
- size = sizeof(struct rte_ring) + count * sizeof(void *);
- rxq->rx_ring = rte_malloc_socket("RX_RING", size,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (!rxq->rx_ring)
- goto fail;
+ count = rte_mempool_avail_count(mp) / dev->data->nb_rx_queues;
+ if (nb_desc == 0 || nb_desc > count)
+ nb_desc = count;
/*
* Staging ring from receive event logic to rx_pkts.
@@ -765,9 +754,10 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
*/
snprintf(ring_name, sizeof(ring_name),
"hn_rx_%u_%u", dev->data->port_id, queue_idx);
- err = rte_ring_init(rxq->rx_ring, ring_name,
- count, 0);
- if (err)
+ rxq->rx_ring = rte_ring_create(ring_name,
+ rte_align32pow2(nb_desc),
+ socket_id, 0);
+ if (!rxq->rx_ring)
goto fail;
dev->data->rx_queues[queue_idx] = rxq;
--
2.18.0
* [dpdk-dev] [PATCH 2/4] netvsc: avoid over filling receive descriptor ring
2018-07-24 21:08 [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes Stephen Hemminger
2018-07-24 21:08 ` [dpdk-dev] [PATCH 1/4] netvsc: change rx descriptor setup and sizing Stephen Hemminger
@ 2018-07-24 21:08 ` Stephen Hemminger
2018-07-24 21:08 ` [dpdk-dev] [PATCH 3/4] netvsc: implement queue info get handles Stephen Hemminger
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Stephen Hemminger @ 2018-07-24 21:08 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Stephen Hemminger
If the number of packets requested is already present in the
rx_ring, then skip reading the ring buffer from the host.
If the staging ring between the poll and receive side is full,
then don't poll (let incoming packets stay on the host).
If no more transmit descriptors are available, still try to
flush any outstanding data.
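
As a rough sketch of the receive-path gating (hypothetical helper names;
the real logic is split across hn_recv_pkts() and hn_process_events()):

#include <rte_mbuf.h>
#include <rte_ring.h>

/* Only drain the host channel when the staging ring cannot already
 * satisfy the burst, then hand out whatever has been staged.
 * drain_channel() stands in for the event-processing loop, which now
 * stops early once the staging ring is full.
 */
static uint16_t
example_recv_burst(struct rte_ring *stage, void (*drain_channel)(void),
		   struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	if (rte_ring_count(stage) < nb_pkts)
		drain_channel();

	return rte_ring_sc_dequeue_burst(stage, (void **)rx_pkts,
					 nb_pkts, NULL);
}
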
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
drivers/net/netvsc/hn_rxtx.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 9a2dd9cb1beb..1aff64ee3ae5 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -878,11 +878,11 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
PMD_DRV_LOG(ERR, "unknown chan pkt %u", pkt->type);
break;
}
+
+ if (rxq->rx_ring && rte_ring_full(rxq->rx_ring))
+ break;
}
rte_spinlock_unlock(&rxq->ring_lock);
-
- if (unlikely(ret != -EAGAIN))
- PMD_DRV_LOG(ERR, "channel receive failed: %d", ret);
}
static void hn_append_to_chim(struct hn_tx_queue *txq,
@@ -1248,7 +1248,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
pkt = hn_try_txagg(hv, txq, pkt_size);
if (unlikely(!pkt))
- goto fail;
+ break;
hn_encap(pkt, txq->queue_id, m);
hn_append_to_chim(txq, pkt, m);
@@ -1269,7 +1269,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
} else {
txd = hn_new_txd(hv, txq);
if (unlikely(!txd))
- goto fail;
+ break;
}
pkt = txd->rndis_pkt;
@@ -1310,8 +1310,9 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(hv->closed))
return 0;
- /* Get all outstanding receive completions */
- hn_process_events(hv, rxq->queue_id);
+ /* If the staging ring cannot satisfy the burst, poll for more events */
+ if (rte_ring_count(rxq->rx_ring) < nb_pkts)
+ hn_process_events(hv, rxq->queue_id);
/* Get mbufs off staging ring */
return rte_ring_sc_dequeue_burst(rxq->rx_ring, (void **)rx_pkts,
--
2.18.0
* [dpdk-dev] [PATCH 3/4] netvsc: implement queue info get handles
2018-07-24 21:08 [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes Stephen Hemminger
2018-07-24 21:08 ` [dpdk-dev] [PATCH 1/4] netvsc: change rx descriptor setup and sizing Stephen Hemminger
2018-07-24 21:08 ` [dpdk-dev] [PATCH 2/4] netvsc: avoid over filling receive descriptor ring Stephen Hemminger
@ 2018-07-24 21:08 ` Stephen Hemminger
2018-07-24 21:08 ` [dpdk-dev] [PATCH 4/4] netvsc/vmbus: avoid signalling host on read Stephen Hemminger
2018-08-05 9:14 ` [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes Thomas Monjalon
4 siblings, 0 replies; 6+ messages in thread
From: Stephen Hemminger @ 2018-07-24 21:08 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Stephen Hemminger
Implement the rx and tx queue info get callbacks. This helps when
diagnosing ring issues in testpmd.
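
For example, an application can read the new information back through
the standard ethdev calls (a hypothetical snippet, not part of this patch):

#include <stdio.h>
#include <rte_ethdev.h>

/* Query the RX queue info exposed by the new callback. */
static void
example_show_rxq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;

	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
		printf("rxq %u: %u descriptors, scattered_rx %u\n",
		       (unsigned int)queue_id,
		       (unsigned int)qinfo.nb_desc,
		       (unsigned int)qinfo.scattered_rx);
}
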
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
drivers/net/netvsc/hn_ethdev.c | 2 ++
drivers/net/netvsc/hn_rxtx.c | 22 ++++++++++++++++++++++
drivers/net/netvsc/hn_var.h | 4 ++++
3 files changed, 28 insertions(+)
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 47ed760b825d..78b842ba2d68 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -536,6 +536,8 @@ static const struct eth_dev_ops hn_eth_dev_ops = {
.dev_stop = hn_dev_stop,
.dev_close = hn_dev_close,
.dev_infos_get = hn_dev_info_get,
+ .txq_info_get = hn_dev_tx_queue_info,
+ .rxq_info_get = hn_dev_rx_queue_info,
.promiscuous_enable = hn_dev_promiscuous_enable,
.promiscuous_disable = hn_dev_promiscuous_disable,
.allmulticast_enable = hn_dev_allmulticast_enable,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 1aff64ee3ae5..17cebeb74456 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -268,6 +268,17 @@ hn_dev_tx_queue_release(void *arg)
rte_free(txq);
}
+void
+hn_dev_tx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct hn_data *hv = dev->data->dev_private;
+ struct hn_tx_queue *txq = dev->data->tx_queues[queue_idx];
+
+ qinfo->conf.tx_free_thresh = txq->free_thresh;
+ qinfo->nb_desc = hv->tx_pool->size;
+}
+
static void
hn_nvs_send_completed(struct rte_eth_dev *dev, uint16_t queue_id,
unsigned long xactid, const struct hn_nvs_rndis_ack *ack)
@@ -790,6 +801,17 @@ hn_dev_rx_queue_release(void *arg)
}
}
+void
+hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct hn_rx_queue *rxq = dev->data->rx_queues[queue_idx];
+
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = 1;
+ qinfo->nb_desc = rte_ring_get_capacity(rxq->rx_ring);
+}
+
static void
hn_nvs_handle_notify(const struct vmbus_chanpkt_hdr *pkthdr,
const void *data)
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index f0358c58226a..3f3b442697af 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -141,6 +141,8 @@ int hn_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
void hn_dev_tx_queue_release(void *arg);
+void hn_dev_tx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+ struct rte_eth_txq_info *qinfo);
struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
uint16_t queue_id,
@@ -151,3 +153,5 @@ int hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mp);
void hn_dev_rx_queue_release(void *arg);
+void hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+ struct rte_eth_rxq_info *qinfo);
--
2.18.0
* [dpdk-dev] [PATCH 4/4] netvsc/vmbus: avoid signalling host on read
2018-07-24 21:08 [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes Stephen Hemminger
` (2 preceding siblings ...)
2018-07-24 21:08 ` [dpdk-dev] [PATCH 3/4] netvsc: implement queue info get handles Stephen Hemminger
@ 2018-07-24 21:08 ` Stephen Hemminger
2018-08-05 9:14 ` [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes Thomas Monjalon
4 siblings, 0 replies; 6+ messages in thread
From: Stephen Hemminger @ 2018-07-24 21:08 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Stephen Hemminger
Don't signal the host that the receive ring has been read until all events
have been processed. This reduces the number of guest exits and
therefore improves performance.
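
The intended usage pattern, as a sketch (the handler is hypothetical;
the real loop is in hn_process_events()): read packets with
rte_vmbus_chan_recv_raw(), accumulate the byte counts it now returns,
and signal the host once at the end.

#include <rte_bus_vmbus.h>

static void
example_drain_channel(struct vmbus_channel *chan,
		      void *buf, uint32_t buflen,
		      void (*handle_pkt)(const void *data, uint32_t len))
{
	uint32_t bytes_read = 0;

	for (;;) {
		uint32_t len = buflen;
		int ret = rte_vmbus_chan_recv_raw(chan, buf, &len);

		if (ret == -EAGAIN)
			break;		/* ring is empty */
		if (ret < 0)
			break;		/* -ENOBUFS or other error */

		bytes_read += ret;	/* bytes consumed from the ring */
		handle_pkt(buf, len);
	}

	/* One host notification covering everything read above */
	if (bytes_read > 0)
		rte_vmbus_chan_signal_read(chan, bytes_read);
}
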
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
drivers/bus/vmbus/rte_bus_vmbus.h | 13 +++++-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 1 +
drivers/bus/vmbus/vmbus_bufring.c | 3 ++
drivers/bus/vmbus/vmbus_channel.c | 45 +++++++++----------
drivers/net/netvsc/hn_rxtx.c | 49 +++++++--------------
drivers/net/netvsc/hn_var.h | 3 +-
6 files changed, 56 insertions(+), 58 deletions(-)
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 0100f80ff9a0..4a2c1f6fd918 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -337,12 +337,23 @@ int rte_vmbus_chan_recv(struct vmbus_channel *chan,
* @param len
* Pointer to size of receive buffer (in/out)
* @return
- * On success, returns 0
+ * On success, returns the number of bytes read.
* On failure, returns negative errno.
*/
int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
void *data, uint32_t *len);
+/**
+ * Notify host of bytes read (after recv_raw)
+ * Signals host if required.
+ *
+ * @param chan
+ * Pointer to vmbus_channel structure.
+ * @param bytes_read
+ * Number of bytes read since last signal
+ */
+void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read);
+
/**
* Determine sub channel index of the given channel
*
diff --git a/drivers/bus/vmbus/rte_bus_vmbus_version.map b/drivers/bus/vmbus/rte_bus_vmbus_version.map
index 5324fef4662c..dabb9203104b 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus_version.map
+++ b/drivers/bus/vmbus/rte_bus_vmbus_version.map
@@ -10,6 +10,7 @@ DPDK_18.08 {
rte_vmbus_chan_rx_empty;
rte_vmbus_chan_send;
rte_vmbus_chan_send_sglist;
+ rte_vmbus_chan_signal_read;
rte_vmbus_chan_signal_tx;
rte_vmbus_irq_mask;
rte_vmbus_irq_read;
diff --git a/drivers/bus/vmbus/vmbus_bufring.c b/drivers/bus/vmbus/vmbus_bufring.c
index c2d7d8cc2254..c88001605dbb 100644
--- a/drivers/bus/vmbus/vmbus_bufring.c
+++ b/drivers/bus/vmbus/vmbus_bufring.c
@@ -221,6 +221,9 @@ vmbus_rxbr_read(struct vmbus_br *rbr, void *data, size_t dlen, size_t skip)
if (vmbus_br_availread(rbr) < dlen + skip + sizeof(uint64_t))
return -EAGAIN;
+ /* Record where host was when we started read (for debug) */
+ rbr->windex = rbr->vbr->windex;
+
/*
* Copy channel packet from RX bufring.
*/
diff --git a/drivers/bus/vmbus/vmbus_channel.c b/drivers/bus/vmbus/vmbus_channel.c
index f9feada9b047..cc5f3e8379a5 100644
--- a/drivers/bus/vmbus/vmbus_channel.c
+++ b/drivers/bus/vmbus/vmbus_channel.c
@@ -176,49 +176,37 @@ bool rte_vmbus_chan_rx_empty(const struct vmbus_channel *channel)
return br->vbr->rindex == br->vbr->windex;
}
-static int vmbus_read_and_signal(struct vmbus_channel *chan,
- void *data, size_t dlen, size_t skip)
+/* Signal host after reading N bytes */
+void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read)
{
struct vmbus_br *rbr = &chan->rxbr;
- uint32_t write_sz, pending_sz, bytes_read;
- int error;
-
- /* Record where host was when we started read (for debug) */
- rbr->windex = rbr->vbr->windex;
-
- /* Read data and skip packet header */
- error = vmbus_rxbr_read(rbr, data, dlen, skip);
- if (error)
- return error;
+ uint32_t write_sz, pending_sz;
/* No need for signaling on older versions */
if (!rbr->vbr->feature_bits.feat_pending_send_sz)
- return 0;
+ return;
/* Make sure reading of pending happens after new read index */
rte_mb();
pending_sz = rbr->vbr->pending_send;
if (!pending_sz)
- return 0;
+ return;
rte_smp_rmb();
write_sz = vmbus_br_availwrite(rbr, rbr->vbr->windex);
- bytes_read = dlen + skip + sizeof(uint64_t);
/* If there was space before then host was not blocked */
if (write_sz - bytes_read > pending_sz)
- return 0;
+ return;
/* If pending write will not fit */
if (write_sz <= pending_sz)
- return 0;
+ return;
vmbus_set_event(chan->device, chan);
- return 0;
}
-/* TODO: replace this with inplace ring buffer (no copy) */
int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
uint64_t *request_id)
{
@@ -256,10 +244,16 @@ int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
if (request_id)
*request_id = pkt.xactid;
- /* Read data and skip the header */
- return vmbus_read_and_signal(chan, data, dlen, hlen);
+ /* Read data and skip packet header */
+ error = vmbus_rxbr_read(&chan->rxbr, data, dlen, hlen);
+ if (error)
+ return error;
+
+ rte_vmbus_chan_signal_read(chan, dlen + hlen + sizeof(uint64_t));
+ return 0;
}
+/* TODO: replace this with inplace ring buffer (no copy) */
int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
void *data, uint32_t *len)
{
@@ -291,8 +285,13 @@ int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
if (unlikely(dlen > bufferlen))
return -ENOBUFS;
- /* Put packet header in data buffer */
- return vmbus_read_and_signal(chan, data, dlen, 0);
+ /* Read data and skip packet header */
+ error = vmbus_rxbr_read(&chan->rxbr, data, dlen, 0);
+ if (error)
+ return error;
+
+ /* Return the number of bytes read */
+ return dlen + sizeof(uint64_t);
}
int vmbus_chan_create(const struct rte_vmbus_device *device,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 17cebeb74456..38c1612a6ac6 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -40,7 +40,7 @@
#define HN_TXCOPY_THRESHOLD 512
#define HN_RXCOPY_THRESHOLD 256
-#define HN_RXQ_EVENT_DEFAULT 1024
+#define HN_RXQ_EVENT_DEFAULT 2048
struct hn_rxinfo {
uint32_t vlan_info;
@@ -709,7 +709,8 @@ struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
{
struct hn_rx_queue *rxq;
- rxq = rte_zmalloc_socket("HN_RXQ", sizeof(*rxq),
+ rxq = rte_zmalloc_socket("HN_RXQ",
+ sizeof(*rxq) + HN_RXQ_EVENT_DEFAULT,
RTE_CACHE_LINE_SIZE, socket_id);
if (rxq) {
rxq->hv = hv;
@@ -717,16 +718,6 @@ struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
rte_spinlock_init(&rxq->ring_lock);
rxq->port_id = hv->port_id;
rxq->queue_id = queue_id;
-
- rxq->event_sz = HN_RXQ_EVENT_DEFAULT;
- rxq->event_buf = rte_malloc_socket("RX_EVENTS",
- rxq->event_sz,
- RTE_CACHE_LINE_SIZE,
- socket_id);
- if (!rxq->event_buf) {
- rte_free(rxq);
- rxq = NULL;
- }
}
return rxq;
}
@@ -835,6 +826,7 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
{
struct rte_eth_dev *dev = &rte_eth_devices[hv->port_id];
struct hn_rx_queue *rxq;
+ uint32_t bytes_read = 0;
int ret = 0;
rxq = queue_id == 0 ? hv->primary : dev->data->rx_queues[queue_id];
@@ -852,34 +844,21 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
for (;;) {
const struct vmbus_chanpkt_hdr *pkt;
- uint32_t len = rxq->event_sz;
+ uint32_t len = HN_RXQ_EVENT_DEFAULT;
const void *data;
ret = rte_vmbus_chan_recv_raw(rxq->chan, rxq->event_buf, &len);
if (ret == -EAGAIN)
break; /* ring is empty */
- if (ret == -ENOBUFS) {
- /* expanded buffer needed */
- len = rte_align32pow2(len);
- PMD_DRV_LOG(DEBUG, "expand event buf to %u", len);
-
- rxq->event_buf = rte_realloc(rxq->event_buf,
- len, RTE_CACHE_LINE_SIZE);
- if (rxq->event_buf) {
- rxq->event_sz = len;
- continue;
- }
-
- rte_exit(EXIT_FAILURE, "can not expand event buf!\n");
- break;
- }
-
- if (ret != 0) {
- PMD_DRV_LOG(ERR, "vmbus ring buffer error: %d", ret);
- break;
- }
+ else if (ret == -ENOBUFS)
+ rte_exit(EXIT_FAILURE, "event buffer not big enough (%u < %u)",
+ HN_RXQ_EVENT_DEFAULT, len);
+ else if (ret <= 0)
+ rte_exit(EXIT_FAILURE,
+ "vmbus ring buffer error: %d", ret);
+ bytes_read += ret;
pkt = (const struct vmbus_chanpkt_hdr *)rxq->event_buf;
data = (char *)rxq->event_buf + vmbus_chanpkt_getlen(pkt->hlen);
@@ -904,6 +883,10 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
if (rxq->rx_ring && rte_ring_full(rxq->rx_ring))
break;
}
+
+ if (bytes_read > 0)
+ rte_vmbus_chan_signal_read(rxq->chan, bytes_read);
+
rte_spinlock_unlock(&rxq->ring_lock);
}
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 3f3b442697af..f7ff8585bc1c 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -69,7 +69,6 @@ struct hn_rx_queue {
struct vmbus_channel *chan;
struct rte_mempool *mb_pool;
struct rte_ring *rx_ring;
- void *event_buf;
rte_spinlock_t ring_lock;
uint32_t event_sz;
@@ -77,6 +76,8 @@ struct hn_rx_queue {
uint16_t queue_id;
struct hn_stats stats;
uint64_t ring_full;
+
+ uint8_t event_buf[];
};
--
2.18.0
* Re: [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes
2018-07-24 21:08 [dpdk-dev] [PATCH 0/4] netvsc PMD performance fixes Stephen Hemminger
` (3 preceding siblings ...)
2018-07-24 21:08 ` [dpdk-dev] [PATCH 4/4] netvsc/vmbus: avoid signalling host on read Stephen Hemminger
@ 2018-08-05 9:14 ` Thomas Monjalon
4 siblings, 0 replies; 6+ messages in thread
From: Thomas Monjalon @ 2018-08-05 9:14 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
24/07/2018 23:08, Stephen Hemminger:
> The netvsc PMD is faster than the kernel driver but is still slow
> at receiving packets. These patches help.
>
> Stephen Hemminger (4):
> netvsc: change rx descriptor setup and sizing
> netvsc: avoid over filling receive descriptor ring
> netvsc: implement queue info get handles
> netvsc/vmbus: avoid signalling host on read
Applied, thanks