* [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements
@ 2018-04-04 13:42 Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 1/4] net/szedata2: fix total stats Matej Vido
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Matej Vido @ 2018-04-04 13:42 UTC (permalink / raw)
To: dev; +Cc: remes
Matej Vido (4):
net/szedata2: fix total stats
net/szedata2: use dynamically allocated queues
net/szedata2: add stat of mbuf allocation failures
net/szedata2: fix format string for pci address
drivers/net/szedata2/rte_eth_szedata2.c | 171 ++++++++++++++++++++++----------
1 file changed, 121 insertions(+), 50 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 8+ messages in thread
* [dpdk-dev] [PATCH 1/4] net/szedata2: fix total stats
2018-04-04 13:42 [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Matej Vido
@ 2018-04-04 13:42 ` Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 2/4] net/szedata2: use dynamically allocated queues Matej Vido
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Matej Vido @ 2018-04-04 13:42 UTC (permalink / raw)
To: dev; +Cc: remes, stable
Counters from all queues have to be summed up for the total stats, even
when the number of per-queue stats counters is not sufficient to cover
all queues.
Fixes: 83556fd2c0fc ("szedata2: change to physical device type")
Cc: stable@dpdk.org
Signed-off-by: Matej Vido <vido@cesnet.cz>
---
drivers/net/szedata2/rte_eth_szedata2.c | 33 ++++++++++++++++++++-------------
1 file changed, 20 insertions(+), 13 deletions(-)
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 3cfe388..fc11d68 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1058,22 +1058,29 @@ struct pmd_internals {
uint64_t tx_err_total = 0;
uint64_t rx_total_bytes = 0;
uint64_t tx_total_bytes = 0;
- const struct pmd_internals *internals = dev->data->dev_private;
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < nb_rx; i++) {
- stats->q_ipackets[i] = internals->rx_queue[i].rx_pkts;
- stats->q_ibytes[i] = internals->rx_queue[i].rx_bytes;
- rx_total += stats->q_ipackets[i];
- rx_total_bytes += stats->q_ibytes[i];
+ for (i = 0; i < nb_rx; i++) {
+ struct szedata2_rx_queue *rxq = dev->data->rx_queues[i];
+
+ if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_ipackets[i] = rxq->rx_pkts;
+ stats->q_ibytes[i] = rxq->rx_bytes;
+ }
+ rx_total += rxq->rx_pkts;
+ rx_total_bytes += rxq->rx_bytes;
}
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && i < nb_tx; i++) {
- stats->q_opackets[i] = internals->tx_queue[i].tx_pkts;
- stats->q_obytes[i] = internals->tx_queue[i].tx_bytes;
- stats->q_errors[i] = internals->tx_queue[i].err_pkts;
- tx_total += stats->q_opackets[i];
- tx_total_bytes += stats->q_obytes[i];
- tx_err_total += stats->q_errors[i];
+ for (i = 0; i < nb_tx; i++) {
+ struct szedata2_tx_queue *txq = dev->data->tx_queues[i];
+
+ if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_opackets[i] = txq->tx_pkts;
+ stats->q_obytes[i] = txq->tx_bytes;
+ stats->q_errors[i] = txq->err_pkts;
+ }
+ tx_total += txq->tx_pkts;
+ tx_total_bytes += txq->tx_bytes;
+ tx_err_total += txq->err_pkts;
}
stats->ipackets = rx_total;
--
1.8.3.1
* [dpdk-dev] [PATCH 2/4] net/szedata2: use dynamically allocated queues
2018-04-04 13:42 [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 1/4] net/szedata2: fix total stats Matej Vido
@ 2018-04-04 13:42 ` Matej Vido
2018-04-06 13:20 ` Ferruh Yigit
2018-04-04 13:42 ` [dpdk-dev] [PATCH 3/4] net/szedata2: add stat of mbuf allocation failures Matej Vido
` (2 subsequent siblings)
4 siblings, 1 reply; 8+ messages in thread
From: Matej Vido @ 2018-04-04 13:42 UTC (permalink / raw)
To: dev; +Cc: remes
Previously the queues were part of the private data structure of the
Ethernet device.
Now the queues are allocated at setup time, so NUMA-aware allocation is
possible.
Signed-off-by: Matej Vido <vido@cesnet.cz>
---
drivers/net/szedata2/rte_eth_szedata2.c | 99 ++++++++++++++++++++++++---------
1 file changed, 74 insertions(+), 25 deletions(-)
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index fc11d68..f41716d 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -86,8 +86,6 @@ struct szedata2_tx_queue {
};
struct pmd_internals {
- struct szedata2_rx_queue rx_queue[RTE_ETH_SZEDATA2_MAX_RX_QUEUES];
- struct szedata2_tx_queue tx_queue[RTE_ETH_SZEDATA2_MAX_TX_QUEUES];
uint16_t max_rx_queues;
uint16_t max_tx_queues;
char sze_dev[PATH_MAX];
@@ -1098,17 +1096,18 @@ struct pmd_internals {
uint16_t i;
uint16_t nb_rx = dev->data->nb_rx_queues;
uint16_t nb_tx = dev->data->nb_tx_queues;
- struct pmd_internals *internals = dev->data->dev_private;
for (i = 0; i < nb_rx; i++) {
- internals->rx_queue[i].rx_pkts = 0;
- internals->rx_queue[i].rx_bytes = 0;
- internals->rx_queue[i].err_pkts = 0;
+ struct szedata2_rx_queue *rxq = dev->data->rx_queues[i];
+ rxq->rx_pkts = 0;
+ rxq->rx_bytes = 0;
+ rxq->err_pkts = 0;
}
for (i = 0; i < nb_tx; i++) {
- internals->tx_queue[i].tx_pkts = 0;
- internals->tx_queue[i].tx_bytes = 0;
- internals->tx_queue[i].err_pkts = 0;
+ struct szedata2_tx_queue *txq = dev->data->tx_queues[i];
+ txq->tx_pkts = 0;
+ txq->tx_bytes = 0;
+ txq->err_pkts = 0;
}
}
@@ -1116,9 +1115,11 @@ struct pmd_internals {
eth_rx_queue_release(void *q)
{
struct szedata2_rx_queue *rxq = (struct szedata2_rx_queue *)q;
- if (rxq->sze != NULL) {
- szedata_close(rxq->sze);
- rxq->sze = NULL;
+
+ if (rxq != NULL) {
+ if (rxq->sze != NULL)
+ szedata_close(rxq->sze);
+ rte_free(rxq);
}
}
@@ -1126,9 +1127,11 @@ struct pmd_internals {
eth_tx_queue_release(void *q)
{
struct szedata2_tx_queue *txq = (struct szedata2_tx_queue *)q;
- if (txq->sze != NULL) {
- szedata_close(txq->sze);
- txq->sze = NULL;
+
+ if (txq != NULL) {
+ if (txq->sze != NULL)
+ szedata_close(txq->sze);
+ rte_free(txq);
}
}
@@ -1263,23 +1266,42 @@ struct pmd_internals {
eth_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t rx_queue_id,
uint16_t nb_rx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
+ unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mb_pool)
{
struct pmd_internals *internals = dev->data->dev_private;
- struct szedata2_rx_queue *rxq = &internals->rx_queue[rx_queue_id];
+ struct szedata2_rx_queue *rxq;
int ret;
uint32_t rx = 1 << rx_queue_id;
uint32_t tx = 0;
+ if (dev->data->rx_queues[rx_queue_id] != NULL) {
+ eth_rx_queue_release(dev->data->rx_queues[rx_queue_id]);
+ dev->data->rx_queues[rx_queue_id] = NULL;
+ }
+
+ rxq = rte_zmalloc_socket("szedata2 rx queue",
+ sizeof(struct szedata2_rx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq == NULL) {
+ RTE_LOG(ERR, PMD, "rte_zmalloc_socket() failed for rx queue id "
+ "%" PRIu16 "!\n", rx_queue_id);
+ return -ENOMEM;
+ }
+
rxq->sze = szedata_open(internals->sze_dev);
- if (rxq->sze == NULL)
+ if (rxq->sze == NULL) {
+ RTE_LOG(ERR, PMD, "szedata_open() failed for rx queue id "
+ "%" PRIu16 "!\n", rx_queue_id);
+ eth_rx_queue_release(rxq);
return -EINVAL;
+ }
ret = szedata_subscribe3(rxq->sze, &rx, &tx);
if (ret != 0 || rx == 0) {
- szedata_close(rxq->sze);
- rxq->sze = NULL;
+ RTE_LOG(ERR, PMD, "szedata_subscribe3() failed for rx queue id "
+ "%" PRIu16 "!\n", rx_queue_id);
+ eth_rx_queue_release(rxq);
return -EINVAL;
}
rxq->rx_channel = rx_queue_id;
@@ -1290,6 +1312,10 @@ struct pmd_internals {
rxq->err_pkts = 0;
dev->data->rx_queues[rx_queue_id] = rxq;
+
+ RTE_LOG(DEBUG, PMD, "Configured rx queue id %" PRIu16 " on socket "
+ "%u.\n", rx_queue_id, socket_id);
+
return 0;
}
@@ -1297,22 +1323,41 @@ struct pmd_internals {
eth_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t tx_queue_id,
uint16_t nb_tx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
+ unsigned int socket_id,
const struct rte_eth_txconf *tx_conf __rte_unused)
{
struct pmd_internals *internals = dev->data->dev_private;
- struct szedata2_tx_queue *txq = &internals->tx_queue[tx_queue_id];
+ struct szedata2_tx_queue *txq;
int ret;
uint32_t rx = 0;
uint32_t tx = 1 << tx_queue_id;
+ if (dev->data->tx_queues[tx_queue_id] != NULL) {
+ eth_tx_queue_release(dev->data->tx_queues[tx_queue_id]);
+ dev->data->tx_queues[tx_queue_id] = NULL;
+ }
+
+ txq = rte_zmalloc_socket("szedata2 tx queue",
+ sizeof(struct szedata2_tx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq == NULL) {
+ RTE_LOG(ERR, PMD, "rte_zmalloc_socket() failed for tx queue id "
+ "%" PRIu16 "!\n", tx_queue_id);
+ return -ENOMEM;
+ }
+
txq->sze = szedata_open(internals->sze_dev);
- if (txq->sze == NULL)
+ if (txq->sze == NULL) {
+ RTE_LOG(ERR, PMD, "szedata_open() failed for tx queue id "
+ "%" PRIu16 "!\n", tx_queue_id);
+ eth_tx_queue_release(txq);
return -EINVAL;
+ }
ret = szedata_subscribe3(txq->sze, &rx, &tx);
if (ret != 0 || tx == 0) {
- szedata_close(txq->sze);
- txq->sze = NULL;
+ RTE_LOG(ERR, PMD, "szedata_subscribe3() failed for tx queue id "
+ "%" PRIu16 "!\n", tx_queue_id);
+ eth_tx_queue_release(txq);
return -EINVAL;
}
txq->tx_channel = tx_queue_id;
@@ -1321,6 +1366,10 @@ struct pmd_internals {
txq->err_pkts = 0;
dev->data->tx_queues[tx_queue_id] = txq;
+
+ RTE_LOG(DEBUG, PMD, "Configured tx queue id %" PRIu16 " on socket "
+ "%u.\n", tx_queue_id, socket_id);
+
return 0;
}
--
1.8.3.1
* [dpdk-dev] [PATCH 3/4] net/szedata2: add stat of mbuf allocation failures
2018-04-04 13:42 [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 1/4] net/szedata2: fix total stats Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 2/4] net/szedata2: use dynamically allocated queues Matej Vido
@ 2018-04-04 13:42 ` Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 4/4] net/szedata2: fix format string for pci address Matej Vido
2018-04-06 13:53 ` [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Ferruh Yigit
4 siblings, 0 replies; 8+ messages in thread
From: Matej Vido @ 2018-04-04 13:42 UTC (permalink / raw)
To: dev; +Cc: remes
Signed-off-by: Matej Vido <vido@cesnet.cz>
---
drivers/net/szedata2/rte_eth_szedata2.c | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index f41716d..8278780 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -67,7 +67,16 @@
#define SZEDATA2_DEV_PATH_FMT "/dev/szedataII%u"
+struct pmd_internals {
+ struct rte_eth_dev *dev;
+ uint16_t max_rx_queues;
+ uint16_t max_tx_queues;
+ char sze_dev[PATH_MAX];
+ struct rte_mem_resource *pci_rsc;
+};
+
struct szedata2_rx_queue {
+ struct pmd_internals *priv;
struct szedata *sze;
uint8_t rx_channel;
uint16_t in_port;
@@ -78,6 +87,7 @@ struct szedata2_rx_queue {
};
struct szedata2_tx_queue {
+ struct pmd_internals *priv;
struct szedata *sze;
uint8_t tx_channel;
volatile uint64_t tx_pkts;
@@ -85,13 +95,6 @@ struct szedata2_tx_queue {
volatile uint64_t err_pkts;
};
-struct pmd_internals {
- uint16_t max_rx_queues;
- uint16_t max_tx_queues;
- char sze_dev[PATH_MAX];
- struct rte_mem_resource *pci_rsc;
-};
-
static struct ether_addr eth_addr = {
.addr_bytes = { 0x00, 0x11, 0x17, 0x00, 0x00, 0x00 }
};
@@ -130,8 +133,10 @@ struct pmd_internals {
for (i = 0; i < nb_pkts; i++) {
mbuf = rte_pktmbuf_alloc(sze_q->mb_pool);
- if (unlikely(mbuf == NULL))
+ if (unlikely(mbuf == NULL)) {
+ sze_q->priv->dev->data->rx_mbuf_alloc_failed++;
break;
+ }
/* get the next sze packet */
if (sze->ct_rx_lck != NULL && !sze->ct_rx_rem_bytes &&
@@ -351,6 +356,8 @@ struct pmd_internals {
uint16_t packet_len1 = 0;
uint16_t packet_len2 = 0;
uint16_t hw_data_align;
+ uint64_t *mbuf_failed_ptr =
+ &sze_q->priv->dev->data->rx_mbuf_alloc_failed;
if (unlikely(sze_q->sze == NULL || nb_pkts == 0))
return 0;
@@ -538,6 +545,7 @@ struct pmd_internals {
sze->ct_rx_lck = ct_rx_lck_backup;
sze->ct_rx_rem_bytes = ct_rx_rem_bytes_backup;
sze->ct_rx_cur_ptr = ct_rx_cur_ptr_backup;
+ sze_q->priv->dev->data->rx_mbuf_alloc_failed++;
break;
}
@@ -587,6 +595,7 @@ struct pmd_internals {
ct_rx_rem_bytes_backup;
sze->ct_rx_cur_ptr =
ct_rx_cur_ptr_backup;
+ (*mbuf_failed_ptr)++;
goto finish;
}
@@ -630,6 +639,7 @@ struct pmd_internals {
ct_rx_rem_bytes_backup;
sze->ct_rx_cur_ptr =
ct_rx_cur_ptr_backup;
+ (*mbuf_failed_ptr)++;
goto finish;
}
@@ -1086,6 +1096,7 @@ struct pmd_internals {
stats->ibytes = rx_total_bytes;
stats->obytes = tx_total_bytes;
stats->oerrors = tx_err_total;
+ stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
return 0;
}
@@ -1290,6 +1301,7 @@ struct pmd_internals {
return -ENOMEM;
}
+ rxq->priv = internals;
rxq->sze = szedata_open(internals->sze_dev);
if (rxq->sze == NULL) {
RTE_LOG(ERR, PMD, "szedata_open() failed for rx queue id "
@@ -1346,6 +1358,7 @@ struct pmd_internals {
return -ENOMEM;
}
+ txq->priv = internals;
txq->sze = szedata_open(internals->sze_dev);
if (txq->sze == NULL) {
RTE_LOG(ERR, PMD, "szedata_open() failed for tx queue id "
@@ -1543,6 +1556,8 @@ struct pmd_internals {
pci_addr->domain, pci_addr->bus, pci_addr->devid,
pci_addr->function);
+ internals->dev = dev;
+
/* Get index of szedata2 device file and create path to device file */
ret = get_szedata2_index(pci_addr, &szedata2_index);
if (ret != 0) {
--
1.8.3.1
* [dpdk-dev] [PATCH 4/4] net/szedata2: fix format string for pci address
2018-04-04 13:42 [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Matej Vido
` (2 preceding siblings ...)
2018-04-04 13:42 ` [dpdk-dev] [PATCH 3/4] net/szedata2: add stat of mbuf allocation failures Matej Vido
@ 2018-04-04 13:42 ` Matej Vido
2018-04-06 13:53 ` [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Ferruh Yigit
4 siblings, 0 replies; 8+ messages in thread
From: Matej Vido @ 2018-04-04 13:42 UTC (permalink / raw)
To: dev; +Cc: remes, stable
For the fscanf() function, SCN macros should be used, but PRI macros
were wrongly used.
Also use the correct sizes of variables for the read values.
Fixes: 83556fd2c0fc ("szedata2: change to physical device type")
Cc: stable@dpdk.org
Signed-off-by: Matej Vido <vido@cesnet.cz>
---
drivers/net/szedata2/rte_eth_szedata2.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 8278780..04dc8bf 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1488,9 +1488,9 @@ struct szedata2_tx_queue {
FILE *fd;
char pcislot_path[PATH_MAX];
uint32_t domain;
- uint32_t bus;
- uint32_t devid;
- uint32_t function;
+ uint8_t bus;
+ uint8_t devid;
+ uint8_t function;
dir = opendir("/sys/class/combo");
if (dir == NULL)
@@ -1515,7 +1515,7 @@ struct szedata2_tx_queue {
if (fd == NULL)
continue;
- ret = fscanf(fd, "%4" PRIx16 ":%2" PRIx8 ":%2" PRIx8 ".%" PRIx8,
+ ret = fscanf(fd, "%8" SCNx32 ":%2" SCNx8 ":%2" SCNx8 ".%" SCNx8,
&domain, &bus, &devid, &function);
fclose(fd);
if (ret != 4)
--
1.8.3.1
* Re: [dpdk-dev] [PATCH 2/4] net/szedata2: use dynamically allocated queues
2018-04-04 13:42 ` [dpdk-dev] [PATCH 2/4] net/szedata2: use dynamically allocated queues Matej Vido
@ 2018-04-06 13:20 ` Ferruh Yigit
2018-04-06 13:51 ` Ferruh Yigit
0 siblings, 1 reply; 8+ messages in thread
From: Ferruh Yigit @ 2018-04-06 13:20 UTC (permalink / raw)
To: Matej Vido, dev
Cc: remes, Thomas Monjalon, Konstantin Ananyev, Jerin Jacob,
Bruce Richardson
On 4/4/2018 2:42 PM, Matej Vido wrote:
> Previously the queues were part of the private data structure of the
> Ethernet device.
> Now the queues are allocated at setup time, so NUMA-aware allocation is
> possible.
Hi Matej,
Yes, by default [rt]x_queues are allocated via rte_zmalloc, which uses SOCKET_ID_ANY.
And in burst functions, we do:
nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id], rx_pkts, nb_pkts);
So there is an access to rx_queues in each rte_eth_rx_burst() call.
I wonder if you observe any performance difference with this update?
And what about moving to the ethdev layer instead of keeping local to the PMD?
>
> Signed-off-by: Matej Vido <vido@cesnet.cz>
<...>
* Re: [dpdk-dev] [PATCH 2/4] net/szedata2: use dynamically allocated queues
2018-04-06 13:20 ` Ferruh Yigit
@ 2018-04-06 13:51 ` Ferruh Yigit
0 siblings, 0 replies; 8+ messages in thread
From: Ferruh Yigit @ 2018-04-06 13:51 UTC (permalink / raw)
To: Matej Vido, dev
Cc: remes, Thomas Monjalon, Konstantin Ananyev, Jerin Jacob,
Bruce Richardson
On 4/6/2018 2:20 PM, Ferruh Yigit wrote:
> On 4/4/2018 2:42 PM, Matej Vido wrote:
>> Previously the queues were part of the private data structure of the
>> Ethernet device.
>> Now the queues are allocated at setup time, so NUMA-aware allocation is
>> possible.
>
> Hi Matej,
>
> Yes, by default [rt]x_queues are allocated via rte_zmalloc, which uses SOCKET_ID_ANY.
>
> And in burst functions, we do:
> nb_rx = (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id], rx_pkts, nb_pkts);
>
> So there is an access to rx_queues in each rte_eth_rx_burst() call.
>
> I wonder if you observe any performance difference with this update?
> And what about moving to the ethdev layer instead of keeping local to the PMD?
Forget about it, I thought you were allocating data->[rt]x_queues on a
specific socket, but this is just allocating the queues on a specific
socket, which is OK.
Still, I would like to hear comments on whether allocating the
data->[rt]x_queues arrays on a specific socket helps performance.
>
>>
>> Signed-off-by: Matej Vido <vido@cesnet.cz>
>
> <...>
>
* Re: [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements
2018-04-04 13:42 [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Matej Vido
` (3 preceding siblings ...)
2018-04-04 13:42 ` [dpdk-dev] [PATCH 4/4] net/szedata2: fix format string for pci address Matej Vido
@ 2018-04-06 13:53 ` Ferruh Yigit
4 siblings, 0 replies; 8+ messages in thread
From: Ferruh Yigit @ 2018-04-06 13:53 UTC (permalink / raw)
To: Matej Vido, dev; +Cc: remes
On 4/4/2018 2:42 PM, Matej Vido wrote:
> Matej Vido (4):
> net/szedata2: fix total stats
> net/szedata2: use dynamically allocated queues
> net/szedata2: add stat of mbuf allocation failures
> net/szedata2: fix format string for pci address
Series applied to dpdk-next-net/master, thanks.
end of thread, other threads:[~2018-04-06 13:53 UTC | newest]
Thread overview: 8+ messages
2018-04-04 13:42 [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 1/4] net/szedata2: fix total stats Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 2/4] net/szedata2: use dynamically allocated queues Matej Vido
2018-04-06 13:20 ` Ferruh Yigit
2018-04-06 13:51 ` Ferruh Yigit
2018-04-04 13:42 ` [dpdk-dev] [PATCH 3/4] net/szedata2: add stat of mbuf allocation failures Matej Vido
2018-04-04 13:42 ` [dpdk-dev] [PATCH 4/4] net/szedata2: fix format string for pci address Matej Vido
2018-04-06 13:53 ` [dpdk-dev] [PATCH 0/4] net/szedata2: fixes or improvements Ferruh Yigit