* [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
@ 2018-02-03 3:11 Mallesh Koujalagi
2018-03-05 15:24 ` Ferruh Yigit
2018-03-08 23:40 ` [dpdk-dev] [PATCH v2] net/null: support bulk allocation Mallesh Koujalagi
0 siblings, 2 replies; 8+ messages in thread
From: Mallesh Koujalagi @ 2018-02-03 3:11 UTC (permalink / raw)
To: dev; +Cc: mtetsuyah, ferruh.yigit, malleshx.koujalagi
Bulk allocation and freeing of multiple mbufs increases throughput by more
than ~2% on a single core.
Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
---
drivers/net/null/rte_eth_null.c | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 9385ffd..247ede0 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -130,10 +130,11 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return 0;
packet_size = h->internals->packet_size;
+
+ if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+ return 0;
+
for (i = 0; i < nb_bufs; i++) {
- bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
- if (!bufs[i])
- break;
rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
packet_size);
bufs[i]->data_len = (uint16_t)packet_size;
@@ -149,18 +150,15 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
static uint16_t
eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
- int i;
struct null_queue *h = q;
if ((q == NULL) || (bufs == NULL))
return 0;
- for (i = 0; i < nb_bufs; i++)
- rte_pktmbuf_free(bufs[i]);
+ rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, nb_bufs);
+ rte_atomic64_add(&h->tx_pkts, nb_bufs);
- rte_atomic64_add(&(h->tx_pkts), i);
-
- return i;
+ return nb_bufs;
}
static uint16_t
--
2.7.4
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
2018-02-03 3:11 [dpdk-dev] [PATCH] net/null: Support bulk alloc and free Mallesh Koujalagi
@ 2018-03-05 15:24 ` Ferruh Yigit
2018-03-05 15:36 ` Ananyev, Konstantin
2018-03-08 23:40 ` [dpdk-dev] [PATCH v2] net/null: support bulk allocation Mallesh Koujalagi
1 sibling, 1 reply; 8+ messages in thread
From: Ferruh Yigit @ 2018-03-05 15:24 UTC (permalink / raw)
To: Mallesh Koujalagi, dev; +Cc: mtetsuyah
On 2/3/2018 3:11 AM, Mallesh Koujalagi wrote:
> Bulk allocation and freeing of multiple mbufs increases throughput by more
> than ~2% on a single core.
>
> Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
> ---
> drivers/net/null/rte_eth_null.c | 16 +++++++---------
> 1 file changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
> index 9385ffd..247ede0 100644
> --- a/drivers/net/null/rte_eth_null.c
> +++ b/drivers/net/null/rte_eth_null.c
> @@ -130,10 +130,11 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> return 0;
>
> packet_size = h->internals->packet_size;
> +
> + if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
> + return 0;
> +
> for (i = 0; i < nb_bufs; i++) {
> - bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
> - if (!bufs[i])
> - break;
> rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
> packet_size);
> bufs[i]->data_len = (uint16_t)packet_size;
> @@ -149,18 +150,15 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> static uint16_t
> eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> {
> - int i;
> struct null_queue *h = q;
>
> if ((q == NULL) || (bufs == NULL))
> return 0;
>
> - for (i = 0; i < nb_bufs; i++)
> - rte_pktmbuf_free(bufs[i]);
> + rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, nb_bufs);
Is it guaranteed that all mbufs will be from the same mempool?
> + rte_atomic64_add(&h->tx_pkts, nb_bufs);
>
> - rte_atomic64_add(&(h->tx_pkts), i);
> -
> - return i;
> + return nb_bufs;
> }
>
> static uint16_t
>
* Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
2018-03-05 15:24 ` Ferruh Yigit
@ 2018-03-05 15:36 ` Ananyev, Konstantin
2018-03-07 10:57 ` Ferruh Yigit
0 siblings, 1 reply; 8+ messages in thread
From: Ananyev, Konstantin @ 2018-03-05 15:36 UTC (permalink / raw)
To: Yigit, Ferruh, Koujalagi, MalleshX, dev; +Cc: mtetsuyah
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
> Sent: Monday, March 5, 2018 3:25 PM
> To: Koujalagi, MalleshX <malleshx.koujalagi@intel.com>; dev@dpdk.org
> Cc: mtetsuyah@gmail.com
> Subject: Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
>
> On 2/3/2018 3:11 AM, Mallesh Koujalagi wrote:
> > Bulk allocation and freeing of multiple mbufs increases throughput by more
> > than ~2% on a single core.
> >
> > Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
> > ---
> > drivers/net/null/rte_eth_null.c | 16 +++++++---------
> > 1 file changed, 7 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
> > index 9385ffd..247ede0 100644
> > --- a/drivers/net/null/rte_eth_null.c
> > +++ b/drivers/net/null/rte_eth_null.c
> > @@ -130,10 +130,11 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> > return 0;
> >
> > packet_size = h->internals->packet_size;
> > +
> > + if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
> > + return 0;
> > +
> > for (i = 0; i < nb_bufs; i++) {
> > - bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
> > - if (!bufs[i])
> > - break;
> > rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
> > packet_size);
> > bufs[i]->data_len = (uint16_t)packet_size;
> > @@ -149,18 +150,15 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> > static uint16_t
> > eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> > {
> > - int i;
> > struct null_queue *h = q;
> >
> > if ((q == NULL) || (bufs == NULL))
> > return 0;
> >
> > - for (i = 0; i < nb_bufs; i++)
> > - rte_pktmbuf_free(bufs[i]);
> > + rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, nb_bufs);
>
> Is it guaranteed that all mbufs will be from the same mempool?
I don't think it is guaranteed, plus
rte_pktmbuf_free(mb) != rte_mempool_put_bulk(mb->pool, &mb, 1);
Konstantin
>
> > + rte_atomic64_add(&h->tx_pkts, nb_bufs);
> >
> > - rte_atomic64_add(&(h->tx_pkts), i);
> > -
> > - return i;
> > + return nb_bufs;
> > }
> >
> > static uint16_t
> >
* Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
2018-03-05 15:36 ` Ananyev, Konstantin
@ 2018-03-07 10:57 ` Ferruh Yigit
2018-03-08 21:29 ` Koujalagi, MalleshX
0 siblings, 1 reply; 8+ messages in thread
From: Ferruh Yigit @ 2018-03-07 10:57 UTC (permalink / raw)
To: Ananyev, Konstantin, Koujalagi, MalleshX, dev; +Cc: mtetsuyah
On 3/5/2018 3:36 PM, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
>> Sent: Monday, March 5, 2018 3:25 PM
>> To: Koujalagi, MalleshX <malleshx.koujalagi@intel.com>; dev@dpdk.org
>> Cc: mtetsuyah@gmail.com
>> Subject: Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
>>
>> On 2/3/2018 3:11 AM, Mallesh Koujalagi wrote:
>>> Bulk allocation and freeing of multiple mbufs increases throughput by more
>>> than ~2% on a single core.
>>>
>>> Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
>>> ---
>>> drivers/net/null/rte_eth_null.c | 16 +++++++---------
>>> 1 file changed, 7 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
>>> index 9385ffd..247ede0 100644
>>> --- a/drivers/net/null/rte_eth_null.c
>>> +++ b/drivers/net/null/rte_eth_null.c
>>> @@ -130,10 +130,11 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>> return 0;
>>>
>>> packet_size = h->internals->packet_size;
>>> +
>>> + if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
>>> + return 0;
>>> +
>>> for (i = 0; i < nb_bufs; i++) {
>>> - bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
>>> - if (!bufs[i])
>>> - break;
>>> rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
>>> packet_size);
>>> bufs[i]->data_len = (uint16_t)packet_size;
>>> @@ -149,18 +150,15 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>> static uint16_t
>>> eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>> {
>>> - int i;
>>> struct null_queue *h = q;
>>>
>>> if ((q == NULL) || (bufs == NULL))
>>> return 0;
>>>
>>> - for (i = 0; i < nb_bufs; i++)
>>> - rte_pktmbuf_free(bufs[i]);
>>> + rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, nb_bufs);
>>
>> Is it guaranteed that all mbufs will be from the same mempool?
>
> I don't think it is guaranteed, plus
> rte_pktmbuf_free(mb) != rte_mempool_put_bulk(mb->pool, &mb, 1);
Perhaps we can just benefit from bulk alloc.
Hi Mallesh,
Does it give any performance improvement if we switch "rte_pktmbuf_alloc()" to
"rte_pktmbuf_alloc_bulk()" but keep free functions untouched?
Thanks,
ferruh
> Konstantin
>
>>
>>> + rte_atomic64_add(&h->tx_pkts, nb_bufs);
>>>
>>> - rte_atomic64_add(&(h->tx_pkts), i);
>>> -
>>> - return i;
>>> + return nb_bufs;
>>> }
>>>
>>> static uint16_t
>>>
>
* Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
2018-03-07 10:57 ` Ferruh Yigit
@ 2018-03-08 21:29 ` Koujalagi, MalleshX
0 siblings, 0 replies; 8+ messages in thread
From: Koujalagi, MalleshX @ 2018-03-08 21:29 UTC (permalink / raw)
To: Yigit, Ferruh, Ananyev, Konstantin, dev; +Cc: mtetsuyah
Hi Ferruh,
Bulk allocation does give a benefit; I will check how much and provide an
updated patch.
Best regards
-/Mallesh
-----Original Message-----
From: Yigit, Ferruh
Sent: Wednesday, March 7, 2018 2:57 AM
To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Koujalagi, MalleshX <malleshx.koujalagi@intel.com>; dev@dpdk.org
Cc: mtetsuyah@gmail.com
Subject: Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
On 3/5/2018 3:36 PM, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
>> Sent: Monday, March 5, 2018 3:25 PM
>> To: Koujalagi, MalleshX <malleshx.koujalagi@intel.com>; dev@dpdk.org
>> Cc: mtetsuyah@gmail.com
>> Subject: Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
>>
>> On 2/3/2018 3:11 AM, Mallesh Koujalagi wrote:
>>> Bulk allocation and freeing of multiple mbufs increases throughput by more
>>> than ~2% on a single core.
>>>
>>> Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
>>> ---
>>> drivers/net/null/rte_eth_null.c | 16 +++++++---------
>>> 1 file changed, 7 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/drivers/net/null/rte_eth_null.c
>>> b/drivers/net/null/rte_eth_null.c index 9385ffd..247ede0 100644
>>> --- a/drivers/net/null/rte_eth_null.c
>>> +++ b/drivers/net/null/rte_eth_null.c
>>> @@ -130,10 +130,11 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>> return 0;
>>>
>>> packet_size = h->internals->packet_size;
>>> +
>>> + if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
>>> + return 0;
>>> +
>>> for (i = 0; i < nb_bufs; i++) {
>>> - bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
>>> - if (!bufs[i])
>>> - break;
>>> rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
>>> packet_size);
>>> bufs[i]->data_len = (uint16_t)packet_size; @@ -149,18 +150,15 @@
>>> eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>> static uint16_t eth_null_tx(void *q, struct rte_mbuf **bufs,
>>> uint16_t nb_bufs) {
>>> - int i;
>>> struct null_queue *h = q;
>>>
>>> if ((q == NULL) || (bufs == NULL))
>>> return 0;
>>>
>>> - for (i = 0; i < nb_bufs; i++)
>>> - rte_pktmbuf_free(bufs[i]);
>>> + rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, nb_bufs);
>>
>> Is it guaranteed that all mbufs will be from the same mempool?
>
> I don't think it is guaranteed, plus
> rte_pktmbuf_free(mb) != rte_mempool_put_bulk(mb->pool, &mb, 1);
Perhaps we can just benefit from bulk alloc.
Hi Mallesh,
Does it give any performance improvement if we switch "rte_pktmbuf_alloc()" to "rte_pktmbuf_alloc_bulk()" but keep free functions untouched?
Thanks,
ferruh
> Konstantin
>
>>
>>> + rte_atomic64_add(&h->tx_pkts, nb_bufs);
>>>
>>> - rte_atomic64_add(&(h->tx_pkts), i);
>>> -
>>> - return i;
>>> + return nb_bufs;
>>> }
>>>
>>> static uint16_t
>>>
>
* [dpdk-dev] [PATCH v2] net/null: support bulk allocation
2018-02-03 3:11 [dpdk-dev] [PATCH] net/null: Support bulk alloc and free Mallesh Koujalagi
2018-03-05 15:24 ` Ferruh Yigit
@ 2018-03-08 23:40 ` Mallesh Koujalagi
2018-03-09 11:09 ` Ferruh Yigit
1 sibling, 1 reply; 8+ messages in thread
From: Mallesh Koujalagi @ 2018-03-08 23:40 UTC (permalink / raw)
To: dev, ferruh.yigit, konstantin.ananyev; +Cc: mtetsuyah, Mallesh Koujalagi
Bulk allocation of multiple mbufs increases throughput by between ~2% and 8%
on a single core (1.8 GHz), depending on usage. For example:
1. Testpmd case: two null devices with copy, 8% improvement.
testpmd -c 0x3 -n 4 --socket-mem 1024,1024
--vdev 'eth_null0,size=64,copy=1' --vdev 'eth_null1,size=64,copy=1'
-- -i -a --coremask=0x2 --txrst=64 --txfreet=64 --txd=256
--rxd=256 --rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa
2. Ovs switch case: 2% improvement.
$VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
options:dpdk-devargs=eth_null0,size=64,copy=1
$VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
options:dpdk-devargs=eth_null1,size=64,copy=1
Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
---
drivers/net/null/rte_eth_null.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 9385ffd..c019d2d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -105,10 +105,10 @@ eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return 0;
packet_size = h->internals->packet_size;
+ if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+ return 0;
+
for (i = 0; i < nb_bufs; i++) {
- bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
- if (!bufs[i])
- break;
bufs[i]->data_len = (uint16_t)packet_size;
bufs[i]->pkt_len = packet_size;
bufs[i]->port = h->internals->port_id;
@@ -130,10 +130,10 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return 0;
packet_size = h->internals->packet_size;
+ if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+ return 0;
+
for (i = 0; i < nb_bufs; i++) {
- bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
- if (!bufs[i])
- break;
rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
packet_size);
bufs[i]->data_len = (uint16_t)packet_size;
--
2.7.4
* Re: [dpdk-dev] [PATCH v2] net/null: support bulk allocation
2018-03-08 23:40 ` [dpdk-dev] [PATCH v2] net/null: support bulk allocation Mallesh Koujalagi
@ 2018-03-09 11:09 ` Ferruh Yigit
2018-03-16 14:08 ` Ferruh Yigit
0 siblings, 1 reply; 8+ messages in thread
From: Ferruh Yigit @ 2018-03-09 11:09 UTC (permalink / raw)
To: Mallesh Koujalagi, dev, konstantin.ananyev; +Cc: mtetsuyah
On 3/8/2018 11:40 PM, Mallesh Koujalagi wrote:
> Bulk allocation of multiple mbufs increases throughput by between ~2% and 8%
> on a single core (1.8 GHz), depending on usage. For example:
> 1. Testpmd case: two null devices with copy, 8% improvement.
> testpmd -c 0x3 -n 4 --socket-mem 1024,1024
> --vdev 'eth_null0,size=64,copy=1' --vdev 'eth_null1,size=64,copy=1'
> -- -i -a --coremask=0x2 --txrst=64 --txfreet=64 --txd=256
> --rxd=256 --rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa
> 2. Ovs switch case: 2% improvement.
> $VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
> options:dpdk-devargs=eth_null0,size=64,copy=1
> $VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
> options:dpdk-devargs=eth_null1,size=64,copy=1
>
> Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
* Re: [dpdk-dev] [PATCH v2] net/null: support bulk allocation
2018-03-09 11:09 ` Ferruh Yigit
@ 2018-03-16 14:08 ` Ferruh Yigit
0 siblings, 0 replies; 8+ messages in thread
From: Ferruh Yigit @ 2018-03-16 14:08 UTC (permalink / raw)
To: Mallesh Koujalagi, dev, konstantin.ananyev; +Cc: mtetsuyah
On 3/9/2018 11:09 AM, Ferruh Yigit wrote:
> On 3/8/2018 11:40 PM, Mallesh Koujalagi wrote:
>> Bulk allocation of multiple mbufs increases throughput by between ~2% and 8%
>> on a single core (1.8 GHz), depending on usage. For example:
>> 1. Testpmd case: two null devices with copy, 8% improvement.
>> testpmd -c 0x3 -n 4 --socket-mem 1024,1024
>> --vdev 'eth_null0,size=64,copy=1' --vdev 'eth_null1,size=64,copy=1'
>> -- -i -a --coremask=0x2 --txrst=64 --txfreet=64 --txd=256
>> --rxd=256 --rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa
>> 2. Ovs switch case: 2% improvement.
>> $VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
>> options:dpdk-devargs=eth_null0,size=64,copy=1
>> $VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
>> options:dpdk-devargs=eth_null1,size=64,copy=1
>>
>> Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi@intel.com>
>
> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Applied to dpdk-next-net/master, thanks.