* [dpdk-dev] [PATCH] mbuf: reduce pktmbuf init cycles
@ 2017-06-05 16:38 Jerin Jacob
2017-06-23 9:42 ` Olivier Matz
2017-06-27 11:57 ` [dpdk-dev] [PATCH v2] " Jerin Jacob
0 siblings, 2 replies; 6+ messages in thread
From: Jerin Jacob @ 2017-06-05 16:38 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, Jerin Jacob
There is no need to initialize the complete packet buffer with zero,
as the packet data area will be overwritten by the NIC Rx HW anyway.

The testpmd application configures the packet mempool with around
180k buffers of 2176B each. In the existing scheme, the init routine
needs to memset ~370MB, whereas the proposed scheme needs only ~44MB
on a 128B cache-aligned system.

This is useful when running DPDK on HW simulators/emulators, where
millions of cycles have an impact on boot time.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
lib/librte_mbuf/rte_mbuf.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 0e3e36a58..1d5ce7816 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -131,8 +131,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
RTE_ASSERT(mp->elt_size >= mbuf_size);
RTE_ASSERT(buf_len <= UINT16_MAX);
- memset(m, 0, mp->elt_size);
-
+ memset(m, 0, mbuf_size + RTE_PKTMBUF_HEADROOM);
/* start of buffer is after mbuf structure and priv data */
m->priv_size = priv_size;
m->buf_addr = (char *)m + mbuf_size;
--
2.13.0
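
For reference, the savings figures in the commit message work out
roughly as follows, assuming a 128B sizeof(struct rte_mbuf), no
application private area (priv_size = 0) and the default 128B
RTE_PKTMBUF_HEADROOM:

  180,000 buffers * 2176B (mp->elt_size)           ~= 370MB  (before)
  180,000 buffers *  256B (mbuf_size + headroom)   ~=  44MB  (this patch)
  180,000 buffers *  128B (mbuf_size only)         ~=  22MB  (the v2 below)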
* Re: [dpdk-dev] [PATCH] mbuf: reduce pktmbuf init cycles
2017-06-05 16:38 [dpdk-dev] [PATCH] mbuf: reduce pktmbuf init cycles Jerin Jacob
@ 2017-06-23 9:42 ` Olivier Matz
2017-06-23 10:06 ` Jerin Jacob
2017-06-27 11:57 ` [dpdk-dev] [PATCH v2] " Jerin Jacob
1 sibling, 1 reply; 6+ messages in thread
From: Olivier Matz @ 2017-06-23 9:42 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev
On Mon, 5 Jun 2017 22:08:07 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> There is no need to initialize the complete packet buffer with zero,
> as the packet data area will be overwritten by the NIC Rx HW anyway.
>
> The testpmd application configures the packet mempool with around
> 180k buffers of 2176B each. In the existing scheme, the init routine
> needs to memset ~370MB, whereas the proposed scheme needs only ~44MB
> on a 128B cache-aligned system.
>
> This is useful when running DPDK on HW simulators/emulators, where
> millions of cycles have an impact on boot time.
>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
> lib/librte_mbuf/rte_mbuf.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 0e3e36a58..1d5ce7816 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -131,8 +131,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
> RTE_ASSERT(mp->elt_size >= mbuf_size);
> RTE_ASSERT(buf_len <= UINT16_MAX);
>
> - memset(m, 0, mp->elt_size);
> -
> + memset(m, 0, mbuf_size + RTE_PKTMBUF_HEADROOM);
> /* start of buffer is after mbuf structure and priv data */
> m->priv_size = priv_size;
> m->buf_addr = (char *)m + mbuf_size;
Yes, I don't foresee any risk in doing that.

I'm just wondering why RTE_PKTMBUF_HEADROOM should be zeroed.
For example, rte_pktmbuf_free() does not touch the data either, so
after some packet processing we also have garbage data in the
headroom.
Olivier
* Re: [dpdk-dev] [PATCH] mbuf: reduce pktmbuf init cycles
2017-06-23 9:42 ` Olivier Matz
@ 2017-06-23 10:06 ` Jerin Jacob
0 siblings, 0 replies; 6+ messages in thread
From: Jerin Jacob @ 2017-06-23 10:06 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev
-----Original Message-----
> Date: Fri, 23 Jun 2017 11:42:30 +0200
> From: Olivier Matz <olivier.matz@6wind.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mbuf: reduce pktmbuf init cycles
> X-Mailer: Claws Mail 3.14.1 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
>
> On Mon, 5 Jun 2017 22:08:07 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> > There is no need to initialize the complete packet buffer with zero,
> > as the packet data area will be overwritten by the NIC Rx HW anyway.
> >
> > The testpmd application configures the packet mempool with around
> > 180k buffers of 2176B each. In the existing scheme, the init routine
> > needs to memset ~370MB, whereas the proposed scheme needs only ~44MB
> > on a 128B cache-aligned system.
> >
> > This is useful when running DPDK on HW simulators/emulators, where
> > millions of cycles have an impact on boot time.
> >
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> > lib/librte_mbuf/rte_mbuf.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> > index 0e3e36a58..1d5ce7816 100644
> > --- a/lib/librte_mbuf/rte_mbuf.c
> > +++ b/lib/librte_mbuf/rte_mbuf.c
> > @@ -131,8 +131,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
> > RTE_ASSERT(mp->elt_size >= mbuf_size);
> > RTE_ASSERT(buf_len <= UINT16_MAX);
> >
> > - memset(m, 0, mp->elt_size);
> > -
> > + memset(m, 0, mbuf_size + RTE_PKTMBUF_HEADROOM);
> > /* start of buffer is after mbuf structure and priv data */
> > m->priv_size = priv_size;
> > m->buf_addr = (char *)m + mbuf_size;
>
> Yes, I don't foresee any risk in doing that.
>
> I'm just wondering why RTE_PKTMBUF_HEADROOM should be zeroed.
> For example, rte_pktmbuf_free() does not touch the data either, so
> after some packet processing we also have garbage data in the
> headroom.
Yes. The headroom can be garbage, as applications pull the packet
offset up and write a new header in encapsulation use cases.

I will send a v2 that clears only the mbuf area.
>
> Olivier
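
To illustrate the encapsulation case mentioned above, here is a
minimal sketch; the header type and function are hypothetical and
only rte_pktmbuf_prepend() is a real mbuf API. It shows how the
headroom is consumed by the application, which is why pre-zeroing
it at pool-init time buys nothing:

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical 8-byte tunnel header, for illustration only. */
struct tunnel_hdr {
        uint32_t tunnel_id;
        uint32_t flags;
};

static int
encap_packet(struct rte_mbuf *m, uint32_t tunnel_id)
{
        struct tunnel_hdr *hdr;

        /* Grow the packet into the headroom; returns NULL if the
         * remaining headroom is smaller than the requested length. */
        hdr = (struct tunnel_hdr *)rte_pktmbuf_prepend(m, sizeof(*hdr));
        if (hdr == NULL)
                return -1;

        /* The application fully initializes the bytes it prepends,
         * so any zeroes written here at init time would be overwritten. */
        hdr->tunnel_id = tunnel_id;
        hdr->flags = 0;
        return 0;
}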
* [dpdk-dev] [PATCH v2] mbuf: reduce pktmbuf init cycles
2017-06-05 16:38 [dpdk-dev] [PATCH] mbuf: reduce pktmbuf init cycles Jerin Jacob
2017-06-23 9:42 ` Olivier Matz
@ 2017-06-27 11:57 ` Jerin Jacob
2017-06-30 12:27 ` Olivier Matz
1 sibling, 1 reply; 6+ messages in thread
From: Jerin Jacob @ 2017-06-27 11:57 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, Jerin Jacob
There is no need to initialize the complete packet buffer with zero,
as the packet data area will be overwritten by the NIC Rx HW anyway.

The testpmd application configures the packet mempool with around
180k buffers of 2176B each. In the existing scheme, the init routine
needs to memset ~370MB, whereas the proposed scheme needs only ~22MB
on a 128B cache-aligned system.

This is useful when running DPDK on HW simulators/emulators, where
millions of cycles have an impact on boot time.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v2:
- Removed RTE_PKTMBUF_HEADROOM from the memset area (Olivier)
---
lib/librte_mbuf/rte_mbuf.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 0e3e36a58..ab436b9da 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -131,8 +131,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
RTE_ASSERT(mp->elt_size >= mbuf_size);
RTE_ASSERT(buf_len <= UINT16_MAX);
- memset(m, 0, mp->elt_size);
-
+ memset(m, 0, mbuf_size);
/* start of buffer is after mbuf structure and priv data */
m->priv_size = priv_size;
m->buf_addr = (char *)m + mbuf_size;
--
2.13.2
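
For context, a rough sketch of how the region zeroed in v2 relates to
the mempool element layout. The size derivations follow the
rte_pktmbuf_init() code surrounding the diff context above, but this
is a simplified illustration, not the verbatim function:

uint32_t priv_size = rte_pktmbuf_priv_size(mp);     /* app private area, often 0 */
uint32_t mbuf_size = sizeof(struct rte_mbuf) + priv_size;
uint32_t buf_len = rte_pktmbuf_data_room_size(mp);  /* headroom + data room */

/*
 * Typical element layout (elt_size = mbuf_size + buf_len):
 *
 *   [ struct rte_mbuf | priv area ][ headroom ][ packet data room ]
 *   |<-------- mbuf_size -------->|<----------- buf_len ---------->|
 *
 * v2 zeroes only the first mbuf_size bytes, i.e. the metadata the
 * mbuf code relies on being initialized. The headroom and data room
 * are left untouched, since they are always rewritten by the NIC Rx
 * hardware or by the application before being read.
 */
memset(m, 0, mbuf_size);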
* Re: [dpdk-dev] [PATCH v2] mbuf: reduce pktmbuf init cycles
2017-06-27 11:57 ` [dpdk-dev] [PATCH v2] " Jerin Jacob
@ 2017-06-30 12:27 ` Olivier Matz
2017-07-01 10:15 ` Thomas Monjalon
0 siblings, 1 reply; 6+ messages in thread
From: Olivier Matz @ 2017-06-30 12:27 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev
On Tue, 27 Jun 2017 17:27:51 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> There is no need to initialize the complete packet buffer with zero,
> as the packet data area will be overwritten by the NIC Rx HW anyway.
>
> The testpmd application configures the packet mempool with around
> 180k buffers of 2176B each. In the existing scheme, the init routine
> needs to memset ~370MB, whereas the proposed scheme needs only ~22MB
> on a 128B cache-aligned system.
>
> This is useful when running DPDK on HW simulators/emulators, where
> millions of cycles have an impact on boot time.
>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
* Re: [dpdk-dev] [PATCH v2] mbuf: reduce pktmbuf init cycles
2017-06-30 12:27 ` Olivier Matz
@ 2017-07-01 10:15 ` Thomas Monjalon
0 siblings, 0 replies; 6+ messages in thread
From: Thomas Monjalon @ 2017-07-01 10:15 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, Olivier Matz
30/06/2017 14:27, Olivier Matz:
> On Tue, 27 Jun 2017 17:27:51 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote:
> > There is no need to initialize the complete packet buffer with zero,
> > as the packet data area will be overwritten by the NIC Rx HW anyway.
> >
> > The testpmd application configures the packet mempool with around
> > 180k buffers of 2176B each. In the existing scheme, the init routine
> > needs to memset ~370MB, whereas the proposed scheme needs only ~22MB
> > on a 128B cache-aligned system.
> >
> > This is useful when running DPDK on HW simulators/emulators, where
> > millions of cycles have an impact on boot time.
> >
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
Applied, thanks