* [dpdk-dev] [PATCH] app/pdump: enforcing pdump to use sw mempool
@ 2019-03-15 15:27 Harman Kalra
2019-03-15 15:27 ` Harman Kalra
2019-07-04 16:29 ` Thomas Monjalon
0 siblings, 2 replies; 11+ messages in thread
From: Harman Kalra @ 2019-03-15 15:27 UTC (permalink / raw)
To: reshma.pattan; +Cc: dev, Jerin Jacob Kollanukkaran, Harman Kalra
Since pdump uses SW rings to manage packets, it should
also use a SW ring mempool for managing its own copy of
the packets.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/pdump/main.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/app/pdump/main.c b/app/pdump/main.c
index ccf2a1d2f..d0e342645 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -598,11 +598,12 @@ create_mp_ring_vdev(void)
mbuf_pool = rte_mempool_lookup(mempool_name);
if (mbuf_pool == NULL) {
/* create mempool */
- mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
+ mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
pt->total_num_mbufs,
MBUF_POOL_CACHE_SIZE, 0,
pt->mbuf_data_size,
- rte_socket_id());
+ rte_socket_id(),
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS);
if (mbuf_pool == NULL) {
cleanup_rings();
rte_exit(EXIT_FAILURE,
--
2.18.0
* [dpdk-dev] [PATCH] app/pdump: enforcing pdump to use sw mempool
2019-03-15 15:27 [dpdk-dev] [PATCH] app/pdump: enforcing pdump to use sw mempool Harman Kalra
@ 2019-03-15 15:27 ` Harman Kalra
2019-07-04 16:29 ` Thomas Monjalon
1 sibling, 0 replies; 11+ messages in thread
From: Harman Kalra @ 2019-03-15 15:27 UTC (permalink / raw)
To: reshma.pattan; +Cc: dev, Jerin Jacob Kollanukkaran, Harman Kalra
Since pdump uses SW rings to manage packets, it should
also use a SW ring mempool for managing its own copy of
the packets.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/pdump/main.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/app/pdump/main.c b/app/pdump/main.c
index ccf2a1d2f..d0e342645 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -598,11 +598,12 @@ create_mp_ring_vdev(void)
mbuf_pool = rte_mempool_lookup(mempool_name);
if (mbuf_pool == NULL) {
/* create mempool */
- mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
+ mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
pt->total_num_mbufs,
MBUF_POOL_CACHE_SIZE, 0,
pt->mbuf_data_size,
- rte_socket_id());
+ rte_socket_id(),
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS);
if (mbuf_pool == NULL) {
cleanup_rings();
rte_exit(EXIT_FAILURE,
--
2.18.0
* Re: [dpdk-dev] [PATCH] app/pdump: enforcing pdump to use sw mempool
2019-03-15 15:27 [dpdk-dev] [PATCH] app/pdump: enforcing pdump to use sw mempool Harman Kalra
2019-03-15 15:27 ` Harman Kalra
@ 2019-07-04 16:29 ` Thomas Monjalon
2019-07-05 13:48 ` Olivier Matz
1 sibling, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2019-07-04 16:29 UTC (permalink / raw)
To: reshma.pattan, olivier.matz, arybchenko
Cc: dev, Harman Kalra, Jerin Jacob Kollanukkaran
15/03/2019 16:27, Harman Kalra:
> Since pdump uses SW rings to manage packets, it should
> also use a SW ring mempool for managing its own copy of
> the packets.
I'm not sure I understand the reasoning.
Reshma, Olivier, Andrew, any opinion?
Let's take a decision for this very old patch.
* Re: [dpdk-dev] [PATCH] app/pdump: enforcing pdump to use sw mempool
2019-07-04 16:29 ` Thomas Monjalon
@ 2019-07-05 13:48 ` Olivier Matz
2019-07-05 14:39 ` [dpdk-dev] [EXT] " Harman Kalra
0 siblings, 1 reply; 11+ messages in thread
From: Olivier Matz @ 2019-07-05 13:48 UTC (permalink / raw)
To: Thomas Monjalon
Cc: reshma.pattan, arybchenko, dev, Harman Kalra, Jerin Jacob Kollanukkaran
Hi,
On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> 15/03/2019 16:27, Harman Kalra:
> > Since pdump uses SW rings to manage packets, it should
> > also use a SW ring mempool for managing its own copy of
> > the packets.
>
> I'm not sure I understand the reasoning.
> Reshma, Olivier, Andrew, any opinion?
>
> Let's take a decision for this very old patch.
From what I understand, many packet mempools are created to
store the copies of the dumped packets. I suppose that it may not
be possible to create that many mempools by using the "best" mbuf
pool ops (from rte_mbuf_best_mempool_ops()).
Using a "ring_mp_mc" as mempool ops should always be possible.
I think it would be safer to use "ring_mp_mc" instead of
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
overriden on a specific platform.
Olivier
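As a minimal sketch of this suggestion (not from the thread; the pool
name, mbuf count, cache and buffer sizes are illustrative assumptions),
such a pool could be created with rte_pktmbuf_pool_create_by_ops() and
the fixed "ring_mp_mc" handler:

#include <stdio.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_sw_ring_pool(const char *name, unsigned int nb_mbufs)
{
	/* Force the pure SW ring handler instead of the platform default,
	 * so no HW mempool device is involved. */
	struct rte_mempool *mp = rte_pktmbuf_pool_create_by_ops(name,
			nb_mbufs, 256 /* cache */, 0 /* priv size */,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
			"ring_mp_mc");

	if (mp == NULL)
		printf("pool %s creation failed: %s\n",
		       name, rte_strerror(rte_errno));
	return mp;
}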
* Re: [dpdk-dev] [EXT] Re: [PATCH] app/pdump: enforcing pdump to use sw mempool
2019-07-05 13:48 ` Olivier Matz
@ 2019-07-05 14:39 ` Harman Kalra
2019-07-05 15:09 ` Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Harman Kalra @ 2019-07-05 14:39 UTC (permalink / raw)
To: Olivier Matz
Cc: Thomas Monjalon, reshma.pattan, arybchenko, dev,
Jerin Jacob Kollanukkaran
On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> Hi,
>
> On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > 15/03/2019 16:27, Harman Kalra:
> > > Since pdump uses SW rings to manage packets, it should
> > > also use a SW ring mempool for managing its own copy of
> > > the packets.
> >
> > I'm not sure I understand the reasoning.
> > Reshma, Olivier, Andrew, any opinion?
> >
> > Let's take a decision for this very old patch.
>
> From what I understand, many packet mempools are created to
> store the copies of the dumped packets. I suppose that it may not
> be possible to create that many mempools by using the "best" mbuf
> pool ops (from rte_mbuf_best_mempool_ops()).
>
> Using "ring_mp_mc" as the mempool ops should always be possible.
> I think it would be safer to use "ring_mp_mc" instead of
> CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
> overridden on a specific platform.
>
> Olivier
The following are the reasons for this patch:
1. The dpdk-pdump app creates a mempool whose mbufs receive the
packets copied from the primary process; those mbufs are then
transmitted to the pcap device and freed. Creating this mempool on
top of a HW mempool was generating a segmentation fault, because the
HW mempool's VFIO device is set up by the primary process and the
secondary has no access to its BAR regions.
2. Setting up a separate HW mempool VFIO device for the secondary
process generates the following error:
"cannot find TAILQ entry for PCI device!"
http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
which means the secondary process cannot set up a new device that
was not set up by the primary.
3. Since pdump creates this mempool only for its own local mbufs, we
see no need for a HW mempool; in our opinion a SW mempool is capable
of handling this in all conditions.
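As a minimal sketch of what this restriction implies (illustrative only;
the pool name and sizes are assumptions, not the actual pdump code), a
secondary process can create a private pool that does not depend on any
HW mempool device set up by the primary:

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

static struct rte_mempool *
secondary_local_pool(void)
{
	/* Only the secondary process (e.g. dpdk-pdump) needs this pool. */
	if (rte_eal_process_type() != RTE_PROC_SECONDARY)
		return NULL;

	/* A pool backed by a HW handler would require the primary's VFIO
	 * mapping; "ring_mp_mc" keeps the pool entirely in shared memory. */
	return rte_pktmbuf_pool_create_by_ops("pdump_local_pool", 8192, 256,
			0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
			"ring_mp_mc");
}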
* Re: [dpdk-dev] [EXT] Re: [PATCH] app/pdump: enforcing pdump to use sw mempool
2019-07-05 14:39 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2019-07-05 15:09 ` Thomas Monjalon
2019-07-05 15:40 ` [dpdk-dev] [PATCH v2] " Harman Kalra
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2019-07-05 15:09 UTC (permalink / raw)
To: Harman Kalra
Cc: Olivier Matz, reshma.pattan, arybchenko, dev, Jerin Jacob Kollanukkaran
05/07/2019 16:39, Harman Kalra:
> On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> > On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > > 15/03/2019 16:27, Harman Kalra:
> > > > Since pdump uses SW rings to manage packets, it should
> > > > also use a SW ring mempool for managing its own copy of
> > > > the packets.
> > >
> > > I'm not sure I understand the reasoning.
> > > Reshma, Olivier, Andrew, any opinion?
> > >
> > > Let's take a decision for this very old patch.
> >
> > From what I understand, many packet mempools are created to
> > store the copies of the dumped packets. I suppose that it may not
> > be possible to create that many mempools by using the "best" mbuf
> > pool ops (from rte_mbuf_best_mempool_ops()).
> >
> > Using "ring_mp_mc" as the mempool ops should always be possible.
> > I think it would be safer to use "ring_mp_mc" instead of
> > CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
> > overridden on a specific platform.
> >
> > Olivier
>
> The following are the reasons for this patch:
> 1. The dpdk-pdump app creates a mempool whose mbufs receive the
> packets copied from the primary process; those mbufs are then
> transmitted to the pcap device and freed. Creating this mempool on
> top of a HW mempool was generating a segmentation fault, because the
> HW mempool's VFIO device is set up by the primary process and the
> secondary has no access to its BAR regions.
>
> 2. Setting up a separate HW mempool VFIO device for the secondary
> process generates the following error:
> "cannot find TAILQ entry for PCI device!"
> http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
> which means the secondary process cannot set up a new device that
> was not set up by the primary.
>
> 3. Since pdump creates this mempool only for its own local mbufs,
> we see no need for a HW mempool; in our opinion a SW mempool is
> capable of handling this in all conditions.
OK
The commit log is just missing an explanation that a HW mempool
cannot be used in the secondary process if it was initialized in the
primary, and cannot be initialized in the secondary process either.
Then it will become clear :)
Would you please reword it and send a v2?
* [dpdk-dev] [PATCH v2] app/pdump: enforcing pdump to use sw mempool
2019-07-05 15:09 ` Thomas Monjalon
@ 2019-07-05 15:40 ` Harman Kalra
2019-07-05 15:50 ` Olivier Matz
0 siblings, 1 reply; 11+ messages in thread
From: Harman Kalra @ 2019-07-05 15:40 UTC (permalink / raw)
To: reshma.pattan, olivier.matz, thomas, arybchenko
Cc: dev, Jerin Jacob Kollanukkaran, Harman Kalra
A secondary process cannot access a HW mempool already
initialized by the primary process, nor can it set up its
own HW mempool due to its own restrictions.
Since dpdk-pdump creates this mempool only for managing its
local mbufs, a SW mempool is sufficient for this purpose.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/pdump/main.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/app/pdump/main.c b/app/pdump/main.c
index 80dc924cf..c739ae43d 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -604,11 +604,12 @@ create_mp_ring_vdev(void)
mbuf_pool = rte_mempool_lookup(mempool_name);
if (mbuf_pool == NULL) {
/* create mempool */
- mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
+ mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
pt->total_num_mbufs,
MBUF_POOL_CACHE_SIZE, 0,
pt->mbuf_data_size,
- rte_socket_id());
+ rte_socket_id(),
+ RTE_MBUF_DEFAULT_MEMPOOL_OPS);
if (mbuf_pool == NULL) {
cleanup_rings();
rte_exit(EXIT_FAILURE,
--
2.18.0
* Re: [dpdk-dev] [PATCH v2] app/pdump: enforcing pdump to use sw mempool
2019-07-05 15:40 ` [dpdk-dev] [PATCH v2] " Harman Kalra
@ 2019-07-05 15:50 ` Olivier Matz
2019-07-09 8:44 ` [dpdk-dev] [PATCH v3] " Harman Kalra
0 siblings, 1 reply; 11+ messages in thread
From: Olivier Matz @ 2019-07-05 15:50 UTC (permalink / raw)
To: Harman Kalra
Cc: reshma.pattan, thomas, arybchenko, dev, Jerin Jacob Kollanukkaran
Hi,
On Fri, Jul 05, 2019 at 03:40:20PM +0000, Harman Kalra wrote:
> A secondary process cannot access a HW mempool already
> initialized by the primary process, nor can it set up its
> own HW mempool due to its own restrictions.
>
> Since dpdk-pdump creates this mempool only for managing its
> local mbufs, a SW mempool is sufficient for this purpose.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> app/pdump/main.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/app/pdump/main.c b/app/pdump/main.c
> index 80dc924cf..c739ae43d 100644
> --- a/app/pdump/main.c
> +++ b/app/pdump/main.c
> @@ -604,11 +604,12 @@ create_mp_ring_vdev(void)
> mbuf_pool = rte_mempool_lookup(mempool_name);
> if (mbuf_pool == NULL) {
> /* create mempool */
> - mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
> + mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
> pt->total_num_mbufs,
> MBUF_POOL_CACHE_SIZE, 0,
> pt->mbuf_data_size,
> - rte_socket_id());
> + rte_socket_id(),
> + RTE_MBUF_DEFAULT_MEMPOOL_OPS);
> if (mbuf_pool == NULL) {
> cleanup_rings();
> rte_exit(EXIT_FAILURE,
> --
> 2.18.0
>
Did you see the comment in my previous mail?
"""
I think it would be safer to use "ring_mp_mc" instead of
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
overridden on a specific platform.
"""
Thanks,
Olivier
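To illustrate why the two options can differ (a sketch under the
assumption that the platform registers its own handler; the function
name is illustrative): CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS is a
build-time string, while the handler actually preferred at run time is
resolved by rte_mbuf_best_mempool_ops() from the user setting, then the
platform registration, then that build default:

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mbuf_pool_ops.h>

static void
show_mempool_ops_choice(void)
{
	/* Handler the "default" path would end up using on this platform. */
	printf("best/default ops: %s\n", rte_mbuf_best_mempool_ops());
	/* Handler the v3/v4 patch pins unconditionally. */
	printf("forced ops      : ring_mp_mc\n");
}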
* [dpdk-dev] [PATCH v3] app/pdump: enforcing pdump to use sw mempool
2019-07-05 15:50 ` Olivier Matz
@ 2019-07-09 8:44 ` Harman Kalra
2019-07-10 9:22 ` [dpdk-dev] [PATCH v4] app/pdump: enforcing pdump to use SW mempool Harman Kalra
0 siblings, 1 reply; 11+ messages in thread
From: Harman Kalra @ 2019-07-09 8:44 UTC (permalink / raw)
To: olivier.matz, reshma.pattan, thomas
Cc: arybchenko, dev, Jerin Jacob Kollanukkaran, Harman Kalra
A secondary process cannot access a HW mempool already
initialized by the primary process, nor can it set up its
own HW mempool due to its own restrictions.
Since dpdk-pdump creates this mempool only for managing its
local mbufs, a SW mempool is sufficient for this purpose.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/pdump/main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/app/pdump/main.c b/app/pdump/main.c
index 80dc924cf..cd0986aee 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -604,11 +604,11 @@ create_mp_ring_vdev(void)
mbuf_pool = rte_mempool_lookup(mempool_name);
if (mbuf_pool == NULL) {
/* create mempool */
- mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
+ mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
pt->total_num_mbufs,
MBUF_POOL_CACHE_SIZE, 0,
pt->mbuf_data_size,
- rte_socket_id());
+ rte_socket_id(), "ring_mp_mc");
if (mbuf_pool == NULL) {
cleanup_rings();
rte_exit(EXIT_FAILURE,
--
2.18.0
* [dpdk-dev] [PATCH v4] app/pdump: enforcing pdump to use SW mempool
2019-07-09 8:44 ` [dpdk-dev] [PATCH v3] " Harman Kalra
@ 2019-07-10 9:22 ` Harman Kalra
2019-07-10 22:21 ` Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Harman Kalra @ 2019-07-10 9:22 UTC (permalink / raw)
To: reshma.pattan, olivier.matz, thomas, arybchenko
Cc: dev, Jerin Jacob Kollanukkaran, Harman Kalra
A secondary process cannot access a HW mempool already
initialized by the primary process, nor can it set up its
own HW mempool due to its own restrictions.
Since dpdk-pdump creates this mempool only for managing its
local mbufs, a SW mempool is sufficient for this purpose.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/pdump/main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/app/pdump/main.c b/app/pdump/main.c
index 80dc924cf..cd0986aee 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -604,11 +604,11 @@ create_mp_ring_vdev(void)
mbuf_pool = rte_mempool_lookup(mempool_name);
if (mbuf_pool == NULL) {
/* create mempool */
- mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
+ mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
pt->total_num_mbufs,
MBUF_POOL_CACHE_SIZE, 0,
pt->mbuf_data_size,
- rte_socket_id());
+ rte_socket_id(), "ring_mp_mc");
if (mbuf_pool == NULL) {
cleanup_rings();
rte_exit(EXIT_FAILURE,
--
2.18.0
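After applying the patch, a quick way to confirm which handler a pool
actually got is to read its ops name back (a small illustrative helper,
not part of the patch):

#include <stdio.h>
#include <rte_mempool.h>

static void
print_pool_ops(const struct rte_mempool *mp)
{
	/* ops_index points into the global mempool ops table. */
	printf("mempool %s uses ops \"%s\"\n", mp->name,
	       rte_mempool_get_ops(mp->ops_index)->name);
}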
End of thread.
Thread overview: 11+ messages
2019-03-15 15:27 [dpdk-dev] [PATCH] app/pdump: enforcing pdump to use sw mempool Harman Kalra
2019-03-15 15:27 ` Harman Kalra
2019-07-04 16:29 ` Thomas Monjalon
2019-07-05 13:48 ` Olivier Matz
2019-07-05 14:39 ` [dpdk-dev] [EXT] " Harman Kalra
2019-07-05 15:09 ` Thomas Monjalon
2019-07-05 15:40 ` [dpdk-dev] [PATCH v2] " Harman Kalra
2019-07-05 15:50 ` Olivier Matz
2019-07-09 8:44 ` [dpdk-dev] [PATCH v3] " Harman Kalra
2019-07-10 9:22 ` [dpdk-dev] [PATCH v4] app/pdump: enforcing pdump to use SW mempool Harman Kalra
2019-07-10 22:21 ` Thomas Monjalon