* [dpdk-dev] dpdk 2.0.0: Issue mapping mempool into guest using IVSHMEM
@ 2015-05-18 8:32 Mauricio Vásquez
From: Mauricio Vásquez @ 2015-05-18 8:32 UTC (permalink / raw)
To: dev
Hi all,
I'm trying to map a mempool into a guest using the IVSHMEM library but the
mempool is not visible from the guest.
The code I'm running is quite simple: on the host I run a primary DPDK
process that creates the mempool, creates a metadata file, and then adds
the mempool to it.
The code is:
...
int main(int argc, char *argv[])
{
    int retval = 0;

    /* Init EAL, parsing EAL args */
    retval = rte_eal_init(argc, argv);
    if (retval < 0)
        return -1;

    char cmdline[PATH_MAX] = {0};
    struct rte_mempool *packets_pool;

    /* Create the mempool */
    packets_pool = rte_mempool_create(
            "packets",
            NUM_PKTS,
            MBUF_SIZE,
            CACHE_SIZE, /* size of the per-lcore mempool cache */
            sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL,
            rte_pktmbuf_init, NULL,
            rte_socket_id(),
            0 /* no flags */);
    if (packets_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot init the packets pool\n");

    /* Create the metadata file */
    if (rte_ivshmem_metadata_create(metadata_name) < 0)
        rte_exit(EXIT_FAILURE, "Cannot create metadata file\n");

    /* Add the mempool to the metadata file */
    if (rte_ivshmem_metadata_add_mempool(packets_pool, metadata_name) < 0)
        rte_exit(EXIT_FAILURE, "Cannot add mempool to metadata file\n");

    /* Get the QEMU command line */
    if (rte_ivshmem_metadata_cmdline_generate(cmdline, sizeof(cmdline),
            metadata_name) < 0)
        rte_exit(EXIT_FAILURE, "Failed generating command line for qemu\n");

    RTE_LOG(INFO, APP, "Command line for qemu: %s\n", cmdline);
    save_ivshmem_cmdline_to_file(cmdline);

    /* Keep the application alive so the shared memory stays mapped */
    char x = getchar();
    (void) x;
    return 0;
}
When I run it, I can clearly see that the memzones are added:
EAL: Adding memzone 'MP_packets' at 0x7ffec0e8c1c0 to metadata vm_1
EAL: Adding memzone 'RG_MP_packets' at 0x7ffec0d8c140 to metadata vm_1
APP: Command line for qemu: -device
ivshmem,size=2048M,shm=fd:/dev/hugepages/rtemap_0:0x0:0x40000000:/dev/zero:0x0:0x3fffc000:/var/run/.dpdk_ivshmem_metadata_vm_1:0x0:0x4000
I run the modified version of QEMU provided by dpdk-ovs with the command
line generated by the host application; then, in the guest, I run an even
simpler application:
...
void mempool_walk_f(const struct rte_mempool *r, void *arg)
{
    RTE_LOG(INFO, APP, "Mempool: %s\n", r->name);
    (void) arg;
}

int main(int argc, char *argv[])
{
    int retval = 0;

    if ((retval = rte_eal_init(argc, argv)) < 0)
        return -1;
    argc -= retval;
    argv += retval;

    struct rte_mempool *packets;

    packets = rte_mempool_lookup("packets");
    if (packets == NULL)
        RTE_LOG(ERR, APP, "Failed to find mempool\n");

    RTE_LOG(INFO, APP, "List of mempools:\n");
    rte_mempool_walk(mempool_walk_f, NULL);

    return 0;
}
...
I can see in the application output that the memzones that were added are
found:
EAL: Found memzone: 'RG_MP_packets' at 0x7ffec0d8c140 (len 0x100080)
EAL: Found memzone: 'MP_packets' at 0x7ffec0e8c1c0 (len 0x3832100)
However, the rte_mempool_lookup function returns NULL, and walking the
list with rte_mempool_walk only prints a mempool called log_history.
Do you have any suggestion?
Thank you very much for your help.
* Re: [dpdk-dev] dpdk 2.0.0: Issue mapping mempool into guest using IVSHMEM
From: Bruce Richardson @ 2015-05-19 12:27 UTC (permalink / raw)
To: Mauricio Vásquez; +Cc: dev
On Mon, May 18, 2015 at 10:32:37AM +0200, Mauricio Vásquez wrote:
> Hi all,
>
> I'm trying to map a mempool into a guest using the IVSHMEM library but the
> mempool is not visible from the guest.
>
> The code I'm running is quite simple, on the host I run a primary DPDK
> process that creates the mempool, creates a metadata file and then adds the
> mempool to it.
Can you perhaps try using the ivshmem example application with DPDK to start with
and see if you can reproduce your issue with that?
Regards,
/Bruce
* Re: [dpdk-dev] dpdk 2.0.0: Issue mapping mempool into guest using IVSHMEM
From: Mauricio Vásquez @ 2015-05-19 14:21 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
Thank you for your answer, Bruce.

I think you are referring to the "l2fwd-ivshmem" example, aren't you? I
compiled and ran it, and it works.

Reviewing its code, I found that rte_mempool_lookup is not used in the
guest at all. On the host, a structure containing pointers to the rings is
allocated with rte_memzone_reserve and then mapped into the guest with
rte_ivshmem_metadata_add_memzone; in the guest, rte_memzone_lookup is used
to find this structure.

It is clear to me that I could use this approach to do what I want, but I
still don't understand why rte_mempool_lookup does not work in my case.
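For reference, the memzone-based approach I describe above can be sketched
roughly like this (a hypothetical sketch, not the example's actual code;
the name "ctrl_mz" and struct shared_ctrl are made up for illustration):

```c
/* Host side: reserve a control memzone, store the pointer(s) in it, and
 * export it through the same IVSHMEM metadata file. All names here are
 * hypothetical. */
struct shared_ctrl {
        struct rte_mempool *mp;
};

const struct rte_memzone *host_mz =
        rte_memzone_reserve("ctrl_mz", sizeof(struct shared_ctrl),
                            rte_socket_id(), 0);
if (host_mz == NULL)
        rte_exit(EXIT_FAILURE, "Cannot reserve control memzone\n");
((struct shared_ctrl *)host_mz->addr)->mp = packets_pool;

if (rte_ivshmem_metadata_add_memzone(host_mz, metadata_name) < 0)
        rte_exit(EXIT_FAILURE, "Cannot add memzone to metadata\n");

/* Guest side: look the control memzone up by name and follow the pointer.
 * This relies on IVSHMEM mapping the hugepages at the same virtual
 * addresses in the guest, so pointers stored in shared memory stay valid. */
const struct rte_memzone *guest_mz = rte_memzone_lookup("ctrl_mz");
if (guest_mz == NULL)
        rte_exit(EXIT_FAILURE, "Cannot find control memzone\n");
struct rte_mempool *mp = ((struct shared_ctrl *)guest_mz->addr)->mp;
```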
Thanks in advance.
On Tue, May 19, 2015 at 2:27 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Mon, May 18, 2015 at 10:32:37AM +0200, Mauricio Vásquez wrote:
> > Hi all,
> >
> > I'm trying to map a mempool into a guest using the IVSHMEM library but
> the
> > mempool is not visible from the guest.
> >
> > The code I'm running is quite simple, on the host I run a primary DPDK
> > process that creates the mempool, creates a metadata file and then adds
> the
> > mempool to it.
>
> Can you perhaps try using the ivshmem example application with DPDK to
> start with
> and see if you can reproduce your issue with that?
>
> Regards,
> /Bruce