DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: "Mauricio Vásquez" <mauricio.vasquezbernal@studenti.polito.it>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] dpdk 2.0.0: Issue mapping mempool into guest using IVSHMEM
Date: Tue, 19 May 2015 13:27:07 +0100	[thread overview]
Message-ID: <20150519122706.GA7216@bricha3-MOBL3> (raw)
In-Reply-To: <CAPwdgqihcWwfcvchU0JEbAT7BuZRCH6AYQzEhSv-U3gDGui-EQ@mail.gmail.com>

On Mon, May 18, 2015 at 10:32:37AM +0200, Mauricio Vásquez wrote:
> Hi all,
> 
> I'm trying to map a mempool into a guest using the IVSHMEM library, but
> the mempool is not visible in the guest.
> 
> The code I'm running is quite simple: on the host I run a primary DPDK
> process that creates the mempool, creates an IVSHMEM metadata file, and
> then adds the mempool to it.

Can you perhaps try the ivshmem example application shipped with DPDK first,
and see whether you can reproduce your issue with that?

Regards,
/Bruce
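As a baseline for that test, assuming the 2.0-era tree layout: the ivshmem
sample application ships in the examples/ directory of the DPDK source, and
the IVSHMEM Library chapter of the Programmer's Guide describes the matching
host and guest setup. A known-good run of that pair makes it much easier to
tell an environment problem from an application one.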

> 
> The code is:
> ...
> int main(int argc, char * argv[])
> {
>     int retval = 0;
> 
>     /* Init EAL, parsing EAL args */
>     retval = rte_eal_init(argc, argv);
>     if (retval < 0)
>         return -1;
> 
>     char cmdline[PATH_MAX] = {0};
> 
>     struct rte_mempool *packets_pool;
>     //Create mempool
>     packets_pool = rte_mempool_create(
>         "packets",
>         NUM_PKTS,
>         MBUF_SIZE,
>         CACHE_SIZE,                    //Size of the mempool cache
>         sizeof(struct rte_pktmbuf_pool_private),
>         rte_pktmbuf_pool_init,
>         NULL,
>         rte_pktmbuf_init,
>         NULL,
>         rte_socket_id(),
>         0 /*NO_FLAGS*/);
> 
>     if (packets_pool == NULL)
>         rte_exit(EXIT_FAILURE,"Cannot init the packets pool\n");
> 
>     //Create metadata file
>     if (rte_ivshmem_metadata_create(metadata_name) < 0)
>         rte_exit(EXIT_FAILURE, "Cannot create metadata file\n");
> 
>     //Add mempool to metadata file
>     if (rte_ivshmem_metadata_add_mempool(packets_pool, metadata_name) < 0)
>         rte_exit(EXIT_FAILURE, "Cannot add mempool to metadata file\n");
> 
>     //Get qemu command line
>     if (rte_ivshmem_metadata_cmdline_generate(cmdline, sizeof(cmdline),
>             metadata_name) < 0)
>         rte_exit(EXIT_FAILURE, "Failed generating command line for qemu\n");
> 
>     RTE_LOG(INFO, APP, "Command line for qemu: %s\n", cmdline);
>     save_ivshmem_cmdline_to_file(cmdline);
> 
>     //Prevent the application from exiting
>     int x = getchar();
>     (void) x;
>     return 0;
> }
> 
> When I run it, I can clearly see that the memzones are added:
> 
> EAL: Adding memzone 'MP_packets' at 0x7ffec0e8c1c0 to metadata vm_1
> EAL: Adding memzone 'RG_MP_packets' at 0x7ffec0d8c140 to metadata vm_1
> APP: Command line for qemu: -device
> ivshmem,size=2048M,shm=fd:/dev/hugepages/rtemap_0:0x0:0x40000000:/dev/zero:0x0:0x3fffc000:/var/run/.dpdk_ivshmem_metadata_vm_1:0x0:0x4000
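
For reference on that argument's shape (as I read the ivshmem command-line
format): the patched QEMU is handed a colon-separated list of
file:offset:length segments to stitch into a single shared BAR of the stated
size. Here that is the 1 GB hugepage segment /dev/hugepages/rtemap_0, a
0x3fffc000-byte stretch of /dev/zero used as padding, and finally the 16 KB
metadata file /var/run/.dpdk_ivshmem_metadata_vm_1 itself, which the guest
EAL parses at init; the three lengths sum to the 2048M device size.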
> 
> I run the modified version of QEMU provided by dpdk-ovs with the command
> line generated by the host application; then, in the guest, I run an even
> simpler application:
> 
> ...
> void mempool_walk_f(const struct rte_mempool *r, void * arg)
> {
>     RTE_LOG(INFO, APP, "Mempool: %s\n", r->name);
>     (void) arg;
> }
> 
> int main(int argc, char *argv[])
> {
>     int retval = 0;
> 
>     if ((retval = rte_eal_init(argc, argv)) < 0)
>         return -1;
> 
>     argc -= retval;
>     argv += retval;
> 
>     struct rte_mempool * packets;
> 
>     packets = rte_mempool_lookup("packets");
> 
>     if (packets == NULL)
>     {
>         RTE_LOG(ERR, APP, "Failed to find mempool\n");
>     }
> 
>     RTE_LOG(INFO, APP, "List of mempool: \n");
>     rte_mempool_walk(mempool_walk_f, NULL);
> 
>     return 0;
> }
> ...
> 
> I can see in the application output that the memzones that were added are
> found:
> 
> EAL: Found memzone: 'RG_MP_packets' at 0x7ffec0d8c140 (len 0x100080)
> EAL: Found memzone: 'MP_packets' at 0x7ffec0e8c1c0 (len 0x3832100)
> 
> But rte_mempool_lookup() returns NULL, and walking the pools with
> rte_mempool_walk() only prints a mempool called log_history.
> 
> Do you have any suggestions?
> 
> Thank you very much for your help.
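
One way to square the two observations above (memzones found, lookup
failing): rte_mempool_lookup() does not scan memzones at all; it walks a
per-process TAILQ of mempool descriptors and matches by name, so a pool
whose memory is mapped but which was never registered in that list stays
invisible. That would also explain why the walker prints only log_history,
the pool the guest's own EAL creates locally. In the 2.0-era code the
guest-side registration of shared objects happens at EAL init and only in
builds with CONFIG_RTE_LIBRTE_IVSHMEM enabled (the ivshmem targets such as
x86_64-ivshmem-linuxapp-gcc), so that is worth checking on both sides. A
minimal, self-contained sketch of the lookup shape (a simplified stand-in,
not the actual DPDK source):

/*
 * Simplified stand-in for the rte_mempool_lookup() logic (not the actual
 * DPDK source): lookup matches names against a per-process descriptor
 * list; it never scans memzones.
 */
#include <stdio.h>
#include <string.h>
#include <sys/queue.h>

struct mempool {                       /* stand-in for struct rte_mempool */
    char name[32];
    TAILQ_ENTRY(mempool) next;
};

TAILQ_HEAD(mempool_list, mempool);
static struct mempool_list pool_list = TAILQ_HEAD_INITIALIZER(pool_list);

/* Walk the list and match by name, as rte_mempool_lookup() does. */
static struct mempool *
lookup(const char *name)
{
    struct mempool *mp;

    TAILQ_FOREACH(mp, &pool_list, next) {
        if (strcmp(mp->name, name) == 0)
            return mp;
    }
    return NULL;   /* memory may be mapped, but no list entry means NULL */
}

int
main(void)
{
    /* Guest view: only locally created pools ever enter the list. */
    struct mempool log_history = { .name = "log_history" };
    TAILQ_INSERT_TAIL(&pool_list, &log_history, next);

    printf("log_history -> %p\n", (void *)lookup("log_history")); /* found */
    printf("packets     -> %p\n", (void *)lookup("packets"));     /* NULL  */
    return 0;
}

If that model holds, the fix lies in the guest EAL registering the shared
pool's descriptor at init time, not in the application code: the lookup
itself has nothing to find until some init-time step re-creates the list
entry for "packets".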
