DPDK patches and discussions
[dpdk-dev] Problem with running the load balancer example application

From: Haowei Yuan @ 2014-12-19  7:43 UTC
To: dev

Hi folks,

I am new to DPDK and have been trying to run the load balancer example
application on a machine with two NUMA nodes. Somehow the program
cannot be launched correctly when I add the "--no-huge" option to the
command. I am wondering whether anyone has seen similar problems, or
whether I did something wrong.

The command I used was:

./load_balancer -c 0x5 -n 4 --no-huge -- --rx "(0,0,0),(1,0,0)" \
        --tx "(0,0),(1,0)" --w "2" --lpm "192.168.0.0/1=>1;" --pos-lb 29
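
(For reference, the coremask 0x5 is binary 101, i.e. lcores 0 and 2,
which matches the "Master core 0" and "Core 2" lines in the EAL output
below.)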

When hugepage memory was used, everything worked fine. But when I
added the "--no-huge" option, I got the warning "Master core has no
memory on local socket", and eventually the program quit because the
mbuf pool could not be created on socket 0. The detailed error message
follows.

-----------------------------------------------------------------------
EAL: WARNING: Master core has no memory on local socket!
EAL: Master core 0 is ready (tid=781f4800)
EAL: Core 2 is ready (tid=72df2700)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f57723f2000
EAL:   PCI memory mapped at 0x7f57723ee000
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f57721ee000
EAL:   PCI memory mapped at 0x7f57721ea000
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   0000:07:00.0 not managed by UIO driver, skipping
EAL: PCI device 0000:07:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   0000:07:00.1 not managed by UIO driver, skipping
Creating the mbuf pool for socket 0 ...
PANIC in app_init_mbuf_pools():
Cannot create mbuf pool on socket 0
-----------------------------------------------------------------------
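
For reference, the call that panics is the mbuf pool creation in the
example's init code. Below is a rough paraphrase of what I believe
examples/load_balancer/init.c does; I am quoting from memory, so the
buffer count and cache size are my assumptions, not the exact defaults:

-----------------------------------------------------------------------
#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_debug.h>

/* Element size: a 2 KB data buffer plus the mbuf header and headroom. */
#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *
create_pool_on_socket(unsigned socket)
{
        char name[32];
        struct rte_mempool *pool;

        snprintf(name, sizeof(name), "mbuf_pool_%u", socket);
        printf("Creating the mbuf pool for socket %u ...\n", socket);
        pool = rte_mempool_create(name,
                        8192,        /* number of mbufs (assumed) */
                        MBUF_SIZE,
                        256,         /* per-lcore cache size (assumed) */
                        sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init, NULL,
                        rte_pktmbuf_init, NULL,
                        socket,      /* memory must exist on this socket */
                        0);
        if (pool == NULL)
                rte_panic("Cannot create mbuf pool on socket %u\n", socket);
        return pool;
}
-----------------------------------------------------------------------

So the pool is requested explicitly on socket 0, and creation fails
whenever no DPDK memory is tagged with that socket.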


Then I added one line of code to the eal.c file to print the socket
ID of each memory segment. The edited code and the output are:
Code:
-----------------------------------------------------------------------
static void
eal_check_mem_on_local_socket(void)
{
        const struct rte_memseg *ms;
        int i, socket_id;

        socket_id = rte_lcore_to_socket_id(rte_config.master_lcore);

        ms = rte_eal_get_physmem_layout();

        for (i = 0; i < RTE_MAX_MEMSEG; i++) {
                /* added line: dump each memseg's socket id and length */
                printf("socket_id = %d, len = %lu\n",
                                ms[i].socket_id, ms[i].len);
                if (ms[i].socket_id == socket_id &&
                                ms[i].len > 0)
                        return;
        }

        RTE_LOG(WARNING, EAL, "WARNING: Master core has no "
                        "memory on local socket!\n");
}

Output:
socket_id = -1, len = 67108864
socket_id = 0, len = 0
socket_id = 0, len = 0
socket_id = 0, len = 0
socket_id = 0, len = 0
......
-----------------------------------------------------------------------

I think the first socket_id of -1 is causing the problem: core 0 is
on socket 0, but the only memseg with a non-zero length is tagged
socket_id -1, so the check never finds memory on socket 0.
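
If that is the case, then under --no-huge all of the memory appears to
live in one anonymous mapping tagged SOCKET_ID_ANY (-1), so a pool
requested specifically on socket 0 has nothing to draw from. As an
untested workaround sketch, I could pass SOCKET_ID_ANY instead of the
hard-coded socket when creating the pool (reusing the names from the
paraphrase above):

-----------------------------------------------------------------------
        /* Untested: let the allocation come from any socket, including
         * the --no-huge mapping that is reported with socket_id == -1. */
        pool = rte_mempool_create(name,
                        8192, MBUF_SIZE, 256,
                        sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init, NULL,
                        rte_pktmbuf_init, NULL,
                        SOCKET_ID_ANY,
                        0);
-----------------------------------------------------------------------

That would give up NUMA locality, though, so I am not sure it is the
right fix.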
Has anyone seen similar problems, or did I do something incorrectly?

Thanks,
Haowei
