DPDK usage discussions
* Re: [dpdk-users] IP_PIPELINE DPDK
       [not found] <E4F6A044CDA07E40AE0DCB658B0F3F59CBDB07@FMSMSX105.amr.corp.intel.com>
@ 2016-12-06  9:51 ` Zhang, Roy Fan
From: Zhang, Roy Fan @ 2016-12-06  9:51 UTC (permalink / raw)
  To: Thulaseedharan Nair, Aswathy; +Cc: users, Singh, Jasvinder

Dear Aswathy,

The latest DPDK release, 16.11, supports Mellanox MLX4 and MLX5 NICs; however, their PMDs are disabled by default.

You can enable them in the DPDK_ROOT_PATH/config/common_base file (not config/common_linuxapp):
search for "CONFIG_RTE_LIBRTE_MLX4_PMD" and "CONFIG_RTE_LIBRTE_MLX5_PMD", set their values to "y",
then save the file and recompile DPDK.
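The edit above can be sketched as a one-line sed over common_base. The snippet below demonstrates it on a stand-in temp file rather than a real DPDK tree, so the exact option names are the only thing taken from the email; everything else is illustrative:

```shell
# Sketch: flip the Mellanox PMD config options from "n" to "y".
# In a real DPDK 16.11 tree the file is $DPDK_ROOT_PATH/config/common_base;
# a stand-in file is used here purely to demonstrate the edit.
cfg=$(mktemp)
printf 'CONFIG_RTE_LIBRTE_MLX4_PMD=n\nCONFIG_RTE_LIBRTE_MLX5_PMD=n\n' > "$cfg"

# Enable both Mellanox PMDs in one pass (BRE backreference keeps the option name).
sed -i 's/^\(CONFIG_RTE_LIBRTE_MLX[45]_PMD\)=n/\1=y/' "$cfg"

cat "$cfg"   # both options now read "=y"
```

After editing the real config/common_base, rebuild DPDK with the usual 16.11 legacy-make flow (assumed target name, adjust to your platform): `make config T=x86_64-native-linuxapp-gcc && make`.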

Unfortunately I haven't had the chance to work with a Mellanox NIC myself, so there may be further steps needed before this NIC can be used with DPDK. You may find useful information at http://www.mellanox.com/related-docs/prod_software/MLNX_DPDK_Quick_Start_Guide_v2.2_3.9.pdf; please note that this guide targets DPDK 2.2, so some of the DPDK-related steps may be obsolete.

In case you have further questions, please CC your next email to user@dpdk.org so it will appear on the mailing list. It may be useful to someone else later :)

Regards,
Fan


From: Thulaseedharan Nair, Aswathy
Sent: Tuesday, December 6, 2016 12:56 AM
To: Zhang, Roy Fan <roy.fan.zhang@intel.com>
Cc: Thulaseedharan Nair, Aswathy <aswathy.thulaseedharan.nair@intel.com>
Subject: IP_PIPELINE DPDK

Hi Fan,

I am working on the DPDK ip_pipeline application. While trying to run it on a Mellanox 100G NIC, I get the following error message:

EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 88 lcore(s)
EAL: Probing VFIO support...
EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
EAL: VFIO modules not loaded, skipping VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x200000000 bytes
EAL: Virtual area found at 0x7f5d80000000 (size = 0x200000000)
EAL: Ask a virtual area of 0x200000000 bytes
EAL: Virtual area found at 0x7f5b40000000 (size = 0x200000000)
EAL: Requesting 8 pages of size 1024MB from socket 0
EAL: Requesting 8 pages of size 1024MB from socket 1
EAL: TSC frequency is ~2194914 KHz
EAL: Master lcore 0 is ready (tid=b0e35900;cpuset=[0])
EAL: lcore 1 is ready (tid=af504700;cpuset=[1])
EAL: PCI device 0000:09:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:09:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
[APP] Initializing MEMPOOL0 ...
[APP] Initializing MEMPOOL1 ...
[APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
PANIC in app_init_link():
LINK0 (0): init error (-22)
6: [./build/ip_pipeline() [0x4342c5]]
5: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f5fafd27b15]]
4: [./build/ip_pipeline(main+0x5f) [0x43346f]]
3: [./build/ip_pipeline(app_init+0x1aa3) [0x4435c3]]
2: [./build/ip_pipeline(__rte_panic+0xc9) [0x42d903]]
1: [./build/ip_pipeline(rte_dump_stack+0x1a) [0x4cfbfa]]
Aborted (core dumped)

Do you know why I am getting this app_init_link error? I tried the same application on an Intel 100G NIC and it was successful, and I followed the same steps when setting up ip_pipeline on the Mellanox NIC, but I am not sure what is causing the error.
Any help would be really appreciated.

Thanks,
Aswathy
