From: "Wiles, Keith" <keith.wiles@intel.com>
To: Philip Lee <plee2@andrew.cmu.edu>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Pktgen Cannot configure device panic
Date: Mon, 6 Mar 2017 20:24:21 +0000 [thread overview]
Message-ID: <A12604D4-11B5-480A-94C2-4BBC8669FE0F@intel.com> (raw)
In-Reply-To: <F1D1BA88-735F-4153-AA9D-78A6F0EDF512@intel.com>
> On Mar 6, 2017, at 2:12 PM, Wiles, Keith <keith.wiles@intel.com> wrote:
>
>>
>> On Mar 6, 2017, at 11:00 AM, Philip Lee <plee2@andrew.cmu.edu> wrote:
>>
>> Hi Keith,
>>
>> Do you have any insights into which driver you think may be problematic?
>>
>> I haven't really gone anywhere after redoing my original install steps
>> for DPDK and Pktgen.
>> The only main difference I can think of is that I installed the
>> Netronome board support package from a prepackaged .deb file onto this
>> system.
>>
>> Sorry for all the hassle. I'm completely new to Netronome, DPDK, and Pktgen.
>
> I do not think it is a problem with the Netronome BSP unless it also installed the PMD driver for DPDK. I suspect the PMD driver in DPDK is not setting the max queues to a value of at least 1, and Pktgen just happens to check the return code from rte_eth_dev_configure(). I have not used the Netronome card, as I do not have one to test with.
>
> You made me look at the PMD, and it is setting the rx/tx queue count, so it seems OK. One thing you can try is adding --log-level=9 to the DPDK command line to print more information; it should report how many RX queues the device has in a log message.
>
> Other than that, I think you will need to contact the Netronome PMD driver maintainer.
I am looking at the configure call in the PMD, and it does not support a number of features (RSS, split header, RX checksum, VLAN filter, VLAN strip, …), so it could be failing on one of those, but you have to have INFO-level logging enabled to see the message from the driver. I had expected the PMD to support some of these features, so the config structure Pktgen passes into the routine will need to be altered.
At the top of pktgen/app/pktgen-port-cfg.c is a structure I use to configure the ports, called default_port_conf. It has RSS enabled, which is one of the features NFP does not support. You can try adjusting the structure to disable the offending feature. Let me know which feature is causing the problem.
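As a sketch of the kind of change to make (the field names below are from the DPDK releases of that era; verify them against the default_port_conf in your actual pktgen source, which has more fields than shown here), disabling RSS would look roughly like this:

```c
/* Approximate excerpt of default_port_conf from pktgen/app/pktgen-port-cfg.c,
 * edited to turn RSS off. Builds only inside a DPDK environment. */
#include <rte_ethdev.h>

static struct rte_eth_conf default_port_conf = {
	.rxmode = {
		/* Was ETH_MQ_RX_RSS; the NFP PMD rejects RSS, so fall
		 * back to single-queue mode. */
		.mq_mode = ETH_MQ_RX_NONE,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,
			.rss_hf  = 0,	/* was ETH_RSS_IP; request no RSS hash types */
		},
	},
};
```

If rte_eth_dev_configure() still fails after this, the INFO-level driver logs should name the next unsupported feature, which can be disabled in the same structure the same way.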
>
> Netronome nfp
> M: Alejandro Lucero <alejandro.lucero@netronome.com>
> F: drivers/net/nfp/
> F: doc/guides/nics/nfp.rst
>
>>
>> Thanks,
>>
>> Philip
>>
>> On Mon, Mar 6, 2017 at 11:22 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
>>>
>>>> On Mar 6, 2017, at 10:17 AM, Philip Lee <phillee4473@gmail.com> wrote:
>>>>
>>>> Hi Keith,
>>>>
>>>> Also, how do I get a list of devices that DPDK detects to find the ports to blacklist?
>>>>
>>>> I tried blacklisting just the other virtual functions of the Netronome NIC, but the results are the same. I also tried unbinding the igb_uio drivers from all but the virtual function I'm using. If I use the whitelist (-w), does it force it to look at that pci device only? I tried that and it provided the same results as well.
>>>
>>> You can do one of two things: only bind the ports you want to use, or blacklist all of the bound ports that you do not want DPDK to see.
>>>
>>> To see all of your ports in the system do ‘lspci | grep Ethernet’
>>>
>>> Then you need to figure out how the PCI ID maps to the physical port you want to use. (This is not normally an easy task; either read the hardware spec for the motherboard or just do some experiments.)
>>>
>>>>
>>>>
>>>> Also, running pktgen on the working node gives this output with max_rx_queues and max_tx_queues having values of 1, so it seems like it's a problem with the system setup on this broken node.
>>>> ** Default Info (5:8.0, if_index:0) **
>>>> max_vfs : 0, min_rx_bufsize : 68, max_rx_pktlen : 9216
>>>> max_rx_queues : 1, max_tx_queues : 1
>>>
>>> I think this is a driver problem, as it should report at least one queue in each direction.
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Philip Lee
>>>>
>>>>
>>>> On Mon, Mar 6, 2017 at 10:14 AM, Wiles, Keith <keith.wiles@intel.com> wrote:
>>>>
>>>>> On Mar 5, 2017, at 8:03 PM, Philip Lee <plee2@andrew.cmu.edu> wrote:
>>>>>
>>>>> Hello all,
>>>>>
>>>>> I had a "working" install of pktgen that would transfer data but not
>>>>> provide statistics. The setup is two Netronome NICs connected
>>>>> together. It was suggested there was a problem with the Netronome PMD,
>>>>> so I reinstalled both the Netronome BSP and DPDK. Now I'm getting the
>>>>> following error when trying to start pktgen with: ./pktgen -c 0x1f
>>>>> -n 1 -- -m [1:2].0
>>>>>
>>>>>>>> Packet Burst 32, RX Desc 512, TX Desc 1024, mbufs/port 8192, mbuf cache 1024
>>>>> === port to lcore mapping table (# lcores 5) ===
>>>>> lcore: 0 1 2 3 4
>>>>> port 0: D: T 1: 0 0: 1 0: 0 0: 0 = 1: 1
>>>>> Total : 0: 0 1: 0 0: 1 0: 0 0: 0
>>>>> Display and Timer on lcore 0, rx:tx counts per port/lcore
>>>>>
>>>>> Configuring 4 ports, MBUF Size 1920, MBUF Cache Size 1024
>>>>> Lcore:
>>>>> 1, RX-Only
>>>>> RX( 1): ( 0: 0)
>>>>> 2, TX-Only
>>>>> TX( 1): ( 0: 0)
>>>>> Port :
>>>>> 0, nb_lcores 2, private 0x8cca90, lcores: 1 2
>>>>>
>>>>> ** Default Info (5:8.0, if_index:0) **
>>>>> max_vfs : 0, min_rx_bufsize : 68, max_rx_pktlen : 0
>>>>> max_rx_queues : 0, max_tx_queues : 0
>>>>> max_mac_addrs : 1, max_hash_mac_addrs: 0, max_vmdq_pools: 0
>>>>> rx_offload_capa: 0, tx_offload_capa : 0, reta_size :
>>>>> 128, flow_type_rss_offloads:0000000000000000
>>>>> vmdq_queue_base: 0, vmdq_queue_num : 0, vmdq_pool_base: 0
>>>>> ** RX Conf **
>>>>> pthresh : 8, hthresh : 8, wthresh : 0
>>>>> Free Thresh : 32, Drop Enable : 0, Deferred Start : 0
>>>>> ** TX Conf **
>>>>> pthresh : 32, hthresh : 0, wthresh : 0
>>>>> Free Thresh : 32, RS Thresh : 32, Deferred Start :
>>>>> 0, TXQ Flags:00000f01
>>>>>
>>>>> !PANIC!: Cannot configure device: port=0, Num queues 1,1 (2)Invalid argument
>>>>> PANIC in pktgen_config_ports():
>>>>> Cannot configure device: port=0, Num queues 1,1 (2)Invalid argument6:
>>>>> [./pktgen() [0x43394e]]
>>>>> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f89dd0f7f45]]
>>>>> 4: [./pktgen(main+0x4d4) [0x432f54]]
>>>>> 3: [./pktgen(pktgen_config_ports+0x3108) [0x45f418]]
>>>>> 2: [./pktgen(__rte_panic+0xbe) [0x42f288]]
>>>>> 1: [./pktgen(rte_dump_stack+0x1a) [0x49af3a]]
>>>>> Aborted
>>>>>
>>>>> ------------------------------------------------------------------------------------------------------------------------
>>>>>
>>>>> I tried unbinding and rebinding the NICs. I read in an older mailing
>>>>> list post that setup.sh needs to be run after every reboot. I executed
>>>>> it, and it appears to be a list of pktgen install steps that I had
>>>>> already redone manually after the most recent reboot. The output of
>>>>> the status check script is below:
>>>>> ./dpdk-devbind.py --status
>>>>>
>>>>> Network devices using DPDK-compatible driver
>>>>> ============================================
>>>>> 0000:05:08.0 'Device 6003' drv=igb_uio unused=
>>>>> 0000:05:08.1 'Device 6003' drv=igb_uio unused=
>>>>> 0000:05:08.2 'Device 6003' drv=igb_uio unused=
>>>>> 0000:05:08.3 'Device 6003' drv=igb_uio unused=
>>>>>
>>>>> Network devices using kernel driver
>>>>> ===================================
>>>>> 0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth0 drv=tg3
>>>>> unused=igb_uio *Active*
>>>>> 0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth1 drv=tg3
>>>>> unused=igb_uio
>>>>> 0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth2 drv=tg3
>>>>> unused=igb_uio
>>>>> 0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth3 drv=tg3
>>>>> unused=igb_uio
>>>>> 0000:05:00.0 'Device 4000' if= drv=nfp unused=igb_uio
>>>>> 0000:43:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=eth4
>>>>> drv=ixgbe unused=igb_uio
>>>>> 0000:43:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=eth7
>>>>> drv=ixgbe unused=igb_uio
>>>>> 0000:44:00.0 'MT27500 Family [ConnectX-3]' if=eth5,eth6 drv=mlx4_core
>>>>> unused=igb_uio
>>>>>
>>>>> Does anyone have any suggestions?
>>>>
>>>> Try blacklisting (-b 0000:01:00.1 -b ...) all of the ports you are not using. The number of ports being set up is taken from the number of devices DPDK detects.
>>>>
>>>> The only thing I am worried about is that 'max_rx_queues : 0, max_tx_queues : 0' is reporting zero queues. It may be that other example code does not test the return code from the rte_eth_dev_configure() call. I think max_rx_queues and max_tx_queues should each be at least 1.
>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Philip Lee
>>>>
>>>> Regards,
>>>> Keith
>>>>
>>>>
>>>
>>> Regards,
>>> Keith
>>>
>
> Regards,
> Keith
Regards,
Keith
Thread overview: 7+ messages
2017-03-06 2:03 Philip Lee
2017-03-06 15:14 ` Wiles, Keith
[not found] ` <CACeHyXb6ZP39jJiGUfuXxcYKBe_YshBU4RAuOq-JT7ZZPUJugw@mail.gmail.com>
2017-03-06 16:18 ` Philip Lee
2017-03-06 16:22 ` Wiles, Keith
2017-03-06 17:00 ` Philip Lee
2017-03-06 20:12 ` Wiles, Keith
2017-03-06 20:24 ` Wiles, Keith [this message]