From: Puneet Singh <Puneet.Singh@truminds.com>
To: "Li, Xiaoyun" <xiaoyun.li@intel.com>,
"users@dpdk.org" <users@dpdk.org>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
Date: Tue, 17 Mar 2020 06:59:48 +0000
Message-ID: <BMXPR01MB4088E000C0DD306200FCFC89F1F60@BMXPR01MB4088.INDPRD01.PROD.OUTLOOK.COM>
In-Reply-To: <3b14667cd1f44bb4986523b62a1957e2@intel.com>
Hi Xiaoyun Li,
Following is the difference between the rte_eth_conf of testpmd and my application. Please let us know if any parameter is critical.
testpmd (rte_eth_conf)
e = {link_speeds = 0, rxmode = {mq_mode = ETH_MQ_RX_NONE, max_rx_pkt_len = 1518, max_lro_pkt_size = 0, split_hdr_size = 0, offloads = 0, reserved_64s = {0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = ETH_MQ_TX_NONE, offloads = 65536, pvid = 0, hw_vlan_reject_tagged = 0 '\000', hw_vlan_reject_untagged = 0 '\000',
hw_vlan_insert_pvid = 0 '\000', reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0, rx_adv_conf = {rss_conf = {rss_key = 0x0,
rss_key_len = 0 '\000', rss_hf = 0}, vmdq_dcb_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf = {
nb_tcs = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000',
default_pool = 0 '\000', enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}}},
tx_adv_conf = {vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0),
dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown: 0)}}, dcb_capability_en = 0, fdir_conf = {mode = RTE_FDIR_MODE_NONE,
pballoc = RTE_FDIR_PBALLOC_64K, status = RTE_FDIR_REPORT_STATUS, drop_queue = 127 '\177', mask = {vlan_tci_mask = 65519, ipv4_mask = {src_ip = 4294967295,
dst_ip = 4294967295, tos = 0 '\000', ttl = 0 '\000', proto = 0 '\000'}, ipv6_mask = {src_ip = {4294967295, 4294967295, 4294967295, 4294967295}, dst_ip = {
4294967295, 4294967295, 4294967295, 4294967295}, tc = 0 '\000', proto = 0 '\000', hop_limits = 0 '\000'}, src_port_mask = 65535, dst_port_mask = 65535,
mac_addr_byte_mask = 255 '\377', tunnel_id_mask = 4294967295, tunnel_type_mask = 1 '\001'}, flex_conf = {nb_payloads = 0, nb_flexmasks = 0, flex_set = {{
type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {
type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {
type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {
type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}}, flex_mask = {{
flow_type = 0, mask = '\000' <repeats 15 times>} <repeats 24 times>}}}, intr_conf = {lsc = 1, rxq = 0, rmv = 0}}
my app (rte_eth_conf)
e = {link_speeds = 0, rxmode = {mq_mode = ETH_MQ_RX_NONE, max_rx_pkt_len = 1518, max_lro_pkt_size = 0, split_hdr_size = 0, offloads = 0, reserved_64s = {0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = ETH_MQ_TX_NONE, offloads = 32774, pvid = 0, hw_vlan_reject_tagged = 0 '\000', hw_vlan_reject_untagged = 0 '\000',
hw_vlan_insert_pvid = 0 '\000', reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0, rx_adv_conf = {rss_conf = {rss_key = 0x0,
rss_key_len = 0 '\000', rss_hf = 0}, vmdq_dcb_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf = {
nb_tcs = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000',
default_pool = 0 '\000', enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}}},
tx_adv_conf = {vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0),
dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown: 0)}}, dcb_capability_en = 0, fdir_conf = {mode = RTE_FDIR_MODE_NONE,
pballoc = RTE_FDIR_PBALLOC_64K, status = RTE_FDIR_NO_REPORT_STATUS, drop_queue = 0 '\000', mask = {vlan_tci_mask = 0, ipv4_mask = {src_ip = 0, dst_ip = 0,
tos = 0 '\000', ttl = 0 '\000', proto = 0 '\000'}, ipv6_mask = {src_ip = {0, 0, 0, 0}, dst_ip = {0, 0, 0, 0}, tc = 0 '\000', proto = 0 '\000',
hop_limits = 0 '\000'}, src_port_mask = 0, dst_port_mask = 0, mac_addr_byte_mask = 0 '\000', tunnel_id_mask = 0, tunnel_type_mask = 0 '\000'}, flex_conf = {
nb_payloads = 0, nb_flexmasks = 0, flex_set = {{type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN,
src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {
0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {
0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {
0 <repeats 16 times>}}}, flex_mask = {{flow_type = 0, mask = '\000' <repeats 15 times>} <repeats 24 times>}}}, intr_conf = {lsc = 0, rxq = 0, rmv = 0}}
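The main visible differences are txmode.offloads (65536 in testpmd vs 32774 in my app), the fdir_conf defaults, and intr_conf.lsc (1 vs 0). As a minimal sketch, the two offload bitmasks can be decoded with the standard ethdev helper rte_eth_dev_tx_offload_name() (the literal values come from the dumps above; the helper function around it is illustrative):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the name of every DEV_TX_OFFLOAD_* bit set in 'offloads'. */
static void decode_tx_offloads(const char *who, uint64_t offloads)
{
	printf("%s: txmode.offloads = 0x%" PRIx64 "\n", who, offloads);
	for (uint64_t bit = 1; bit != 0; bit <<= 1)
		if (offloads & bit)
			printf("  %s\n", rte_eth_dev_tx_offload_name(bit));
}

/* decode_tx_offloads("testpmd", 65536) -> MBUF_FAST_FREE */
/* decode_tx_offloads("my app", 32774)  -> IPV4_CKSUM, UDP_CKSUM, MULTI_SEGS */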
Regards
Puneet Singh
From: Li, Xiaoyun <xiaoyun.li@intel.com>
Sent: 17 March 2020 07:34
To: Puneet Singh <Puneet.Singh@truminds.com>; users@dpdk.org
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
Hi
For the driver, you can set the log level to debug: replace RTE_LOG_NOTICE with RTE_LOG_DEBUG in the following code.
This will show all PMD_INIT_LOG() and PMD_DRV_LOG() output.
i40e_logtype_init = rte_log_register("pmd.net.i40e.init");
if (i40e_logtype_init >= 0)
	rte_log_set_level(i40e_logtype_init, RTE_LOG_NOTICE);
i40e_logtype_driver = rte_log_register("pmd.net.i40e.driver");
if (i40e_logtype_driver >= 0)
	rte_log_set_level(i40e_logtype_driver, RTE_LOG_NOTICE);
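If rebuilding the driver is inconvenient, the same levels can also be raised at runtime after rte_eal_init() (a minimal sketch, assuming the rte_log_set_level_pattern() API available in 19.11):

#include <rte_log.h>

/* Matches both pmd.net.i40e.init and pmd.net.i40e.driver. */
rte_log_set_level_pattern("pmd.net.i40e.*", RTE_LOG_DEBUG);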
You can also turn on Rx/Tx debug logging if you need to debug the data path. In config/common_base, set:
CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=y
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=y
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=y
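Note that these are compile-time options, so DPDK and your application need to be rebuilt after editing config/common_base.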
Best Regards
Xiaoyun Li
From: Puneet Singh <Puneet.Singh@truminds.com>
Sent: Monday, March 16, 2020 21:02
To: Li, Xiaoyun <xiaoyun.li@intel.com>; users@dpdk.org
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
Hi Xiaoyun Li,
With the changes you suggested, testpmd works fine on my setup.
With my own application, the port is detected properly and the queue setup on the NIC does not give any error either.
But the application is not getting any packets on the rx burst polls.
Any suggestions on the best way to debug this, e.g. can some logs be enabled in the PMD to see which setting is wrong?
Thanks & Regards
Puneet Singh
From: Puneet Singh <Puneet.Singh@truminds.com>
Sent: 13 March 2020 10:05
To: Li, Xiaoyun <xiaoyun.li@intel.com>; users@dpdk.org
Subject: Re: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
Hi,
Thanks a lot for the information. I will try these steps with DPDK 19.11 and update.
Regards
Puneet Singh
________________________________
From: Li, Xiaoyun <xiaoyun.li@intel.com>
Sent: Friday, 13 March, 2020, 9:00 am
To: Puneet Singh; users@dpdk.org
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
Hi
It is because X722 doesn't support I40E_HW_FLAG_802_1AD_CAPABLE with the new firmware.
This was fixed in 20.02 with a base code patch update, commit 37b091c75b13d2f26359be9b77adbc33c55a7581.
If you have to use 19.11, you need to make the following change in eth_i40e_dev_init(), so that the flag is cleared for all X722 devices rather than only the SFP variant:
- if (hw->device_id == I40E_DEV_ID_SFP_X722)
+ if (hw->mac.type == I40E_MAC_X722)
hw->flags &= ~I40E_HW_FLAG_802_1AD_CAPABLE;
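With the flag cleared, i40e_vlan_tpid_set() should fall back to programming the TPID registers directly instead of issuing the set-switch-config admin command that the X722 firmware rejects with aq_err 14.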
Best Regards
Xiaoyun Li
> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Puneet Singh
> Sent: Wednesday, March 11, 2020 15:25
> To: users@dpdk.org
> Cc: Puneet Singh <Puneet.Singh@truminds.com>
> Subject: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
>
> Hi Everyone,
>
> I am trying to run test-pmd with the below mentioned NIC card but am getting the
> following error. Can anyone please help me resolve this issue?
>
> EAL: probe driver: 8086:37d3 net_i40e
> i40e_vlan_tpid_set(): Set switch config failed aq_err: 14
> eth_i40e_dev_init(): Failed to set the default outer VLAN ether type
> EAL: ethdev initialisation failed
> EAL: Requested device 0000:b5:00.0 cannot be used
> EAL: PCI device 0000:b5:00.1 on NUMA socket 0
> EAL: probe driver: 8086:37d3 net_i40e
> testpmd: No probed ethernet devices
> Interactive-mode selected
> Set mac packet forwarding mode
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
>
>
> SETUP Details:
>
> DPDK 19.11
>
> NIC:
>
> b5:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection X722 for 10GbE SFP+ [8086:37d3] (rev 04)
>
> i40e Driver and Firmware Version:
> i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.9.21
> [101414.147635] i40e: Copyright(c) 2013 - 2019 Intel Corporation.
> [101414.162733] i40e 0000:b5:00.1: fw 4.0.53196 api 1.8 nvm 4.10 0x80001a17 1.2145.0
> [101414.165982] i40e 0000:b5:00.1: MAC address: 08:3a:88:15:f0:7b
> [101414.166232] i40e 0000:b5:00.1: FW LLDP is disabled
> [101414.166289] i40e 0000:b5:00.1: DCB is not supported or FW LLDP is disabled
> [101414.166290] i40e 0000:b5:00.1: DCB init failed -64, disabled
>
>
> modinfo i40e
> filename: /lib/modules/3.10.0-957.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
> version: 2.9.21
> license: GPL
> description: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
> author: Intel Corporation, <e1000-devel@lists.sourceforge.net>
> retpoline: Y
> rhelversion: 7.6
> srcversion: FA2B2ABB57C568002DF6CFC
> alias: pci:v00008086d0000158Bsv*sd*bc*sc*i*
> alias: pci:v00008086d0000158Asv*sd*bc*sc*i*
> alias: pci:v00008086d000037D3sv*sd*bc*sc*i*
> alias: pci:v00008086d000037D2sv*sd*bc*sc*i*
> alias: pci:v00008086d000037D1sv*sd*bc*sc*i*
> alias: pci:v00008086d000037D0sv*sd*bc*sc*i*
> alias: pci:v00008086d000037CFsv*sd*bc*sc*i*
> alias: pci:v00008086d000037CEsv*sd*bc*sc*i*
> alias: pci:v00008086d00000D58sv*sd*bc*sc*i*
> alias: pci:v00008086d00000CF8sv*sd*bc*sc*i*
> alias: pci:v00008086d00001588sv*sd*bc*sc*i*
> alias: pci:v00008086d00001587sv*sd*bc*sc*i*
> alias: pci:v00008086d0000104Fsv*sd*bc*sc*i*
> alias: pci:v00008086d0000104Esv*sd*bc*sc*i*
> alias: pci:v00008086d000015FFsv*sd*bc*sc*i*
> alias: pci:v00008086d00001589sv*sd*bc*sc*i*
> alias: pci:v00008086d00001586sv*sd*bc*sc*i*
> alias: pci:v00008086d00001585sv*sd*bc*sc*i*
> alias: pci:v00008086d00001584sv*sd*bc*sc*i*
> alias: pci:v00008086d00001583sv*sd*bc*sc*i*
> alias: pci:v00008086d00001581sv*sd*bc*sc*i*
> alias: pci:v00008086d00001580sv*sd*bc*sc*i*
> alias: pci:v00008086d00001574sv*sd*bc*sc*i*
> alias: pci:v00008086d00001572sv*sd*bc*sc*i*
> depends: ptp
> vermagic: 3.10.0-957.el7.x86_64 SMP mod_unload modversions
> parm: debug:Debug level (0=none,...,16=all) (int)
>
>
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:b5:00.0 'Ethernet Connection X722 for 10GbE SFP+ 37d3' drv=uio_pci_generic unused=i40e,igb_uio,vfio-pci
>
> Network devices using kernel driver
> ===================================
> 0000:04:00.0 'I350 Gigabit Network Connection 1521' if=eno3 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic *Active*
> 0000:04:00.1 'I350 Gigabit Network Connection 1521' if=eno4 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:b5:00.1 'Ethernet Connection X722 for 10GbE SFP+ 37d3' if=eno2 drv=i40e unused=igb_uio,vfio-pci,uio_pci_generic
>
>
> Thanks & Regards
> Puneet Singh
>
>