DPDK patches and discussions
* Re: [dpdk-dev] [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
       [not found] <BMXPR01MB408812B77E42C9FBBD201DD5F1F90@BMXPR01MB4088.INDPRD01.PROD.OUTLOOK.COM>
@ 2020-03-16 13:11 ` Puneet Singh
       [not found] ` <3b14667cd1f44bb4986523b62a1957e2@intel.com>
  1 sibling, 0 replies; 3+ messages in thread
From: Puneet Singh @ 2020-03-16 13:11 UTC (permalink / raw)
  To: Li, Xiaoyun, dev

Hi Xiaoyun Li,

With the changes you suggested, testpmd works fine on my setup.
With my own application, the port is detected properly, and the queue setup on the NIC does not give any error either.
But the application is not getting any packets on its rte_eth_rx_burst() polls.
Any suggestions on the best way to debug this? For example, can some logs be enabled in the PMD to see which setting is wrong?
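
For reference, a minimal sketch of the rx burst poll in question (port_id and queue_id are illustrative; it assumes the port has already been configured and started):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	#define BURST_SIZE 32

	/* Poll one rx queue once; free whatever arrives and return the count. */
	static uint16_t
	poll_rx_once(uint16_t port_id, uint16_t queue_id)
	{
		struct rte_mbuf *bufs[BURST_SIZE];
		uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);

		for (uint16_t i = 0; i < nb_rx; i++)
			rte_pktmbuf_free(bufs[i]);
		return nb_rx;
	}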


Thanks & Regards
Puneet Singh



From: Puneet Singh <Puneet.Singh@truminds.com>
Sent: 13 March 2020 10:05
To: Li, Xiaoyun <xiaoyun.li@intel.com>; users@dpdk.org
Subject: Re: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic

Hi,

Thanks a lot for the information. I will try these steps with DPDK 19.11 and update.

Regards
Puneet Singh


________________________________
From: Li, Xiaoyun <xiaoyun.li@intel.com>
Sent: Friday, 13 March, 2020, 9:00 am
To: Puneet Singh; users@dpdk.org
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic

Hi,
This is because X722 doesn't support I40E_HW_FLAG_802_1AD_CAPABLE with new firmware.
This is fixed in 20.02 by a base code update (commit 37b091c75b13d2f26359be9b77adbc33c55a7581).
If you have to use 19.11, you need to make the following change in eth_i40e_dev_init(), so that the check matches on the MAC type and covers all X722 devices rather than only the SFP variant:
-       if (hw->device_id == I40E_DEV_ID_SFP_X722)
+       if (hw->mac.type == I40E_MAC_X722)
                 hw->flags &= ~I40E_HW_FLAG_802_1AD_CAPABLE;

Best Regards
Xiaoyun Li

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Puneet Singh
> Sent: Wednesday, March 11, 2020 15:25
> To: users@dpdk.org
> Cc: Puneet Singh <Puneet.Singh@truminds.com>
> Subject: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722
> Nic
>
> Hi Everyone,
>
> I am trying to run test-pmd with the NIC card mentioned below, but I am getting the
> following error. Can anyone please help me resolve this issue?
>
> EAL:   probe driver: 8086:37d3 net_i40e
> i40e_vlan_tpid_set(): Set switch config failed aq_err: 14
> eth_i40e_dev_init(): Failed to set the default outer VLAN ether type
> EAL: ethdev initialisation failed
> EAL: Requested device 0000:b5:00.0 cannot be used
> EAL: PCI device 0000:b5:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:37d3 net_i40e
> testpmd: No probed ethernet devices
> Interactive-mode selected
> Set mac packet forwarding mode
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
>
>
> SETUP Details :
>
> DPDK 19.11
>
> NIC :
>
> b5:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection X722 for 10GbE SFP+ [8086:37d3] (rev 04)
>
> i40e Driver and Firmware Version:
> i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.9.21
> [101414.147635] i40e: Copyright(c) 2013 - 2019 Intel Corporation.
> [101414.162733] i40e 0000:b5:00.1: fw 4.0.53196 api 1.8 nvm 4.10 0x80001a17 1.2145.0
> [101414.165982] i40e 0000:b5:00.1: MAC address: 08:3a:88:15:f0:7b
> [101414.166232] i40e 0000:b5:00.1: FW LLDP is disabled
> [101414.166289] i40e 0000:b5:00.1: DCB is not supported or FW LLDP is disabled
> [101414.166290] i40e 0000:b5:00.1: DCB init failed -64, disabled
>
>
> modinfo i40e
> filename:       /lib/modules/3.10.0-957.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
> version:        2.9.21
> license:        GPL
> description:    Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
> author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>
> retpoline:      Y
> rhelversion:    7.6
> srcversion:     FA2B2ABB57C568002DF6CFC
> alias:          pci:v00008086d0000158Bsv*sd*bc*sc*i*
> alias:          pci:v00008086d0000158Asv*sd*bc*sc*i*
> alias:          pci:v00008086d000037D3sv*sd*bc*sc*i*
> alias:          pci:v00008086d000037D2sv*sd*bc*sc*i*
> alias:          pci:v00008086d000037D1sv*sd*bc*sc*i*
> alias:          pci:v00008086d000037D0sv*sd*bc*sc*i*
> alias:          pci:v00008086d000037CFsv*sd*bc*sc*i*
> alias:          pci:v00008086d000037CEsv*sd*bc*sc*i*
> alias:          pci:v00008086d00000D58sv*sd*bc*sc*i*
> alias:          pci:v00008086d00000CF8sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001588sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001587sv*sd*bc*sc*i*
> alias:          pci:v00008086d0000104Fsv*sd*bc*sc*i*
> alias:          pci:v00008086d0000104Esv*sd*bc*sc*i*
> alias:          pci:v00008086d000015FFsv*sd*bc*sc*i*
> alias:          pci:v00008086d00001589sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001586sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001585sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001584sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001583sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001581sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001580sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001574sv*sd*bc*sc*i*
> alias:          pci:v00008086d00001572sv*sd*bc*sc*i*
> depends:        ptp
> vermagic:       3.10.0-957.el7.x86_64 SMP mod_unload modversions
> parm:           debug:Debug level (0=none,...,16=all) (int)
>
>
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:b5:00.0 'Ethernet Connection X722 for 10GbE SFP+ 37d3' drv=uio_pci_generic unused=i40e,igb_uio,vfio-pci
>
> Network devices using kernel driver
> ===================================
> 0000:04:00.0 'I350 Gigabit Network Connection 1521' if=eno3 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic *Active*
> 0000:04:00.1 'I350 Gigabit Network Connection 1521' if=eno4 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
> 0000:b5:00.1 'Ethernet Connection X722 for 10GbE SFP+ 37d3' if=eno2 drv=i40e unused=igb_uio,vfio-pci,uio_pci_generic
>
>
> Thanks & Regards
> Puneet Singh
>
>



* Re: [dpdk-dev] [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
       [not found] ` <3b14667cd1f44bb4986523b62a1957e2@intel.com>
@ 2020-03-17  6:59   ` Puneet Singh
       [not found]     ` <BMXPR01MB40883A263BC1F2359E224101F1F70@BMXPR01MB4088.INDPRD01.PROD.OUTLOOK.COM>
  0 siblings, 1 reply; 3+ messages in thread
From: Puneet Singh @ 2020-03-17  6:59 UTC (permalink / raw)
  To: Li, Xiaoyun, users, dev

Hi Xiaoyun Li,

Following is the difference between the rte_eth_conf used by testpmd and by my application. Please let us know if any of these parameters is critical.

testpmd (rte_eth_conf)

e = {link_speeds = 0, rxmode = {mq_mode = ETH_MQ_RX_NONE, max_rx_pkt_len = 1518, max_lro_pkt_size = 0, split_hdr_size = 0, offloads = 0, reserved_64s = {0, 0},
    reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = ETH_MQ_TX_NONE, offloads = 65536, pvid = 0, hw_vlan_reject_tagged = 0 '\000', hw_vlan_reject_untagged = 0 '\000',
    hw_vlan_insert_pvid = 0 '\000', reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0, rx_adv_conf = {rss_conf = {rss_key = 0x0,
      rss_key_len = 0 '\000', rss_hf = 0}, vmdq_dcb_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
      nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf = {
      nb_tcs = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000',
      default_pool = 0 '\000', enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}}},
  tx_adv_conf = {vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0),
      dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown: 0)}}, dcb_capability_en = 0, fdir_conf = {mode = RTE_FDIR_MODE_NONE,
    pballoc = RTE_FDIR_PBALLOC_64K, status = RTE_FDIR_REPORT_STATUS, drop_queue = 127 '\177', mask = {vlan_tci_mask = 65519, ipv4_mask = {src_ip = 4294967295,
        dst_ip = 4294967295, tos = 0 '\000', ttl = 0 '\000', proto = 0 '\000'}, ipv6_mask = {src_ip = {4294967295, 4294967295, 4294967295, 4294967295}, dst_ip = {
          4294967295, 4294967295, 4294967295, 4294967295}, tc = 0 '\000', proto = 0 '\000', hop_limits = 0 '\000'}, src_port_mask = 65535, dst_port_mask = 65535,
      mac_addr_byte_mask = 255 '\377', tunnel_id_mask = 4294967295, tunnel_type_mask = 1 '\001'}, flex_conf = {nb_payloads = 0, nb_flexmasks = 0, flex_set = {{
          type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {
          type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {
          type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {
          type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}}, flex_mask = {{
          flow_type = 0, mask = '\000' <repeats 15 times>} <repeats 24 times>}}}, intr_conf = {lsc = 1, rxq = 0, rmv = 0}}

my app (rte_eth_conf)
e = {link_speeds = 0, rxmode = {mq_mode = ETH_MQ_RX_NONE, max_rx_pkt_len = 1518, max_lro_pkt_size = 0, split_hdr_size = 0, offloads = 0, reserved_64s = {0, 0},
    reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = ETH_MQ_TX_NONE, offloads = 32774, pvid = 0, hw_vlan_reject_tagged = 0 '\000', hw_vlan_reject_untagged = 0 '\000',
    hw_vlan_insert_pvid = 0 '\000', reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0, rx_adv_conf = {rss_conf = {rss_key = 0x0,
      rss_key_len = 0 '\000', rss_hf = 0}, vmdq_dcb_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
      nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf = {
      nb_tcs = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000',
      default_pool = 0 '\000', enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map = {{vlan_id = 0, pools = 0} <repeats 64 times>}}},
  tx_adv_conf = {vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0),
      dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown: 0)}}, dcb_capability_en = 0, fdir_conf = {mode = RTE_FDIR_MODE_NONE,
    pballoc = RTE_FDIR_PBALLOC_64K, status = RTE_FDIR_NO_REPORT_STATUS, drop_queue = 0 '\000', mask = {vlan_tci_mask = 0, ipv4_mask = {src_ip = 0, dst_ip = 0,
        tos = 0 '\000', ttl = 0 '\000', proto = 0 '\000'}, ipv6_mask = {src_ip = {0, 0, 0, 0}, dst_ip = {0, 0, 0, 0}, tc = 0 '\000', proto = 0 '\000',
        hop_limits = 0 '\000'}, src_port_mask = 0, dst_port_mask = 0, mac_addr_byte_mask = 0 '\000', tunnel_id_mask = 0, tunnel_type_mask = 0 '\000'}, flex_conf = {
      nb_payloads = 0, nb_flexmasks = 0, flex_set = {{type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN,
          src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {
            0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {
            0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {0 <repeats 16 times>}}, {type = RTE_ETH_PAYLOAD_UNKNOWN, src_offset = {
            0 <repeats 16 times>}}}, flex_mask = {{flow_type = 0, mask = '\000' <repeats 15 times>} <repeats 24 times>}}}, intr_conf = {lsc = 0, rxq = 0, rmv = 0}}
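
For reference, the two differing txmode.offloads values above decode as follows against the DEV_TX_OFFLOAD_* flags (a sketch assuming the DPDK 19.11 header values); the other visible differences are in fdir_conf and in intr_conf.lsc:

	/* testpmd: txmode.offloads = 65536 = 0x10000 */
	uint64_t testpmd_tx_offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE;	/* 0x10000 */

	/* my app: txmode.offloads = 32774 = 0x8006 */
	uint64_t app_tx_offloads = DEV_TX_OFFLOAD_MULTI_SEGS		/* 0x8000 */
				 | DEV_TX_OFFLOAD_UDP_CKSUM		/* 0x0004 */
				 | DEV_TX_OFFLOAD_IPV4_CKSUM;		/* 0x0002 */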


Regards
Puneet Singh

From: Li, Xiaoyun <xiaoyun.li@intel.com>
Sent: 17 March 2020 07:34
To: Puneet Singh <Puneet.Singh@truminds.com>; users@dpdk.org
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic

Hi,
For the driver, you can set the log level to debug. Replace RTE_LOG_NOTICE with RTE_LOG_DEBUG in the following code.
This will show all PMD_INIT_LOG() and PMD_DRV_LOG() output:
	i40e_logtype_init = rte_log_register("pmd.net.i40e.init");
	if (i40e_logtype_init >= 0)
		rte_log_set_level(i40e_logtype_init, RTE_LOG_NOTICE);
	i40e_logtype_driver = rte_log_register("pmd.net.i40e.driver");
	if (i40e_logtype_driver >= 0)
		rte_log_set_level(i40e_logtype_driver, RTE_LOG_NOTICE);

And you can turn on the Rx/Tx debug logging if you need to debug Tx/Rx.
In config/common_base, set:
CONFIG_RTE_LIBRTE_I40E_DEBUG_RX=y
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX=y
CONFIG_RTE_LIBRTE_I40E_DEBUG_TX_FREE=y
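
As an alternative sketch that avoids editing the driver source (assuming DPDK 19.11, where rte_log_set_level_pattern() is available), the same log types can be raised to debug at runtime, or equivalently with the EAL option --log-level=pmd.net.i40e.*:debug:

	#include <rte_log.h>

	/* Illustrative helper: raise all i40e PMD log types to debug.
	 * Call after rte_eal_init(), once the PMD has registered its log types. */
	static void
	enable_i40e_debug_logs(void)
	{
		rte_log_set_level_pattern("pmd.net.i40e.*", RTE_LOG_DEBUG);
	}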

Best Regards
Xiaoyun Li


* Re: [dpdk-dev] [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic
       [not found]       ` <BMXPR01MB40882E046ED7B42FA3DE53DBF1F40@BMXPR01MB4088.INDPRD01.PROD.OUTLOOK.COM>
@ 2020-03-26 17:57         ` Puneet Singh
  0 siblings, 0 replies; 3+ messages in thread
From: Puneet Singh @ 2020-03-26 17:57 UTC (permalink / raw)
  To: Li, Xiaoyun, users, dev

Hi Everyone,

I am using the X722 NIC with DPDK 19.11, after applying the single-line port-detection patch that was advised earlier.
The port gets detected properly.
The NIC stats via rte_eth_stats_get() report that packets are arriving at the NIC, and no packets are being dropped for lack of mbufs.
But the rte_eth_rx_burst() calls in the application do not deliver any packets to user space.
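
For reference, a minimal sketch of that stats check (port_id is illustrative; assumes the port is started):

	#include <inttypes.h>
	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Print the rx counters relevant here: ipackets (packets received),
	 * imissed (dropped by hardware, e.g. rx queues full) and rx_nombuf
	 * (rx failures due to mbuf allocation). */
	static void
	print_rx_stats(uint16_t port_id)
	{
		struct rte_eth_stats st;

		if (rte_eth_stats_get(port_id, &st) == 0)
			printf("ipackets=%" PRIu64 " imissed=%" PRIu64
			       " rx_nombuf=%" PRIu64 "\n",
			       st.ipackets, st.imissed, st.rx_nombuf);
	}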

Has anyone achieved successful packet I/O with the X722 NIC, and if so, with which OS and DPDK release? If any tricks are needed, kindly advise. My entire use case, which works normally with X520, vmxnet3 and virtio, is blocked on the X722 NIC.

Regards
Puneet
From: Puneet Singh
Sent: 19 March 2020 13:35
To: 'Li, Xiaoyun' <xiaoyun.li@intel.com>; 'users@dpdk.org' <users@dpdk.org>; 'dev@dpdk.org' <dev@dpdk.org>
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic

Hi Everyone,

I am using the X722 NIC with DPDK 19.11.

Testpmd works fine but my application does not (the port is detected but data rx/tx is not working).

I have reconciled the exact configs that are passed to rte_eth_dev_configure(), rte_eth_tx_queue_setup() and rte_eth_rx_queue_setup() between testpmd and my application.

I do notice that my application calls rte_eth_rx_burst() with 1 as the maximum number of packets to receive at a time, while testpmd uses MAX_PKT_BURST=512.

I changed MAX_PKT_BURST to 1 in testpmd, and testpmd also runs into problems, e.g. I cannot issue the stop command.

I also notice the following difference in the logs when using testpmd with MAX_PKT_BURST=512 versus MAX_PKT_BURST=1:

With MAX_PKT_BURST=1
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 41450, etype_used = 189, mac_etype_free = 0, etype_free = 0

With MAX_PKT_BURST=512
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 37994, etype_used = 189, mac_etype_free = 0, etype_free = 0

It should be noted that the MAX_PKT_BURST parameter also indirectly controls the number of mbufs created in the packet pool, so how is it changing the above values?

Further, in my application the number of mbufs is allocated independently; there, the following log comes out:
i40e_ethertype_filter_restore(): Ethertype filter: mac_etype_used = 0, etype_used = 1, mac_etype_free = 0, etype_free = 0


Regards
Puneet Singh

From: Puneet Singh
Sent: 18 March 2020 13:25
To: 'Li, Xiaoyun' <xiaoyun.li@intel.com>; 'users@dpdk.org' <users@dpdk.org>; 'dev@dpdk.org' <dev@dpdk.org>
Subject: RE: [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic

Hi Team,

The only difference I could see:

TestPmd:  Ethertype filter: mac_etype_used = 37994, etype_used = 189

My application:  Ethertype filter: mac_etype_used = 0, etype_used = 1

Can anyone tell me the significance of these fields for the i40e NIC, how to configure them correctly from an application, and what in testpmd triggers the values of 37994 and 189 that do not show up with my application?



Thanks & Regards
Puneet Singh


end of thread, other threads:[~2020-03-26 21:34 UTC | newest]

Thread overview: 3+ messages
     [not found] <BMXPR01MB408812B77E42C9FBBD201DD5F1F90@BMXPR01MB4088.INDPRD01.PROD.OUTLOOK.COM>
2020-03-16 13:11 ` [dpdk-dev] [dpdk-users] Issue while running DPDK19.11 test-pmd with Intel X722 Nic Puneet Singh
     [not found] ` <3b14667cd1f44bb4986523b62a1957e2@intel.com>
2020-03-17  6:59   ` Puneet Singh
     [not found]     ` <BMXPR01MB40883A263BC1F2359E224101F1F70@BMXPR01MB4088.INDPRD01.PROD.OUTLOOK.COM>
     [not found]       ` <BMXPR01MB40882E046ED7B42FA3DE53DBF1F40@BMXPR01MB4088.INDPRD01.PROD.OUTLOOK.COM>
2020-03-26 17:57         ` Puneet Singh
