DPDK patches and discussions
From: JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
To: "Zeng, ZhichaoX" <zhichaox.zeng@intel.com>,
	Jason.Hsu <Jason.Hsu@advantech.com.tw>,
	"Chang, Howard C" <howard.c.chang@intel.com>
Cc: Amy.Shih <Amy.Shih@advantech.com.tw>,
	Leo66.Wang <Leo66.Wang@advantech.com.tw>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Cui, KaixinX" <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down
Date: Fri, 15 Mar 2024 02:23:09 +0000	[thread overview]
Message-ID: <JH0PR02MB69032868E801225556FE6825BD282@JH0PR02MB6903.apcprd02.prod.outlook.com> (raw)
In-Reply-To: <TYZPR02MB8084D756393DF7B0B69507F1C7242@TYZPR02MB8084.apcprd02.prod.outlook.com>



Hi Zhichao,
Do you have any updates?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There is other discussion in Intel IPS Case No. 00867743
(an Intel-internal sync between this DPDK forum thread and the DPDK IPS case discussion).
Regarding "the firmware issue", we need you to sync with the Intel IPS ticket (#00867743) owners, Mao and Howard.
We really need your help here. Thanks a lot!

Mao
Intel Technical Specialist
[mhsieh 03/14/2024 07:37:47]
Dear Customer,
Does Advantech see a similar issue with the kernel driver?
Since I have not seen such an FW issue, please check with the person who provided the DPDK patch and follow up on whether the issue is still seen with the v3.36 NVM.

He (Mao) said he has NOT heard of such an FW issue. Could you sync what you know with Mao or Howard?
We hope this will let the issue be CLOSED more quickly.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the meantime, we will arrange to run the same experiment on LEK v3.36, per Howard and Mao's recent suggestion.

Thanks a lot!

Best Regards,
JackyCT.Chen
x86 Software | Cloud-IoT Group | Advantech Co., Ltd.
02-2792-7818 Ext. 1194


From: Jason.Hsu <Jason.Hsu@advantech.com.tw>
Sent: Monday, March 11, 2024 4:26 PM
To: Chang, Howard C <howard.c.chang@intel.com>; Zeng, ZhichaoX <zhichaox.zeng@intel.com>; JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
Cc: Amy.Shih <Amy.Shih@advantech.com.tw>; Leo66.Wang <Leo66.Wang@advantech.com.tw>; dev@dpdk.org; Cui, KaixinX <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down

Hi Howard,

The IPS case number is 00867743. Kindly help check it and share the related schedule info for reference. Thanks.



Best regards,
Jason Hsu 許文偉
Product Manager | ICVG-ENPD | Advantech Co., Ltd.
Tel: +886 2 2792-7818 ext.1602 | Mobile: +886 920-125-625 | Fax: +886 2 2794-7336
www.advantech.com | jason.hsu@advantech.com.tw
[Advantech ENPD] <https://campaign.advantech.online/en/Cloud-IoT/uCPE/>

From: Jason.Hsu
Sent: Thursday, March 7, 2024 5:29 PM
To: Chang, Howard C <howard.c.chang@intel.com>; Zeng, ZhichaoX <zhichaox.zeng@intel.com>; JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
Cc: Amy.Shih <Amy.Shih@advantech.com.tw>; Leo66.Wang <Leo66.Wang@advantech.com.tw>; dev@dpdk.org; Cui, KaixinX <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down

Hi Howard,

As discussed, this issue needs an IPS ticket so we can keep following up on it, so we will open one and update the case number soon. Thanks.

Best regards,
Jason Hsu 許文偉
Product Manager | ICVG-ENPD | Advantech Co., Ltd.
Tel: +886 2 2792-7818 ext.1602 | Mobile: +886 920-125-625 | Fax: +886 2 2794-7336
www.advantech.com | jason.hsu@advantech.com.tw
[Advantech ENPD] <https://campaign.advantech.online/en/Cloud-IoT/uCPE/>

From: Jason.Hsu
Sent: Tuesday, March 5, 2024 10:07 AM
To: Chang, Howard C <howard.c.chang@intel.com>; Zeng, ZhichaoX <zhichaox.zeng@intel.com>; JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
Cc: Amy.Shih <Amy.Shih@advantech.com.tw>; Leo66.Wang <Leo66.Wang@advantech.com.tw>; dev@dpdk.org; Cui, KaixinX <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down

Hi Howard,

Could you help check this DPDK testing issue and comment on the estimated schedule for the next FW release that will fix it?

Best regards,
Jason Hsu 許文偉
Product Manager | ICVG-ENPD | Advantech Co., Ltd.
Tel: +886 2 2792-7818 ext.1602 | Mobile: +886 920-125-625 | Fax: +886 2 2794-7336
www.advantech.com | jason.hsu@advantech.com.tw
[Advantech ENPD] <https://campaign.advantech.online/en/Cloud-IoT/uCPE/>

From: JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
Sent: Friday, March 1, 2024 6:25 PM
To: Zeng, ZhichaoX <zhichaox.zeng@intel.com>
Cc: Amy.Shih <Amy.Shih@advantech.com.tw>; Jason.Hsu <Jason.Hsu@advantech.com.tw>; Leo66.Wang <Leo66.Wang@advantech.com.tw>; dev@dpdk.org; Cui, KaixinX <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down

Hi Zhichao,
Do you have any updates?

Q: You said, "We are tracking this issue, it is a firmware issue that has been reported to the hardware team and the fix will take some time."
Could you describe this in more detail?
2-1 Does the "firmware issue" mean there is a problem in the LEK firmware version 3.26 0x8001b733 1.3429.0 of the Intel E823 NIC?
2-2 When will a new E823 LEK that fixes the link status/speed issue be released?
2-3 Where can we get the new E823 LEK that fixes the link status/speed issue? (e.g., the Content ID # on Intel RDC)
We look forward to receiving your reply. 😊

Thanks for your help!

Best Regards,
JackyCT.Chen
x86 Software | Cloud-IoT Group | Advantech Co., Ltd.
02-2792-7818 Ext. 1194

From: JackyCT.Chen
Sent: Friday, February 16, 2024 4:26 PM
To: Zeng, ZhichaoX <zhichaox.zeng@intel.com>
Cc: Amy.Shih <Amy.Shih@advantech.com.tw>; Jason.Hsu <Jason.Hsu@advantech.com.tw>; Leo66.Wang <Leo66.Wang@advantech.com.tw>; dev@dpdk.org; Cui, KaixinX <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down

Hi Zhichao,

  1.  The workaround in the ICE PMD that changes "no wait" to "wait_to_complete" seems to work.

When we run testpmd on E823 ports 0/1, we get "link status: up, link speed: 10 Gbps" as below (see the attached file for details):
testpmd> show port summary all
Number of available ports: 2
Port MAC Address       Name         Driver         Status   Link
0    00:00:00:00:01:00 89:00.0      net_ice        up       10 Gbps

1    00:00:00:00:01:01 89:00.1      net_ice        up       10 Gbps


  2.  You said, "We are tracking this issue, it is a firmware issue that has been reported to the hardware team and the fix will take some time."

Could you describe this in more detail?
2-1 Does the "firmware issue" mean there is a problem in the LEK firmware version 3.26 0x8001b733 1.3429.0 of the Intel E823 NIC?
2-2 When will a new E823 LEK that fixes the link status/speed issue be released?
2-3 Where can we get the new E823 LEK that fixes the link status/speed issue? (e.g., the Content ID # on Intel RDC)
We look forward to receiving your reply. 😊


Thanks for your help!


Best Regards,
JackyCT.Chen
x86 Software | Cloud-IoT Group | Advantech Co., Ltd.
02-2792-7818 Ext. 1194

From: Zeng, ZhichaoX <zhichaox.zeng@intel.com>
Sent: Tuesday, February 6, 2024 11:24 AM
To: JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
Cc: Amy.Shih <Amy.Shih@advantech.com.tw>; Jason.Hsu <Jason.Hsu@advantech.com.tw>; Leo66.Wang <Leo66.Wang@advantech.com.tw>; dev@dpdk.org; Cui, KaixinX <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down

Hi JackyCT.Chen:

We are tracking this issue. It is a firmware issue that has been reported to the hardware team, and the fix will take some time.

There is a workaround in the ICE PMD: change the "no wait" mode to "wait_to_complete" mode when ice_interrupt_handler() updates the link status in drivers/net/ice/ice_ethdev.c:

#ifdef ICE_LSE_SPT
		if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
			PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
			ice_handle_aq_msg(dev);
		}
#else
		if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
			PMD_DRV_LOG(INFO, "OICR: link state change event");
-			ret = ice_link_update(dev, 0);
+			ret = ice_link_update(dev, 1);
			if (!ret)
				rte_eth_dev_callback_process
					(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
		}
#endif
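
For reference, the same distinction exists in the application-facing ethdev API:
rte_eth_link_get() waits for the link status to settle (up to a timeout), while
rte_eth_link_get_nowait() returns the current status immediately. Below is a
minimal, untested sketch (the check_link() helper is ours, not part of any
existing code) that queries a port both ways, mirroring the wait_to_complete
argument changed above:

#include <stdio.h>
#include <rte_ethdev.h>

/* Query the link twice: first without waiting (this may still report
 * "down" while the link is settling), then with waiting, which is the
 * application-level analogue of ice_link_update(dev, 1) above. */
static void
check_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u (no wait): status=%s speed=%u Mbps\n",
		       port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
		       link.link_speed);

	if (rte_eth_link_get(port_id, &link) == 0) /* waits for completion */
		printf("port %u (waited):  status=%s speed=%u Mbps\n",
		       port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
		       link.link_speed);
}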



Best Regards
Zhichao

From: JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
Sent: Wednesday, January 31, 2024 6:53 PM
To: Yang, Qiming <qiming.yang@intel.com>; dev@dpdk.org
Cc: Shih, Amy <amy.shih@advantech.com.tw>; Hsu, Jason <jason.hsu@advantech.com.tw>; Wang, Leo <leo66.wang@advantech.com.tw>
Subject: RE: DPDK testpmd with E823 link status is down

Hi Qiming & dpdk dev team:

This is JackyCT.Chen from Advantech. We have a question about running DPDK loopback testpmd on the E823.
Could you please give us some advice?

We bind the E823 and X710 devices to the vfio-pci driver and execute DPDK testpmd (see the attached files for details).

However, both E823 ports report "link status: down, link speed: None", while we expected "link status: up, link speed: 10 Gbps".
Do you have any suggestions?
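
(For context on how an application observes link state under DPDK: PMDs publish
RTE_ETH_EVENT_INTR_LSC through rte_eth_dev_callback_process(), and an application
subscribes to it with rte_eth_dev_callback_register(). A rough, untested sketch
of the subscriber side follows; the lsc_event_cb name and its printf body are
illustrative only.)

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Invoked on RTE_ETH_EVENT_INTR_LSC; the port must be configured with
 * dev_conf.intr_conf.lsc = 1 for the PMD to deliver these events. */
static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	if (rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u link is %s\n", port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");
	return 0;
}

static void
subscribe_lsc(uint16_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      lsc_event_cb, NULL);
}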

Testing procedure & result:
Platform : Moro City Reference Platform ICX-D ~ CRB

- On-Board : E823
- Ext-PCIE CARD : PCIE-2230NP-00A1E (Intel X710)
OS/Kernel :  Debian 12  / kernel 6.1.0-16-amd64 x86_64
DPDK : DPDK 24.03.0-rc0 (from trunk build)
NIC_BDF_INFO :
CRB EXT-PCIE CARD : X710
Port : 10G * 4
firmware-version: 7.10 0x80007b33 255.65535.255

CRB On-BOARD : E823
Port Option : 4x10-4x2.5
firmware-version: 3.26 0x8001b733 1.3429.0

BDF = 91:00.0 (X710) --- BDF = 89:00.0 (E823)
BDF = 91:00.1 (X710) --- BDF = 89:00.1 (E823)
Prepare and config :
root@5-efi:~# modprobe uio
root@5-efi:~# modprobe vfio-pci
root@5-efi:~# echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
root@5-efi:~# mkdir -p /mnt/huge
root@5-efi:~# mount -t hugetlbfs nodev /mnt/huge
root@5-efi:~# dpdk-devbind.py -b vfio-pci 91:00.0
root@5-efi:~# dpdk-devbind.py -b vfio-pci 91:00.1
root@5-efi:~# dpdk-devbind.py -b vfio-pci 89:00.0
root@5-efi:~# dpdk-devbind.py -b vfio-pci 89:00.1

LOG :
root@5-efi:~# dpdk-testpmd -c 0xff -n 4 -a 89:00.0 -a 89:00.1 --socket-mem=256 -- -i --mbcache=512 --socket-num=0 --coremask=0xc --nb-cores=2 --rxq=1 --txq=1 --portmask=0xf --rxd=4096 --rxfreet=128 --rxpt=128 --rxht=8 --rxwt=0 --txd=4096 --txfreet=128 --txpt=128 --txht=0 --txwt=0 --burst=64 --txrst=64 --rss-ip -a
EAL: Detected CPU lcores: 24
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_ice (8086:188a) device: 0000:89:00.0 (socket 0)
ice_dev_init(): Failed to read device serial number

ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (double VLAN mode)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_ice (8086:188a) device: 0000:89:00.1 (socket 0)
ice_dev_init(): Failed to read device serial number

ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (double VLAN mode)
TMTY: TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
previous number of forwarding cores 1 - changed to number of configured cores 2
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=262144, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
Port 0: 00:00:00:00:01:00
Configuring Port 1 (socket 0)
ice_set_rx_function(): Using AVX2 Vector Rx (port 1).
Port 1: 00:00:00:00:01:01
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 3 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=64
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=128
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=128
      TX threshold registers: pthresh=128 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=64
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=128
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=128
      TX threshold registers: pthresh=128 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=64
testpmd>
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 442827099  RX-missed: 0          RX-bytes:  26569625172
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 443292288  TX-errors: 0          TX-bytes:  26597536896

  Throughput (since last show)
  Rx-pps:     14390795          Rx-bps:   6907582048
  Tx-pps:     14405470          Tx-bps:   6914626456
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 443293641  RX-missed: 0          RX-bytes:  26597617500
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 442827661  TX-errors: 0          TX-bytes:  26569658892

  Throughput (since last show)
  Rx-pps:     14405477          Rx-bps:   6914629232
  Tx-pps:     14390795          Tx-bps:   6907581696
  ############################################################################
testpmd> show port summary all
Number of available ports: 2
Port MAC Address       Name         Driver         Status   Link
0    00:00:00:00:01:00 89:00.0      net_ice        down     None
1    00:00:00:00:01:01 89:00.1      net_ice        down     None
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 2267795378 RX-missed: 0          RX-bytes:  136067721784
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 2270213831 TX-errors: 0          TX-bytes:  136212829092

  Throughput (since last show)
  Rx-pps:     14385293          Rx-bps:   6904940896
  Tx-pps:     14400690          Tx-bps:   6912331240
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 2270215290 RX-missed: 0          RX-bytes:  136212916568
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 2267796060 TX-errors: 0          TX-bytes:  136067762768

  Throughput (since last show)
  Rx-pps:     14400690          Rx-bps:   6912331344
  Tx-pps:     14385293          Tx-bps:   6904941024
  ############################################################################



Thanks!


Best Regards,
JackyCT.Chen
x86 Software | Cloud-IoT Group | Advantech Co., Ltd.
02-2792-7818 Ext. 1194



