From: "Zeng, ZhichaoX" <zhichaox.zeng@intel.com>
To: "Chen, Jacky" <jackyct.chen@advantech.com.tw>
Cc: "Shih, Amy" <amy.shih@advantech.com.tw>,
	"Hsu, Jason" <jason.hsu@advantech.com.tw>,
	"Wang, Leo" <leo66.wang@advantech.com.tw>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Cui, KaixinX" <kaixinx.cui@intel.com>
Subject: RE: DPDK testpmd with E823 link status is down
Date: Tue, 6 Feb 2024 03:24:19 +0000
Message-ID: <CO6PR11MB5602033FE3A079F16E7C7328F1462@CO6PR11MB5602.namprd11.prod.outlook.com>
In-Reply-To: <JH0PR02MB6903B289006C8A0223505897BD7C2@JH0PR02MB6903.apcprd02.prod.outlook.com>


Hi JackyCT.Chen:

We are tracking this issue. It is a firmware issue that has been reported to the hardware team, and the fix will take some time.

In the meantime, there is a workaround in the ICE PMD: switch the link update from "no wait" to "wait to complete" mode where ice_interrupt_handler() updates the link status in drivers/net/ice/ice_ethdev.c:

#ifdef ICE_LSE_SPT
	if (int_fw_ctl & PFINT_FW_CTL_INTEVENT_M) {
		PMD_DRV_LOG(INFO, "FW_CTL: link state change event");
		ice_handle_aq_msg(dev);
	}
#else
	if (oicr & PFINT_OICR_LINK_STAT_CHANGE_M) {
		PMD_DRV_LOG(INFO, "OICR: link state change event");
-		ret = ice_link_update(dev, 0);
+		ret = ice_link_update(dev, 1);
		if (!ret)
			rte_eth_dev_callback_process
				(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
	}
#endif
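
For context (my explanation, not part of the patch itself): the second argument of ice_link_update() is wait_to_complete. With 0 the driver reads the link state once, which can race the firmware event that raised the interrupt; with 1 it re-reads the link for a bounded time until the state settles. Applications see the same split through the ethdev API; below is a minimal hypothetical helper using the standard rte_eth_link_get() (wait) variant, a sketch rather than anything from this thread:

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper, not from the patch above: query the link in
 * "wait" mode, which reaches the PMD as wait_to_complete = 1. */
static int
check_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* rte_eth_link_get() waits for the link state to settle;
	 * rte_eth_link_get_nowait() would return the cached state. */
	if (rte_eth_link_get(port_id, &link) != 0)
		return -1;

	printf("port %u: %s, %u Mbps\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed);
	return link.link_status == RTE_ETH_LINK_UP ? 0 : -1;
}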


Best Regards
Zhichao

From: JackyCT.Chen <JackyCT.Chen@advantech.com.tw>
Sent: Wednesday, January 31, 2024 6:53 PM
To: Yang, Qiming <qiming.yang@intel.com>; dev@dpdk.org
Cc: Shih, Amy <amy.shih@advantech.com.tw>; Hsu, Jason <jason.hsu@advantech.com.tw>; Wang, Leo <leo66.wang@advantech.com.tw>
Subject: RE: DPDK testpmd with E823 link status is down

Hi Qiming & dpdk dev team:

This is JackyCT.Chen from Advantech. We have a question about an E823 DPDK loopback testpmd run.
Could you please give us some advice?

We bound the E823 and X710 devices to the vfio-pci driver and executed DPDK testpmd (please see the attached files for details).

However, both E823 ports report "link status : down" and "link speed : None", while we expected "link status : up" and "link speed : 10 Gbps".
Do you have any suggestions?

Testing procedure & result:
Platform : Moro City Reference Platform ICX-D ~ CRB
* On-Board : E823
* Ext-PCIE CARD : PCIE-2230NP-00A1E ( Intel X710 )
OS/Kernel :  Debian 12  / kernel 6.1.0-16-amd64 x86_64
DPDK : DPDK 24.03.0-rc0 (from trunk build)
NIC_BDF_INFO :
CRB EXT-PCIE CARD : X710
Port : 10G * 4
firmware-version: 7.10 0x80007b33 255.65535.255

CRB On-BOARD : E823
Port Option : 4x10-4x2.5
firmware-version: 3.26 0x8001b733 1.3429.0

Connections (X710 port --- E823 port):
91:00.0 --- 89:00.0
91:00.1 --- 89:00.1
Prepare and config :
root@5-efi:~# modprobe uio
root@5-efi:~# modprobe vfio-pci
root@5-efi:~# echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
root@5-efi:~# mkdir -p /mnt/huge
root@5-efi:~# mount -t hugetlbfs nodev /mnt/huge
root@5-efi:~# dpdk-devbind.py -b vfio-pci 91:00.0
root@5-efi:~# dpdk-devbind.py -b vfio-pci 91:00.1
root@5-efi:~# dpdk-devbind.py -b vfio-pci 89:00.0
root@5-efi:~# dpdk-devbind.py -b vfio-pci 89:00.1
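
As a sanity check (an addition here, not part of the original report), the binding can be confirmed before launching testpmd:

root@5-efi:~# dpdk-devbind.py --status

All four ports should then be listed under "Network devices using DPDK-compatible driver".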

LOG :
root@5-efi:~# dpdk-testpmd -c 0xff -n 4 -a 89:00.0 -a 89:00.1 --socket-mem=256 -- -i --mbcache=512 --socket-num=0 --coremask=0xc --nb-cores=2 --rxq=1 --txq=1 --portmask=0xf --rxd=4096 --rxfreet=128 --rxpt=128 --rxht=8 --rxwt=0 --txd=4096 --txfreet=128 --txpt=128 --txht=0 --txwt=0 --burst=64 --txrst=64 --rss-ip -a
EAL: Detected CPU lcores: 24
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_ice (8086:188a) device: 0000:89:00.0 (socket 0)
ice_dev_init(): Failed to read device serial number

ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (double VLAN mode)
EAL: Ignore mapping IO port bar(1)
EAL: Ignore mapping IO port bar(4)
EAL: Probe PCI driver: net_ice (8086:188a) device: 0000:89:00.1 (socket 0)
ice_dev_init(): Failed to read device serial number

ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (double VLAN mode)
TMTY: TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
previous number of forwarding cores 1 - changed to number of configured cores 2
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=262144, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
Port 0: 00:00:00:00:01:00
Configuring Port 1 (socket 0)
ice_set_rx_function(): Using AVX2 Vector Rx (port 1).
Port 1: 00:00:00:00:01:01
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 3 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=64
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=128
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=128
      TX threshold registers: pthresh=128 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=64
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=128
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=128
      TX threshold registers: pthresh=128 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=64
testpmd>
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 442827099  RX-missed: 0          RX-bytes:  26569625172
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 443292288  TX-errors: 0          TX-bytes:  26597536896

  Throughput (since last show)
  Rx-pps:     14390795          Rx-bps:   6907582048
  Tx-pps:     14405470          Tx-bps:   6914626456
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 443293641  RX-missed: 0          RX-bytes:  26597617500
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 442827661  TX-errors: 0          TX-bytes:  26569658892

  Throughput (since last show)
  Rx-pps:     14405477          Rx-bps:   6914629232
  Tx-pps:     14390795          Tx-bps:   6907581696
  ############################################################################
testpmd> show port summary all
Number of available ports: 2
Port MAC Address       Name         Driver         Status   Link
0    00:00:00:00:01:00 89:00.0      net_ice        down     None
1    00:00:00:00:01:01 89:00.1      net_ice        down     None
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 2267795378 RX-missed: 0          RX-bytes:  136067721784
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 2270213831 TX-errors: 0          TX-bytes:  136212829092

  Throughput (since last show)
  Rx-pps:     14385293          Rx-bps:   6904940896
  Tx-pps:     14400690          Tx-bps:   6912331240
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 2270215290 RX-missed: 0          RX-bytes:  136212916568
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 2267796060 TX-errors: 0          TX-bytes:  136067762768

  Throughput (since last show)
  Rx-pps:     14400690          Rx-bps:   6912331344
  Tx-pps:     14385293          Tx-bps:   6904941024
  ############################################################################
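
Note that the stats above show roughly 14.4 Mpps flowing in both directions even though "show port summary all" reports both links down, which suggests the data path is working and only the link status reporting is stale. Inside testpmd, "show port info <port>" (a suggestion on my part, not a command from the original log) prints the link status, speed, duplex, and autonegotiation state as reported by the PMD:

testpmd> show port info 0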



Thanks!


Best Regards,
JackyCT.Chen
x86 Software | Cloud-IoT Group | Advantech Co., Ltd.
02-2792-7818 Ext. 1194


