From: Prasanna Panchamukhi <panchamukhi@arista.com>
To: dev@dpdk.org
Cc: "Jayakumar, Muthurajan" <muthurajan.jayakumar@intel.com>
Subject: DPDK/Hot swap of SFPs 10G/1G is currently not supported on Intel X553 controller
Date: Thu, 21 Mar 2024 17:22:27 -0700 [thread overview]
Message-ID: <CACqWiXCYMTKcE=9xv2TXLgWOQspSiRZuHNHAh4MQEm-rr_LvaQ@mail.gmail.com> (raw)
Hot swap of 10G/1G SFPs is currently not supported on the Intel X553 controller
with DPDK 21.11.3.
CPU: Intel(R) Atom(TM) CPU C3558R @ 2.40GHz
The X553 ports are bound to DPDK and inserted with 10G and 1G SFPs.
# lspci
07:00.0 Ethernet controller: Intel Corporation Ethernet Connection X553
10 GbE SFP+ (rev 11)
07:00.1 Ethernet controller: Intel Corporation Ethernet Connection X553
10 GbE SFP+ (rev 11)
# /usr/share/dpdk/tools/dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:07:00.0 'Ethernet Connection X553 10 GbE SFP+ 15c4' drv=vfio-pci
unused=
0000:07:00.1 'Ethernet Connection X553 10 GbE SFP+ 15c4' drv=vfio-pci
unused=
et7 Driver PMDPort HWaddr cc:1a:a3:ff:ea:e9 MTU 9236
Speed 1,000Mbps Link UP Duplex FULL Autoneg ON
RX Queues: 1 RX Queue size: 4096
TX Queues: 1 TX Queue Size: 1024
Inc/RX packets: 22,327 bytes: 23,925,237
dropped: 0
Out/TX packets: 22,352 bytes: 23,924,291
dropped: 0
et6 Driver PMDPort HWaddr cc:1a:a3:ff:ea:e9 MTU 9236
Speed 10,000Mbps Link UP Duplex FULL Autoneg ON
RX Queues: 1 RX Queue size: 4096
TX Queues: 1 TX Queue Size: 1024
Inc/RX packets: 2,798,567,431 bytes: 4,186,656,722,403
dropped: 0
Out/TX packets: 523,462,096 bytes: 783,087,633,300
dropped: 28,382
Swap the SFPs, along with their cables, between port 6 and port 7.
et7 Driver PMDPort HWaddr cc:1a:a3:ff:ea:e9 MTU 9236
Speed UNKNOWN Link DOWN Duplex HALF Autoneg ON
RX Queues: 1 RX Queue size: 4096
TX Queues: 1 TX Queue Size: 1024
Inc/RX packets: 22,327 bytes: 23,925,237
dropped: 0
Out/TX packets: 22,362 bytes: 23,924,891
dropped: 0
In the same experiment with the Linux ixgbe kernel driver, both link and
speed were correctly reported after the SFP hot swap. Logs below:
I saw some patches by Stephen, posted in the archives in 2021, that support
these features but did not make it into an upstream release:
https://mails.dpdk.org/archives/dev/2021-December/230965.html
Is there a plan to support this feature in DPDK, like the Linux kernel ixgbe
driver does?
Thanks,
Prasanna