DPDK usage discussions
From: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
To: Danushka Menikkumbura <danushka.menikkumbura@gmail.com>
Cc: Thomas Monjalon <thomas@monjalon.net>, users@dpdk.org
Subject: Re: [dpdk-users] All links down with Chelsio T6 NICs
Date: Sat, 10 Apr 2021 18:47:21 +0530
Message-ID: <20210410131720.GA4617@chelsio.com>
In-Reply-To: <CAF4V=3F602bn+xvx4a4yvGVXM_BZLm_67qDjudPrww8m5tkEHQ@mail.gmail.com>

Hello Danushka,

On Sat, Apr 10, 2021 at 07:47:15 -0400, Danushka Menikkumbura wrote:
>    Thank you for your reply, Thomas!
>    This is the port summary.
>    Number of available ports: 4
>    Port MAC Address       Name           Driver    Status Link
>    0    00:07:43:5D:4E:60 0000:05:00.4   net_cxgbe down   100 Gbps
>    1    00:07:43:5D:4E:68 0000:05:00.4_1 net_cxgbe down   100 Gbps
>    2    00:07:43:5D:51:00 0000:0b:00.4   net_cxgbe down   100 Gbps
>    3    00:07:43:5D:51:08 0000:0b:00.4_1 net_cxgbe down   100 Gbps
>    Additionally, this is info of one of the ports.
>    ********************* Infos for port 0  *********************
>    MAC address: 00:07:43:5D:4E:60
>    Device name: 0000:05:00.4
>    Driver name: net_cxgbe
>    Firmware-version: not available
>    Connect to socket: 0
>    memory allocation on the socket: 0
>    Link status: down
>    Link speed: 100 Gbps
>    Link duplex: full-duplex
>    MTU: 1500
>    Promiscuous mode: enabled
>    Allmulticast mode: disabled
>    Maximum number of MAC addresses: 1
>    Maximum number of MAC addresses of hash filtering: 0
>    VLAN offload:
>      strip off, filter off, extend off, qinq strip off
>    Hash key size in bytes: 40
>    Redirection table size: 32
>    Supported RSS offload flow types:
>      ipv4
>      ipv4-frag
>      ipv4-tcp
>      ipv4-udp
>      ipv4-other
>      ipv6
>      ipv6-frag
>      ipv6-tcp
>      ipv6-udp
>      ipv6-other
>      user defined 15
>      user defined 16
>      user defined 17
>    Minimum size of RX buffer: 68
>    Maximum configurable length of RX packet: 9018
>    Maximum configurable size of LRO aggregated packet: 0
>    Maximum number of VFs: 256
>    Current number of RX queues: 1
>    Max possible RX queues: 114
>    Max possible number of RXDs per queue: 4096
>    Min possible number of RXDs per queue: 128
>    RXDs number alignment: 1
>    Current number of TX queues: 1
>    Max possible TX queues: 114
>    Max possible number of TXDs per queue: 4096
>    Min possible number of TXDs per queue: 128
>    TXDs number alignment: 1
>    Max segment number per packet: 0
>    Max segment number per MTU/TSO: 0
>    Best,
>    Danushka

Thank you for reporting the issue. I have forwarded it to the Chelsio
support team, and we are looking into it.

Thank you for adding me to the thread, Thomas.
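
For anyone hitting the same symptom: the "Checking link statuses..." step
in the testpmd log below is essentially a poll of each port's link state.
As a rough illustration (not testpmd's actual code), a loop like the
following sketch queries the links through the public ethdev API; it
assumes DPDK 20.11 or newer, where rte_eth_link_get_nowait() returns an
error code, and the ~9-second budget is an arbitrary choice:

#include <string.h>
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_cycles.h>

/* Poll all probed ports until every link reports up or the time
 * budget runs out, then print the final state of each port. */
static void
check_all_ports_link_status(void)
{
	struct rte_eth_link link;
	uint16_t port_id;
	int all_up, count;

	for (count = 0; count < 90; count++) {  /* 90 x 100 ms = ~9 s */
		all_up = 1;
		RTE_ETH_FOREACH_DEV(port_id) {
			memset(&link, 0, sizeof(link));
			if (rte_eth_link_get_nowait(port_id, &link) < 0)
				continue;  /* query failed; skip this port */
			if (!link.link_status) {
				all_up = 0;
				break;
			}
		}
		if (all_up)
			break;
		rte_delay_ms(100);
	}

	RTE_ETH_FOREACH_DEV(port_id) {
		memset(&link, 0, sizeof(link));
		rte_eth_link_get_nowait(port_id, &link);
		printf("Port %u: link %s, %u Mbps\n", port_id,
		       link.link_status ? "up" : "down", link.link_speed);
	}
}

If the links stay down well past such a wait, the problem lies in the
driver/firmware/transceiver path rather than in a too-short link-check
timeout.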

>    On Sat, Apr 10, 2021 at 4:15 AM Thomas Monjalon <thomas@monjalon.net>
>    wrote:
> 
>      +Cc Chelsio maintainer
> 
>      09/04/2021 19:24, Danushka Menikkumbura:
>      > Hello,
>      >
>      > When I run testpmd on a system with two dual-port Chelsio T6 NICs,
>      > the link status is down for all four ports. I use igb_uio as the
>      > kernel driver. Below is my testpmd command line and the startup log.
>      >
>      > sudo ./build/app/dpdk-testpmd -l 0,1,2,5 -b 81:00.0 -- -i
>      >
>      > EAL: Detected 20 lcore(s)
>      > EAL: Detected 4 NUMA nodes
>      > EAL: Detected static linkage of DPDK
>      > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>      > EAL: Selected IOVA mode 'PA'
>      > EAL: No available 1048576 kB hugepages reported
>      > EAL: Probing VFIO support...
>      > EAL: VFIO support initialized
>      > EAL: Probe PCI driver: net_cxgbe (1425:6408) device: 0000:05:00.4 (socket 0)
>      > rte_cxgbe_pmd: Maskless filter support disabled. Continuing
>      > EAL: Probe PCI driver: net_cxgbe (1425:6408) device: 0000:0b:00.4 (socket 0)
>      > rte_cxgbe_pmd: Maskless filter support disabled. Continuing
>      > Interactive-mode selected
>      > testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
>      > testpmd: preferred mempool ops selected: ring_mp_mc
>      > testpmd: create a new mbuf pool <mb_pool_2>: n=171456, size=2176, socket=2
>      > testpmd: preferred mempool ops selected: ring_mp_mc
>      > Configuring Port 0 (socket 0)
>      > Port 0: 00:07:43:5D:4E:60
>      > Configuring Port 1 (socket 0)
>      > Port 1: 00:07:43:5D:4E:68
>      > Configuring Port 2 (socket 0)
>      > Port 2: 00:07:43:5D:51:00
>      > Configuring Port 3 (socket 0)
>      > Port 3: 00:07:43:5D:51:08
>      > Checking link statuses...
>      > Done
>      > testpmd>
>      >
>      > Your help is very much appreciated.
> 
>      Please run the command "show port summary all"
> 

Thanks,
Rahul

Thread overview: 4+ messages
2021-04-09 17:24 Danushka Menikkumbura
2021-04-10  8:15 ` Thomas Monjalon
2021-04-10 11:47   ` Danushka Menikkumbura
2021-04-10 13:17     ` Rahul Lakkireddy [this message]
