From: "Fu, Weiyi (NSN - CN/Hangzhou)" <weiyi.fu@nsn.com>
To: "ext Ouyang, Changchun" <changchun.ouyang@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.
Date: Thu, 11 Dec 2014 08:42:47 +0000
Message-ID: <2680B515A539A446ACBEC0EBBDEC3DF80E938371@SGSIMBX001.nsn-intra.net>
In-Reply-To: <F52918179C57134FAEC9EA62FA2F9625119456D4@shsmsx102.ccr.corp.intel.com>
Hi,
The result is still the same.
[root@EIPU-0(KVMCluster) /root]
# ./testpmd -c 3 -n 4 -- --burst=64 -i --txq=1 --rxq=1 --txqflags=0xffff
EAL: Cannot read numa node link for lcore 0 - using physical package id instead
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Cannot read numa node link for lcore 1 - using physical package id instead
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Cannot read numa node link for lcore 2 - using physical package id instead
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Cannot read numa node link for lcore 3 - using physical package id instead
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Cannot read numa node link for lcore 4 - using physical package id instead
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Cannot read numa node link for lcore 5 - using physical package id instead
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Cannot read numa node link for lcore 6 - using physical package id instead
EAL: Detected lcore 6 as core 6 on socket 0
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 7 lcore(s)
EAL: Searching for IVSHMEM devices...
EAL: No IVSHMEM configuration found!
EAL: Setting up memory...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: Ask a virtual area of 0x13400000 bytes
EAL: Virtual area found at 0x7fb8e2600000 (size = 0x13400000)
EAL: Ask a virtual area of 0x1f000000 bytes
EAL: Virtual area found at 0x7fb8c3400000 (size = 0x1f000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c3000000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fb8c2a00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c2600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c2200000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fb8c1c00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c1800000 (size = 0x200000)
EAL: Requesting 410 pages of size 2MB from socket 0
EAL: TSC frequency is ~2792867 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: Master core 0 is ready (tid=f6998800)
EAL: Core 1 is ready (tid=c0ffe710)
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: 0000:00:03.0 not managed by UIO driver, skipping
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI memory mapped at 0x7fb8f6959000
PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc020 with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
PMD: virtio_negotiate_features(): features after negotiate = 30020
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bdd000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31dd000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=0
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL: probe driver: 1af4:1000 rte_virtio_pmd
EAL: PCI memory mapped at 0x7fb8f6958000
PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc000 with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
PMD: virtio_negotiate_features(): features after negotiate = 30020
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be0000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e0000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=0
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 1 vendorID=0x1af4 deviceID=0x1000
Interactive-mode selected
Configuring Port 0 (socket 0)
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_tx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be3000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e3000
PMD: virtio_dev_rx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be7000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e7000
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_rxtx_start(): >>
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start(): >>
Port: 0 Link is DOWN
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
PMD: rte_eth_dev_config_restore: port 0: MAC address array not supported
PMD: rte_eth_promiscuous_disable: Function not supported
PMD: rte_eth_allmulticast_disable: Function not supported
Port 0: FF:FF:00:00:00:00
Configuring Port 1 (socket 0)
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_tx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bea000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ea000
PMD: virtio_dev_rx_queue_setup(): >>
PMD: virtio_dev_queue_setup(): selecting queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bee000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ee000
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_rxtx_start(): >>
PMD: virtio_dev_vring_start(): >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start(): >>
Port: 1 Link is DOWN
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
PMD: rte_eth_dev_config_restore: port 1: MAC address array not supported
PMD: rte_eth_promiscuous_disable: Function not supported
PMD: rte_eth_allmulticast_disable: Function not supported
Port 1: FF:FF:00:00:00:00
Checking link statuses...
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
Brs,
Fu Weiyi
-----Original Message-----
From: ext Ouyang, Changchun [mailto:changchun.ouyang@intel.com]
Sent: Thursday, December 11, 2014 4:11 PM
To: Fu, Weiyi (NSN - CN/Hangzhou); dev@dpdk.org
Cc: Ouyang, Changchun
Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.
Hi,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Fu, Weiyi (NSN -
> CN/Hangzhou)
> Sent: Thursday, December 11, 2014 3:57 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio
> driver is always down.
>
> Hi,
> We are using l2fwd based on DPDK 1.7.1 and found that the link
> status of the interface using the virtio driver is always down.
> Is there any precondition for bringing the link up?
>
I suggest you use testpmd instead of l2fwd; virtio needs to transmit some packets before it can forward any.
In testpmd, you can use the following command:
start tx_first
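A minimal session might look like this (a sketch only, reusing the EAL options from your log; "show port info" is just to re-check the link state afterwards):

  ./testpmd -c 3 -n 4 -- -i --txq=1 --rxq=1
  testpmd> start tx_first
  testpmd> show port info 0

"start tx_first" makes testpmd send an initial burst on each port before it begins forwarding, which gives virtio the initial transmit it needs.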
Thanks
Changchun