From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <weiyi.fu@nsn.com>
Received: from demumfd001.nsn-inter.net (demumfd001.nsn-inter.net
 [93.183.12.32]) by dpdk.org (Postfix) with ESMTP id 7684C6A95
 for <dev@dpdk.org>; Thu, 11 Dec 2014 12:42:19 +0100 (CET)
Received: from demuprx016.emea.nsn-intra.net ([10.150.129.55])
 by demumfd001.nsn-inter.net (8.14.3/8.14.3) with ESMTP id sBBBgIsi008472
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
 Thu, 11 Dec 2014 11:42:18 GMT
Received: from SGSIHTC004.nsn-intra.net ([10.159.225.21])
 by demuprx016.emea.nsn-intra.net (8.12.11.20060308/8.12.11) with ESMTP id
 sBBBfxNE026584
 (version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=FAIL);
 Thu, 11 Dec 2014 12:42:17 +0100
Received: from SGSIMBX001.nsn-intra.net ([169.254.1.131]) by
 SGSIHTC004.nsn-intra.net ([10.159.225.21]) with mapi id 14.03.0195.001; Thu,
 11 Dec 2014 19:41:53 +0800
From: "Fu, Weiyi (NSN - CN/Hangzhou)" <weiyi.fu@nsn.com>
To: "Fu, Weiyi (NSN - CN/Hangzhou)" <weiyi.fu@nsn.com>, "ext Ouyang,
 Changchun" <changchun.ouyang@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Thread-Topic: [dpdk-dev]  In DPDK 1.7.1, the link status of the interface
 using virtio driver is always down.
Thread-Index: AQHQFRoVfrrffY6AGUCV6tB+0HXMP5yKEnVAgAAxdTA=
Date: Thu, 11 Dec 2014 11:41:52 +0000
Message-ID: <2680B515A539A446ACBEC0EBBDEC3DF80E938465@SGSIMBX001.nsn-intra.net>
References: <2680B515A539A446ACBEC0EBBDEC3DF80E938312@SGSIMBX001.nsn-intra.net>
 <F52918179C57134FAEC9EA62FA2F9625119456D4@shsmsx102.ccr.corp.intel.com> 
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.159.225.120]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-purgate-type: clean
X-purgate-Ad: Categorized by eleven eXpurgate (R) http://www.eleven.de
X-purgate: clean
X-purgate: This mail is considered clean (visit http://www.eleven.de for
 further information)
X-purgate-size: 11930
X-purgate-ID: 151667::1418298138-0000658F-D4C69CBC/0/0
Subject: Re: [dpdk-dev] In DPDK 1.7.1,
 the link status of the interface using virtio driver is always down.
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Thu, 11 Dec 2014 11:42:19 -0000

Hi Changchun,
I found you had made the following change to allow the virtio interface to start up even when the link is down. Is there any scenario that causes link down for a virtio interface?

diff --git a/lib/librte_pmd_virtio/virtio_ethdev.c b/lib/librte_pmd_virtio/virtio_ethdev.c
index 78018f9..4bff0fe 100644
--- a/lib/librte_pmd_virtio/virtio_ethdev.c
+++ b/lib/librte_pmd_virtio/virtio_ethdev.c
@@ -1057,14 +1057,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
 		vtpci_read_dev_config(hw,
 				offsetof(struct virtio_net_config, status),
 				&status, sizeof(status));
-		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
+		if ((status & VIRTIO_NET_S_LINK_UP) == 0)
 			PMD_INIT_LOG(ERR, "Port: %d Link is DOWN",
 				     dev->data->port_id);
-			return -EIO;
-		} else {
+		else
 			PMD_INIT_LOG(DEBUG, "Port: %d Link is UP",
 				     dev->data->port_id);
-		}
 	}
 	vtpci_reinit_complete(hw);
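
With this change virtio_dev_start() no longer fails with -EIO on a down link, so the application has to check the link state itself. A minimal sketch of such a check, assuming the DPDK 1.7.1 ethdev API (wait_for_link is a hypothetical helper name):

#include <stdio.h>
#include <rte_ethdev.h>

/* Poll the link state the PMD reports; for virtio this ends up in
 * virtio_dev_link_update(), which reads the status field from the
 * device config space. */
static void
wait_for_link(uint8_t port_id)
{
	struct rte_eth_link link;

	rte_eth_link_get_nowait(port_id, &link);   /* non-blocking query */
	if (link.link_status)
		printf("Port %u: link is up, %u Mbps\n",
		       (unsigned)port_id, (unsigned)link.link_speed);
	else
		printf("Port %u: link is down\n", (unsigned)port_id);
}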



Brs,
Fu Weiyi

-----Original Message-----
From: Fu, Weiyi (NSN - CN/Hangzhou)
Sent: Thursday, December 11, 2014 4:43 PM
To: 'ext Ouyang, Changchun'; dev@dpdk.org
Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.

Hi,
The result is still the same.

[root@EIPU-0(KVMCluster) /root]
# ./testpmd  -c 3 -n 4   -- --burst=64 -i --txq=1 --rxq=1 --txqflags=0xffff
EAL: Cannot read numa node link for lcore 0 - using physical package id instead
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Cannot read numa node link for lcore 1 - using physical package id instead
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Cannot read numa node link for lcore 2 - using physical package id instead
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Cannot read numa node link for lcore 3 - using physical package id instead
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Cannot read numa node link for lcore 4 - using physical package id instead
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Cannot read numa node link for lcore 5 - using physical package id instead
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Cannot read numa node link for lcore 6 - using physical package id instead
EAL: Detected lcore 6 as core 6 on socket 0
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 7 lcore(s)
EAL: Searching for IVSHMEM devices...
EAL: No IVSHMEM configuration found!
EAL: Setting up memory...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: Ask a virtual area of 0x13400000 bytes
EAL: Virtual area found at 0x7fb8e2600000 (size = 0x13400000)
EAL: Ask a virtual area of 0x1f000000 bytes
EAL: Virtual area found at 0x7fb8c3400000 (size = 0x1f000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c3000000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fb8c2a00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c2600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c2200000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fb8c1c00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb8c1800000 (size = 0x200000)
EAL: Requesting 410 pages of size 2MB from socket 0
EAL: TSC frequency is ~2792867 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: Master core 0 is ready (tid=f6998800)
EAL: Core 1 is ready (tid=c0ffe710)
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL:   0000:00:03.0 not managed by UIO driver, skipping
EAL: PCI device 0000:00:04.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL:   PCI memory mapped at 0x7fb8f6959000
PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc020 with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
PMD: virtio_negotiate_features(): features after negotiate = 30020
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:      0x212bdd000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31dd000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=0
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1   hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL:   probe driver: 1af4:1000 rte_virtio_pmd
EAL:   PCI memory mapped at 0x7fb8f6958000
PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc000 with size=0x20
PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
PMD: virtio_negotiate_features(): features after negotiate = 30020
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
PMD: virtio_dev_cq_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 2
PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:      0x212be0000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e0000
PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
PMD: eth_virtio_dev_init(): config->status=0
PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
PMD: eth_virtio_dev_init(): hw->max_rx_queues=1   hw->max_tx_queues=1
PMD: eth_virtio_dev_init(): port 1 vendorID=0x1af4 deviceID=0x1000
Interactive-mode selected
Configuring Port 0 (socket 0)
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_tx_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:      0x212be3000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e3000
PMD: virtio_dev_rx_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:      0x212be7000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e7000
PMD: virtio_dev_vring_start():  >>
PMD: virtio_dev_rxtx_start():  >>
PMD: virtio_dev_vring_start():  >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start():  >>

Port: 0 Link is DOWN
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
PMD: rte_eth_dev_config_restore: port 0: MAC address array not supported
PMD: rte_eth_promiscuous_disable: Function not supported
PMD: rte_eth_allmulticast_disable: Function not supported
Port 0: FF:FF:00:00:00:00
Configuring Port 1 (socket 0)
PMD: virtio_dev_configure(): configure
PMD: virtio_dev_tx_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 1
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:      0x212bea000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ea000
PMD: virtio_dev_rx_queue_setup():  >>
PMD: virtio_dev_queue_setup(): selecting queue: 0
PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
PMD: virtio_dev_queue_setup(): vq->vq_ring_mem:      0x212bee000
PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ee000
PMD: virtio_dev_vring_start():  >>
PMD: virtio_dev_rxtx_start():  >>
PMD: virtio_dev_vring_start():  >>
PMD: virtio_dev_vring_start(): Allocated 256 bufs
PMD: virtio_dev_vring_start():  >>

Port: 1 Link is DOWN
PMD: virtio_dev_start(): nb_queues=1
PMD: virtio_dev_start(): Notified backend at initialization
PMD: rte_eth_dev_config_restore: port 1: MAC address array not supported
PMD: rte_eth_promiscuous_disable: Function not supported
PMD: rte_eth_allmulticast_disable: Function not supported
Port 1: FF:FF:00:00:00:00
Checking link statuses...
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
PMD: virtio_dev_link_update(): Get link status from hw
PMD: virtio_dev_link_update(): Port 0 is down
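
For what it's worth, decoding the negotiated feature word 0x30020 from the log above (a small standalone sketch, assuming the standard virtio-net feature bit numbering from the virtio spec):

#include <stdio.h>

/* Assumed virtio-net feature bits, per the virtio spec. */
#define VIRTIO_NET_F_MAC     (1u << 5)   /* host supplies the MAC address */
#define VIRTIO_NET_F_STATUS  (1u << 16)  /* link status field in config space */
#define VIRTIO_NET_F_CTRL_VQ (1u << 17)  /* control virtqueue available */

int main(void)
{
	unsigned int features = 0x30020; /* "features after negotiate" above */

	/* VIRTIO_NET_F_STATUS is set, so the PMD trusts config->status,
	 * and config->status=0 in the log means the host reports link down. */
	printf("MAC=%d STATUS=%d CTRL_VQ=%d\n",
	       !!(features & VIRTIO_NET_F_MAC),
	       !!(features & VIRTIO_NET_F_STATUS),
	       !!(features & VIRTIO_NET_F_CTRL_VQ));
	return 0;
}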

Brs,
Fu Weiyi

-----Original Message-----
From: ext Ouyang, Changchun [mailto:changchun.ouyang@intel.com]
Sent: Thursday, December 11, 2014 4:11 PM
To: Fu, Weiyi (NSN - CN/Hangzhou); dev@dpdk.org
Cc: Ouyang, Changchun
Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.

Hi,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Fu, Weiyi (NSN -
> CN/Hangzhou)
> Sent: Thursday, December 11, 2014 3:57 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using
> virtio driver is always down.
>
> Hi,
> We are using l2fwd based on DPDK 1.7.1 and found that the link
> status of the interface using the virtio driver is always down.
> Is there any precondition to let the link up?
>

I suggest you use testpmd instead of l2fwd; virtio needs to transmit some packets before it can forward any.
In testpmd, you can use the following command:

start tx_first
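
For example, at the testpmd interactive prompt (a minimal sketch of a typical session; "show port info" is just one way to re-check the reported link state):

testpmd> start tx_first
testpmd> show port info 0

"start tx_first" makes each port transmit one burst before entering forwarding mode, which is what gets traffic flowing on virtio ports.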

Thanks
Changchun