From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vijayakumar Muthuvel Manickam
To: "Fu, Weiyi (NSN - CN/Hangzhou)"
Cc: "dev@dpdk.org"
Date: Thu, 11 Dec 2014 05:41:14 -0800
In-Reply-To: <2680B515A539A446ACBEC0EBBDEC3DF80E938465@SGSIMBX001.nsn-intra.net>
References: <2680B515A539A446ACBEC0EBBDEC3DF80E938312@SGSIMBX001.nsn-intra.net>
 <2680B515A539A446ACBEC0EBBDEC3DF80E938465@SGSIMBX001.nsn-intra.net>
Subject: Re: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.

Hi,

I have seen this issue on older kernels such as 2.6.32-220.el6.x86_64, while it does not occur on a recent kernel such as 3.10.x.

The issue happens because the /sys/bus/pci/devices//msi_irqs directory is not enumerated on the older kernels, which leaves hw->use_msix=0. That changes the offset returned by VIRTIO_PCI_CONFIG(), and the link status issue shows up as a result. Setting hw->use_msix=1 got me past this issue on 2.6.32-220.el6.x86_64.

Thanks,
Vijay
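
For context on the diagnosis above: in the DPDK 1.7-era virtio PMD, the device-specific configuration space of a legacy virtio device sits 4 bytes further into the I/O BAR when MSI-X is enabled, and virtio_pci.h encodes this in the VIRTIO_PCI_CONFIG() macro (to the best of my recollection, hw->use_msix ? 24 : 20). The snippet below is only a minimal, self-contained illustration of that effect, not code from the PMD; the virtio_hw structure is reduced to the single field that matters here.

    #include <stdio.h>

    /* Illustrative stand-ins -- see lib/librte_pmd_virtio/virtio_pci.h in
     * DPDK 1.7.x for the real definitions. */
    struct virtio_hw { int use_msix; };

    /* Legacy virtio-PCI layout: the device-specific config (which holds the
     * virtio-net status word) starts at I/O offset 20, or 24 when the common
     * header also contains the two MSI-X vector registers. */
    #define VIRTIO_PCI_CONFIG(hw) (((hw)->use_msix) ? 24 : 20)

    int main(void)
    {
        struct virtio_hw misdetected = { .use_msix = 0 }; /* sysfs msi_irqs missing */
        struct virtio_hw actual      = { .use_msix = 1 }; /* host really uses MSI-X */

        /* With use_msix misdetected as 0, the device-config read of the
         * virtio_net_config status field lands 4 bytes too low in I/O space,
         * so VIRTIO_NET_S_LINK_UP never appears set and the link stays DOWN. */
        printf("config base used:   %d\n", VIRTIO_PCI_CONFIG(&misdetected));
        printf("config base needed: %d\n", VIRTIO_PCI_CONFIG(&actual));
        return 0;
    }

Since the PMD derives use_msix from the presence of that msi_irqs directory in sysfs, the missing enumeration on the old kernel feeds directly into this base offset, and every device-config read, including the link status word, goes to the wrong place.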

On Thu, Dec 11, 2014 at 3:41 AM, Fu, Weiyi (NSN - CN/Hangzhou) <weiyi.fu@nsn.com> wrote:

> Hi Changchun,
> I found that you had made the following change to allow the virtio interface
> to start up when the link is down. Is there any scenario that causes link
> down for a virtio interface?
>
> diff --git a/lib/librte_pmd_virtio/virtio_ethdev.c b/lib/librte_pmd_virtio/virtio_ethdev.c
> index 78018f9..4bff0fe 100644
> --- a/lib/librte_pmd_virtio/virtio_ethdev.c
> +++ b/lib/librte_pmd_virtio/virtio_ethdev.c
> @@ -1057,14 +1057,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
>  		vtpci_read_dev_config(hw,
>  			offsetof(struct virtio_net_config, status),
>  			&status, sizeof(status));
> -		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
> +		if ((status & VIRTIO_NET_S_LINK_UP) == 0)
>  			PMD_INIT_LOG(ERR, "Port: %d Link is DOWN",
>  				     dev->data->port_id);
> -			return -EIO;
> -		} else {
> +		else
>  			PMD_INIT_LOG(DEBUG, "Port: %d Link is UP",
>  				     dev->data->port_id);
> -		}
>  	}
>  	vtpci_reinit_complete(hw);
>
> Brs,
> Fu Weiyi
>
> -----Original Message-----
> From: Fu, Weiyi (NSN - CN/Hangzhou)
> Sent: Thursday, December 11, 2014 4:43 PM
> To: 'ext Ouyang, Changchun'; dev@dpdk.org
> Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.
>
> Hi,
> The result is still the same.
>
> [root@EIPU-0(KVMCluster) /root]
> # ./testpmd -c 3 -n 4 -- --burst=64 -i --txq=1 --rxq=1 --txqflags=0xffff
> EAL: Cannot read numa node link for lcore 0 - using physical package id instead
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Cannot read numa node link for lcore 1 - using physical package id instead
> EAL: Detected lcore 1 as core 1 on socket 0
> EAL: Cannot read numa node link for lcore 2 - using physical package id instead
> EAL: Detected lcore 2 as core 2 on socket 0
> EAL: Cannot read numa node link for lcore 3 - using physical package id instead
> EAL: Detected lcore 3 as core 3 on socket 0
> EAL: Cannot read numa node link for lcore 4 - using physical package id instead
> EAL: Detected lcore 4 as core 4 on socket 0
> EAL: Cannot read numa node link for lcore 5 - using physical package id instead
> EAL: Detected lcore 5 as core 5 on socket 0
> EAL: Cannot read numa node link for lcore 6 - using physical package id instead
> EAL: Detected lcore 6 as core 6 on socket 0
> EAL: Support maximum 64 logical core(s) by configuration.
> EAL: Detected 7 lcore(s)
> EAL: Searching for IVSHMEM devices...
> EAL: No IVSHMEM configuration found!
> EAL: Setting up memory...
> EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
> EAL: Ask a virtual area of 0x13400000 bytes
> EAL: Virtual area found at 0x7fb8e2600000 (size = 0x13400000)
> EAL: Ask a virtual area of 0x1f000000 bytes
> EAL: Virtual area found at 0x7fb8c3400000 (size = 0x1f000000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8c3000000 (size = 0x200000)
> EAL: Ask a virtual area of 0x400000 bytes
> EAL: Virtual area found at 0x7fb8c2a00000 (size = 0x400000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8c2600000 (size = 0x200000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8c2200000 (size = 0x200000)
> EAL: Ask a virtual area of 0x400000 bytes
> EAL: Virtual area found at 0x7fb8c1c00000 (size = 0x400000)
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb8c1800000 (size = 0x200000)
> EAL: Requesting 410 pages of size 2MB from socket 0
> EAL: TSC frequency is ~2792867 KHz
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
> EAL: Master core 0 is ready (tid=f6998800)
> EAL: Core 1 is ready (tid=c0ffe710)
> EAL: PCI device 0000:00:03.0 on NUMA socket -1
> EAL: probe driver: 1af4:1000 rte_virtio_pmd
> EAL: 0000:00:03.0 not managed by UIO driver, skipping
> EAL: PCI device 0000:00:04.0 on NUMA socket -1
> EAL: probe driver: 1af4:1000 rte_virtio_pmd
> EAL: PCI memory mapped at 0x7fb8f6959000
> PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc020 with size=0x20
> PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
> PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
> PMD: virtio_negotiate_features(): features after negotiate = 30020
> PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
> PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
> PMD: virtio_dev_cq_queue_setup(): >>
> PMD: virtio_dev_queue_setup(): selecting queue: 2
> PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
> PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
> PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bdd000
> PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31dd000
> PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
> PMD: eth_virtio_dev_init(): config->status=0
> PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
> PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
> PMD: eth_virtio_dev_init(): port 0 vendorID=0x1af4 deviceID=0x1000
> EAL: PCI device 0000:00:05.0 on NUMA socket -1
> EAL: probe driver: 1af4:1000 rte_virtio_pmd
> EAL: PCI memory mapped at 0x7fb8f6958000
> PMD: eth_virtio_dev_init(): PCI Port IO found start=0xc000 with size=0x20
> PMD: virtio_negotiate_features(): guest_features before negotiate = 438020
> PMD: virtio_negotiate_features(): host_features before negotiate = 489f7c26
> PMD: virtio_negotiate_features(): features after negotiate = 30020
> PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
> PMD: eth_virtio_dev_init(): VIRTIO_NET_F_MQ is not supported
> PMD: virtio_dev_cq_queue_setup(): >>
> PMD: virtio_dev_queue_setup(): selecting queue: 2
> PMD: virtio_dev_queue_setup(): vq_size: 64 nb_desc:0
> PMD: virtio_dev_queue_setup(): vring_size: 4612, rounded_vring_size: 8192
> PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be0000
> PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e0000
> PMD: eth_virtio_dev_init(): config->max_virtqueue_pairs=1
> PMD: eth_virtio_dev_init(): config->status=0
> PMD: eth_virtio_dev_init(): PORT MAC: FF:FF:00:00:00:00
> PMD: eth_virtio_dev_init(): hw->max_rx_queues=1 hw->max_tx_queues=1
> PMD: eth_virtio_dev_init(): port 1 vendorID=0x1af4 deviceID=0x1000
> Interactive-mode selected
> Configuring Port 0 (socket 0)
> PMD: virtio_dev_configure(): configure
> PMD: virtio_dev_tx_queue_setup(): >>
> PMD: virtio_dev_queue_setup(): selecting queue: 1
> PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
> PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
> PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
> PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be3000
> PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e3000
> PMD: virtio_dev_rx_queue_setup(): >>
> PMD: virtio_dev_queue_setup(): selecting queue: 0
> PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
> PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
> PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
> PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212be7000
> PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31e7000
> PMD: virtio_dev_vring_start(): >>
> PMD: virtio_dev_rxtx_start(): >>
> PMD: virtio_dev_vring_start(): >>
> PMD: virtio_dev_vring_start(): Allocated 256 bufs
> PMD: virtio_dev_vring_start(): >>
>
> Port: 0 Link is DOWN
> PMD: virtio_dev_start(): nb_queues=1
> PMD: virtio_dev_start(): Notified backend at initialization
> PMD: rte_eth_dev_config_restore: port 0: MAC address array not supported
> PMD: rte_eth_promiscuous_disable: Function not supported
> PMD: rte_eth_allmulticast_disable: Function not supported
> Port 0: FF:FF:00:00:00:00
> Configuring Port 1 (socket 0)
> PMD: virtio_dev_configure(): configure
> PMD: virtio_dev_tx_queue_setup(): >>
> PMD: virtio_dev_queue_setup(): selecting queue: 1
> PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:512
> PMD: virtio_dev_queue_setup(): Warning: nb_desc(512) is not equal to vq size (256), fall to vq size
> PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
> PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bea000
> PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ea000
> PMD: virtio_dev_rx_queue_setup(): >>
> PMD: virtio_dev_queue_setup(): selecting queue: 0
> PMD: virtio_dev_queue_setup(): vq_size: 256 nb_desc:128
> PMD: virtio_dev_queue_setup(): Warning: nb_desc(128) is not equal to vq size (256), fall to vq size
> PMD: virtio_dev_queue_setup(): vring_size: 10244, rounded_vring_size: 12288
> PMD: virtio_dev_queue_setup(): vq->vq_ring_mem: 0x212bee000
> PMD: virtio_dev_queue_setup(): vq->vq_ring_virt_mem: 0x7fb8c31ee000
> PMD: virtio_dev_vring_start(): >>
> PMD: virtio_dev_rxtx_start(): >>
> PMD: virtio_dev_vring_start(): >>
> PMD: virtio_dev_vring_start(): Allocated 256 bufs
> PMD: virtio_dev_vring_start(): >>
>
> Port: 1 Link is DOWN
> PMD: virtio_dev_start(): nb_queues=1
> PMD: virtio_dev_start(): Notified backend at initialization
> PMD: rte_eth_dev_config_restore: port 1: MAC address array not supported
> PMD: rte_eth_promiscuous_disable: Function not supported
> PMD: rte_eth_allmulticast_disable: Function not supported
> Port 1: FF:FF:00:00:00:00
> Checking link statuses...
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
> PMD: virtio_dev_link_update(): Get link status from hw
> PMD: virtio_dev_link_update(): Port 0 is down
>
> Brs,
> Fu Weiyi
>
> -----Original Message-----
> From: ext Ouyang, Changchun [mailto:changchun.ouyang@intel.com]
> Sent: Thursday, December 11, 2014 4:11 PM
> To: Fu, Weiyi (NSN - CN/Hangzhou); dev@dpdk.org
> Cc: Ouyang, Changchun
> Subject: RE: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.
>
> Hi,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Fu, Weiyi (NSN - CN/Hangzhou)
> > Sent: Thursday, December 11, 2014 3:57 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] In DPDK 1.7.1, the link status of the interface using virtio driver is always down.
> >
> > Hi,
> > We are using l2fwd based on DPDK 1.7.1 and found that the link status of
> > the interface using the virtio driver is always down.
> > Is there any precondition to bring the link up?
> >
>
> I suggest you use testpmd instead of l2fwd; virtio needs to tx some packets
> before it can forward any packet.
> In testpmd, you can use the following cmd:
> start tx_first
>
> Thanks
> Changchun
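
For an l2fwd-based application, the suggestion above amounts to transmitting a packet right after rte_eth_dev_start() (roughly what testpmd's "start tx_first" command does with an initial burst) and then polling the link instead of treating DOWN as fatal. The sketch below is illustrative only, written against the DPDK 1.7-era API (port ids are uint8_t there); it is not code from l2fwd or testpmd, it assumes the EAL, the port and an mbuf pool are already initialized, and kick_virtio_and_wait_link is a hypothetical helper name.

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_cycles.h>

    /* Sketch only: nudge the virtio backend with one dummy packet, then poll
     * the link state for a few seconds instead of failing hard on DOWN. */
    static void
    kick_virtio_and_wait_link(uint8_t port_id, struct rte_mempool *mbuf_pool)
    {
        struct rte_eth_link link = { 0 };
        struct rte_mbuf *m;
        int i;

        /* Send one packet so the backend gets notified, similar in spirit
         * to testpmd's "start tx_first"; the payload content is irrelevant. */
        m = rte_pktmbuf_alloc(mbuf_pool);
        if (m != NULL) {
            rte_pktmbuf_append(m, 64);
            if (rte_eth_tx_burst(port_id, 0, &m, 1) == 0)
                rte_pktmbuf_free(m);    /* tx queue did not take it */
        }

        /* Re-check the link a number of times, as testpmd's link-status
         * check also does, rather than giving up on the first DOWN. */
        for (i = 0; i < 90; i++) {
            rte_eth_link_get_nowait(port_id, &link);
            if (link.link_status)
                break;
            rte_delay_ms(100);
        }
        printf("Port %u link is %s\n", (unsigned)port_id,
               link.link_status ? "up" : "down");
    }

With the patch quoted earlier in the thread applied, virtio_dev_start() no longer returns -EIO when the link is DOWN, so this kind of polling (or simply retrying later) is what decides when the application can start forwarding.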