DPDK patches and discussions
* [dpdk-dev] [Bug 175] DPDK on Azure using `intel-go/nff-go` fails using `hv_netvsc` driver
@ 2018-12-25 13:40 bugzilla
  2018-12-27 21:58 ` Stephen Hemminger
  2020-05-12 13:54 ` bugzilla
  0 siblings, 2 replies; 3+ messages in thread
From: bugzilla @ 2018-12-25 13:40 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=175

            Bug ID: 175
           Summary: DPDK on Azure using `intel-go/nff-go` fails using
                    `hv_netvsc` driver
           Product: DPDK
           Version: 18.11
          Hardware: Other
                OS: Linux
            Status: CONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: guesslin1986@gmail.com
  Target Milestone: ---

Overview: running DPDK on Azure using the `intel-go/nff-go` library fails with
`Requested receive port exceeds number of ports which can be used by DPDK (bind
to DPDK). (3)`
  Steps to Reproduce:
    1) build binary from
https://gist.github.com/guesslin/76be1139e5e3b8d71e964e194c5d9322
    2) run `app -d uio_hv_generic -n <NICName>`
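For reference, a manual sketch of what the `-d uio_hv_generic` binding step is
expected to do on a Hyper-V/Azure guest, following the vmbus sysfs procedure
from the DPDK netvsc documentation (the NIC name `eth2` and variable names are
illustrative, not taken from the gist):
```
# Rebind the synthetic NIC from hv_netvsc to uio_hv_generic (sketch).
NIC=eth2
DEV_UUID=$(basename $(readlink /sys/class/net/$NIC/device))
sudo modprobe uio_hv_generic
# Hyper-V synthetic network device class GUID (from the DPDK netvsc guide).
echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" | sudo tee /sys/bus/vmbus/drivers/uio_hv_generic/new_id
echo "$DEV_UUID" | sudo tee /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo "$DEV_UUID" | sudo tee /sys/bus/vmbus/drivers/uio_hv_generic/bind
```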
  Actual Results:
$ sudo ./app -d uio_hv_generic -n eth2
2018/12/25 06:58:49 Binding PMD driver eth2 to NIC uio_hv_generic
2018/12/25 06:58:49 Initiating nff-go flow system
------------***-------- Initializing DPDK --------***------------
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-1048576kB
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
Initiated nff-go flow system
Setting receiver on port 0
panic: Erorr: Requested receive port exceeds number of ports which can be used
by DPDK (bind to DPDK). (3), Msg: start dpdk/nff-go failed
  Expected Results:
     no error outputs
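The EAL log above aborts at `Cannot get hugepage information.` before any port
is probed, which is likely why nff-go then sees zero usable DPDK ports. A
minimal sketch of reserving 2 MB hugepages on the VM before running the app
(values are illustrative; the nff-go/DPDK setup scripts may configure this
differently):
```
# Reserve 1024 x 2 MB hugepages and mount hugetlbfs (illustrative values).
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge
```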
# Result from testpmd
```
glasnostic@glasnostic-router:~$ sudo testpmd -w 0003:00:02.0
--vdev="net_vdev_netvsc0,iface=eth2" -- -i --port-topology=chained
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: cannot access device, is mlx4_ib loaded?
EAL: Requested device 0003:00:02.0 cannot be used
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: net_failsafe: MAC address is 00:0d:3a:18:3e:11
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
PMD: net_tap_net_vdev_netvsc0_id0: 0xfc3a80: TX configured queues number: 1
PMD: net_tap_net_vdev_netvsc0_id0: 0xfc3a80: RX configured queues number: 1
Port 0: 00:0D:3A:18:3E:11
Checking link statuses...
Done
testpmd> show port info all
********************* Infos for port 0  *********************
MAC address: 00:0D:3A:18:3E:11
Driver name: net_failsafe
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Minimum size of RX buffer: 0
Maximum configurable length of RX packet: 1522
Current number of RX queues: 1
Max possible RX queues: 16
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 16
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd>
```
## Directly run testpmd
```
glasnostic@glasnostic-router:~$ sudo testpmd
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: cannot access device, is mlx4_ib loaded?
EAL: Requested device 0003:00:02.0 cannot be used
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc_id0 as dtap0
PMD: net_failsafe: MAC address is 00:0d:3a:18:3e:11
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.
Configuring Port 0 (socket 0)
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: TX configured queues number: 1
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: RX configured queues number: 1
Port 0: 00:0D:3A:18:3E:11
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP
over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0:
  CRC stripping enabled
  RX queues=1 - RX desc=1024 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX queues=1 - TX desc=1024 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX RS bit threshold=0 - TXQ offloads=0x0
Press enter to exit
Telling cores to stop...
Waiting for lcores to finish...
  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ----------------------------------------------------------------------------
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done
Bye...
```
## Load mlx4_ib
```
glasnostic@glasnostic-router:~$ sudo modprobe mlx4_ib
glasnostic@glasnostic-router:~$ lsmod | grep mlx4_ib
mlx4_ib               176128  0
mlx4_core             294912  2 mlx4_en,mlx4_ib
devlink                53248  3 mlx4_en,mlx4_core,mlx4_ib
ib_core               229376  6 ib_iser,ib_cm,rdma_cm,ib_uverbs,iw_cm,mlx4_ib
glasnostic@glasnostic-router:~$ sudo testpmd
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: cannot access device, is mlx4_ib loaded?
EAL: Requested device 0003:00:02.0 cannot be used
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc_id0 as dtap0
PMD: net_failsafe: MAC address is 00:0d:3a:18:3e:11
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.
Configuring Port 0 (socket 0)
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: TX configured queues number: 1
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: RX configured queues number: 1
Port 0: 00:0D:3A:18:3E:11
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP
over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0:
  CRC stripping enabled
  RX queues=1 - RX desc=1024 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX queues=1 - TX desc=1024 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX RS bit threshold=0 - TXQ offloads=0x0
Press enter to exit
Telling cores to stop...
Waiting for lcores to finish...
  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ----------------------------------------------------------------------------
  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done
Bye...
```
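Even with mlx4_ib loaded, net_mlx4 above still reports it cannot access the
device. One additional check that can help localize that probe failure is to
confirm the kernel verbs interface and user-space verbs tools actually expose
the Mellanox VF (a sketch; `ibv_devices` ships with libibverbs and may not be
installed by default):
```
# Check that ib_uverbs is loaded and the Mellanox VF shows up as a verbs device.
sudo modprobe ib_uverbs
ls /sys/class/infiniband
ibv_devices
```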

-- 
You are receiving this mail because:
You are the assignee for the bug.


* Re: [dpdk-dev] [Bug 175] DPDK on Azure using `intel-go/nff-go` fails using `hv_netvsc` driver
  2018-12-25 13:40 [dpdk-dev] [Bug 175] DPDK on Azure using `intel-go/nff-go` fails using `hv_netvsc` driver bugzilla
@ 2018-12-27 21:58 ` Stephen Hemminger
  2020-05-12 13:54 ` bugzilla
  1 sibling, 0 replies; 3+ messages in thread
From: Stephen Hemminger @ 2018-12-27 21:58 UTC (permalink / raw)
  To: bugzilla; +Cc: dev

On Tue, 25 Dec 2018 13:40:32 +0000
bugzilla@dpdk.org wrote:

> https://bugs.dpdk.org/show_bug.cgi?id=175
> 
>             Bug ID: 175
>            Summary: DPDK on Azure using `intel-go/nff-go` fails using
>                     `hv_netvsc` driver
> [...]

I am on holiday this week, like most people in the US, and will be back next week.
What kernel version are you using?


* [dpdk-dev] [Bug 175] DPDK on Azure using `intel-go/nff-go` fails using `hv_netvsc` driver
  2018-12-25 13:40 [dpdk-dev] [Bug 175] DPDK on Azure using `intel-go/nff-go` fails using `hv_netvsc` driver bugzilla
  2018-12-27 21:58 ` Stephen Hemminger
@ 2020-05-12 13:54 ` bugzilla
  1 sibling, 0 replies; 3+ messages in thread
From: bugzilla @ 2020-05-12 13:54 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=175

Asaf Penso (asafp@mellanox.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|---                         |INVALID
                 CC|                            |asafp@mellanox.com
             Status|CONFIRMED                   |RESOLVED

--- Comment #6 from Asaf Penso (asafp@mellanox.com) ---
It looks like a misconfiguration that was resolved by the documentation referenced above.

-- 
You are receiving this mail because:
You are the assignee for the bug.


