DPDK usage discussions
* [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11
@ 2018-05-29 10:56 Giridharan, Ganesan
  2018-05-29 15:22 ` Stephen Hemminger
  0 siblings, 1 reply; 4+ messages in thread
From: Giridharan, Ganesan @ 2018-05-29 10:56 UTC (permalink / raw)
  To: users

Good morning.

Environment:

I am using "testpmd" to verify packets receive/transmit "io" mode on Linux guest under Hyper-V on a WS2016 DC server.
I am using "latest" pulled via git last week early. Applied "v9" patches from "Stephen Hemminger".
Linux guest I am using is Ubuntu 18.04 upgraded to 4.16.11 Kernel.

Problem:

Testpmd does not seem to receive any packets.

Log:

"alias tpmd='./testpmd -l 0-1 --log-level=8 --log-level='\''pmd.net.netvsc.*:debug'\'' --log-level='\''bus.vmbus:debug'\'' --log-level='\''lib.ethdev.*:debug'\'' -- -i'"

root@ubuntu-1804-dev:~/v9/dpdk# ./huge-setup.sh eth1 eth2
Removing currently reserved hugepages
Unmounting /mnt/huge and removing directory
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs
Rebind eth1 to hv_uio_generic
Rebind eth2 to hv_uio_generic
root@ubuntu-1804-dev:~/v9/dpdk# cd x86_64-native-linuxapp-gcc/app/
root@ubuntu-1804-dev:~/v9/dpdk/x86_64-native-linuxapp-gcc/app# tpmd
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
vmbus_scan_one(): Adding vmbus device a759f3db-77bb-4e7c-ac6f-504c268c7f2b
vmbus_scan_one(): Adding vmbus device fd149e91-82e0-4a7d-afa6-2a4166cbd7c0
vmbus_scan_one(): Adding vmbus device 58f75a6d-d949-4320-99e1-a2a2576d581c
vmbus_scan_one(): Adding vmbus device f5bee29c-1741-4aad-a4c2-8fdedb46dcc2
vmbus_scan_one(): Adding vmbus device 3e7e7e4c-e8cb-4e05-a0aa-a267ced7b73e
vmbus_scan_one(): Adding vmbus device 1eccfd72-4b41-45ef-b73a-4a6e44c12924
vmbus_scan_one(): Adding vmbus device d34b2567-b9b6-42b9-8778-0a4ec0b955bf
vmbus_scan_one(): Adding vmbus device 4487b255-b88c-403f-bb51-d1f69cf17f87
vmbus_scan_one(): Adding vmbus device 83a43d10-0c74-45ec-a293-31e29ebb1787
vmbus_scan_one(): Adding vmbus device 242ff919-07db-4180-9c2e-b86cb68c8c55
vmbus_scan_one(): Adding vmbus device 2dd1ce17-079e-403c-b352-a1921ee207ee
vmbus_scan_one(): Adding vmbus device 00000000-0000-8899-0000-000000000000
vmbus_scan_one(): Adding vmbus device 99221fa0-24ad-11e2-be98-001aa01bbf6e
vmbus_scan_one(): Adding vmbus device 1ac2a997-5040-4851-89e4-3ccd48e51cf9
vmbus_scan_one(): Adding vmbus device 5620e0c7-8062-4dce-aeb7-520c7ef76171
vmbus_scan_one(): Adding vmbus device b6650ff7-33bc-4840-8048-e0676786f393
vmbus_scan_one(): Adding vmbus device 2450ee40-33bf-4fbd-892e-9fb06e9214cf
vmbus_scan_one(): Adding vmbus device 00000000-0001-8899-0000-000000000000
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
vmbus_probe_one_driver(): VMBUS device 3e7e7e4c-e8cb-4e05-a0aa-a267ced7b73e on NUMA socket -1
rte_vmbus_map_device(): Not managed by UIO driver, skipped
vmbus_probe_one_driver(): VMBUS device 83a43d10-0c74-45ec-a293-31e29ebb1787 on NUMA socket -1
vmbus_probe_one_driver():   Invalid NUMA socket, default to 0
vmbus_probe_one_driver():   probe driver: net_netvsc
eth_hn_probe():  >>
eth_dev_vmbus_allocate(): eth_dev_vmbus_allocate: Allocating eth dev for 83a43d10-0c74-45ec-a293-31e29ebb1787
eth_dev_vmbus_allocate(): Num of ETH devices after allocation = 0
eth_hn_dev_init():  >>
hn_nvs_init(): NVS version 0x60001, NDIS version 6.30
hn_nvs_conn_rxbuf(): connect rxbuff va=0x7fcbfe0ff000 gpad=0xe1e33
hn_nvs_conn_rxbuf(): receive buffer size 1728 count 9102
hn_nvs_conn_chim(): connect send buf va=0x7fcbfd1ff000 gpad=0xe1e34
hn_nvs_conn_chim(): send buffer 15728640 section size:6144, count:2560
hn_rndis_init(): RNDIS ver 1.0, aggpkt size 4026531839, aggpkt cnt 8, aggpkt align 8
hn_rndis_link_status(): link status 0x4001000b
hn_rndis_set_rxfilter(): set RX filter 0 done
hn_tx_pool_init(): create a TX send pool hn_txd_0 n=2560 size=32 socket=0
hn_rndis_get_eaddr(): MAC address 00:15:5d:0a:6e:0a
eth_hn_dev_init(): VMBus max channels 1
hn_rndis_query_rsscaps(): RX rings 64 indirect 128 caps 0x301
vmbus_probe_one_driver(): VMBUS device a759f3db-77bb-4e7c-ac6f-504c268c7f2b on NUMA socket -1
vmbus_probe_one_driver():   Invalid NUMA socket, default to 0
vmbus_probe_one_driver():   probe driver: net_netvsc
eth_hn_probe():  >>
eth_dev_vmbus_allocate(): eth_dev_vmbus_allocate: Allocating eth dev for a759f3db-77bb-4e7c-ac6f-504c268c7f2b
eth_dev_vmbus_allocate(): Num of ETH devices after allocation = 1
eth_hn_dev_init():  >>
hn_nvs_init(): NVS version 0x60001, NDIS version 6.30
hn_nvs_conn_rxbuf(): connect rxbuff va=0x7fcbf5cfe000 gpad=0xe1e30
hn_nvs_conn_rxbuf(): receive buffer size 1728 count 9102
hn_nvs_conn_chim(): connect send buf va=0x7fcbf4dfe000 gpad=0xe1e31
hn_nvs_conn_chim(): send buffer 15728640 section size:6144, count:2560
hn_rndis_init(): RNDIS ver 1.0, aggpkt size 4026531839, aggpkt cnt 8, aggpkt align 8
hn_rndis_link_status(): link status 0x4001000b
hn_rndis_set_rxfilter(): set RX filter 0 done
hn_tx_pool_init(): create a TX send pool hn_txd_1 n=2560 size=32 socket=0
hn_rndis_get_eaddr(): MAC address 00:15:5d:0a:6e:09
eth_hn_dev_init(): VMBus max channels 1
hn_rndis_query_rsscaps(): RX rings 64 indirect 128 caps 0x301
Interactive-mode selected
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_configure():  >>
hn_rndis_link_status(): link status 0x40020006
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_tx_queue_setup():  >>
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_rx_queue_setup():  >>
hn_dev_start():  >>
hn_rndis_set_rxfilter(): set RX filter 0xd done
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_dev_link_update(): Port 0 is up
Port 0: 00:15:5D:0A:6E:0A
Configuring Port 1 (socket 0)
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_configure():  >>
hn_rndis_link_status(): link status 0x40020006
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_tx_queue_setup():  >>
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_rx_queue_setup():  >>
hn_dev_start():  >>
hn_rndis_set_rxfilter(): set RX filter 0xd done
hn_dev_info_get():  >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_dev_link_update(): Port 1 is up
Port 1: 00:15:5D:0A:6E:09
Checking link statuses...
Done
hn_rndis_set_rxfilter(): set RX filter 0x20 done
hn_rndis_set_rxfilter(): set RX filter 0x20 done
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x1000 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x1000 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
hn_dev_stats_get():  >>
hn_dev_stats_get():  >>
testpmd> show port stats 0
hn_dev_stats_get():  >>

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0
  Tx-pps:            0
  ############################################################################
testpmd> show port stats 1
hn_dev_stats_get():  >>

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0
  Tx-pps:            0
  ############################################################################
testpmd> show port stats 1
hn_dev_stats_get():  >>

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0
  Tx-pps:            0
  ############################################################################
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...
hn_dev_stats_get():  >>

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------
hn_dev_stats_get():  >>

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Shutting down port 0...
Stopping ports...
hn_dev_stop():  >>
hn_rndis_set_rxfilter(): set RX filter 0 done
Done
Closing ports...
hn_dev_close(): close
Done

Shutting down port 1...
Stopping ports...
hn_dev_stop():  >>
hn_rndis_set_rxfilter(): set RX filter 0 done
Done
Closing ports...
hn_dev_close(): close
Done

Bye...

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Port 0 is already closed
Done

Shutting down port 1...
Stopping ports...
Done
Closing ports...
Port 1 is already closed
Done

Bye...
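
huge-setup.sh itself was not posted. A minimal sketch of what such a script
presumably does, following the usual DPDK netvsc binding procedure (the
hugepage count and interface name here are assumptions; the upstream UIO
module is named uio_hv_generic):

  #!/bin/sh
  # Reserve 2 MB hugepages and mount hugetlbfs for DPDK
  echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  mkdir -p /mnt/huge
  mount -t hugetlbfs nodev /mnt/huge

  # Rebind eth1 from hv_netvsc to uio_hv_generic;
  # f8615163-... is the VMBus network device class id
  NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
  modprobe uio_hv_generic
  echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
  DEV_UUID=$(basename $(readlink /sys/class/net/eth1/device))
  echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
  echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind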


* Re: [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11
  2018-05-29 10:56 [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11 Giridharan, Ganesan
@ 2018-05-29 15:22 ` Stephen Hemminger
  2018-05-29 15:29   ` Giridharan, Ganesan
  2018-05-29 16:32   ` Giridharan, Ganesan
  0 siblings, 2 replies; 4+ messages in thread
From: Stephen Hemminger @ 2018-05-29 15:22 UTC (permalink / raw)
  To: Giridharan, Ganesan; +Cc: users

One possibility is that the Hyper-V host filters based on MAC address. What
MAC address is the sending application using?
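
One way to check this on a Linux sender is to capture with link-level headers
and confirm the destination MAC actually put on the wire; a sketch, assuming
the sender emits ICMP on eth0:

  # -e prints the Ethernet src/dst addresses, -n skips name resolution
  tcpdump -e -n -i eth0 icmp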

On Tue, May 29, 2018, 3:56 AM Giridharan, Ganesan <ggiridharan@rbbn.com>
wrote:

> [original message with full testpmd log quoted; snipped]


* Re: [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11
  2018-05-29 15:22 ` Stephen Hemminger
@ 2018-05-29 15:29   ` Giridharan, Ganesan
  2018-05-29 16:32   ` Giridharan, Ganesan
  1 sibling, 0 replies; 4+ messages in thread
From: Giridharan, Ganesan @ 2018-05-29 15:29 UTC (permalink / raw)
  To: users


The testpmd log reports the MAC addresses for the ports as:

Port 0: 00:15:5D:0A:6E:0A
Port 1: 00:15:5D:0A:6E:09
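
For reference, the same MACs can be confirmed at the testpmd prompt, which
also prints the driver name and link state for each port:

  testpmd> show port info 0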

From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Tuesday, May 29, 2018 10:23 AM
To: Giridharan, Ganesan <ggiridharan@rbbn.com>
Cc: users <users@dpdk.org>
Subject: Re: [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11


One possibility is that the Hyper-V host filters based on MAC address. What MAC address is the sending application using?

On Tue, May 29, 2018, 3:56 AM Giridharan, Ganesan <ggiridharan@rbbn.com> wrote:
[original message with full testpmd log quoted; snipped]


* Re: [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11
  2018-05-29 15:22 ` Stephen Hemminger
  2018-05-29 15:29   ` Giridharan, Ganesan
@ 2018-05-29 16:32   ` Giridharan, Ganesan
  1 sibling, 0 replies; 4+ messages in thread
From: Giridharan, Ganesan @ 2018-05-29 16:32 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: users


I would assume there should be at least a few broadcast packets arriving. I also have ping running against an IP address that maps to the MAC of testpmd's port 0.
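
One caveat with the ping test: in io forwarding mode testpmd does not answer
ARP, so unless the sender already holds a neighbor entry for that IP the ping
may never be transmitted at all. A sketch of pinning the resolution on a
Linux sender (the IP and interface are examples; the MAC is port 0's from the
log):

  # Force the target IP to resolve to testpmd port 0's MAC, then ping
  ip neigh replace 192.168.230.10 lladdr 00:15:5d:0a:6e:0a dev eth0
  ping -c 5 192.168.230.10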

From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Tuesday, May 29, 2018 10:23 AM
To: Giridharan, Ganesan <ggiridharan@rbbn.com>
Cc: users <users@dpdk.org>
Subject: Re: [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11


One possibility is that the Hyper-V host filters based on MAC address. What MAC address is the sending application using?

On Tue, May 29, 2018, 3:56 AM Giridharan, Ganesan <ggiridharan@rbbn.com> wrote:
[original message with full testpmd log quoted; snipped]


end of thread, other threads:[~2018-05-29 16:32 UTC | newest]

Thread overview: 4+ messages
2018-05-29 10:56 [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11 Giridharan, Ganesan
2018-05-29 15:22 ` Stephen Hemminger
2018-05-29 15:29   ` Giridharan, Ganesan
2018-05-29 16:32   ` Giridharan, Ganesan