From: "Giridharan, Ganesan" <ggiridharan@rbbn.com>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] Fetching packets from NIC on Hyper-V using patch v9 & Kernel 4.16.11
Date: Tue, 29 May 2018 10:56:40 +0000
Message-ID: <CY4PR03MB2757721E2C8C267A72A9F82DCB6D0@CY4PR03MB2757.namprd03.prod.outlook.com>
Good morning.
Environment:
I am using "testpmd" to verify packets receive/transmit "io" mode on Linux guest under Hyper-V on a WS2016 DC server.
I am using "latest" pulled via git last week early. Applied "v9" patches from "Stephen Hemminger".
Linux guest I am using is Ubuntu 18.04 upgraded to 4.16.11 Kernel.
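In case it matters, the v9 series was applied on top of the git tree and built with the legacy make system, roughly as follows (the mbox filename below is only a placeholder, not the actual file name):

  cd ~/v9/dpdk
  git am /path/to/netvsc-v9-series.mbox        # apply Stephen Hemminger's v9 netvsc/vmbus series
  make install T=x86_64-native-linuxapp-gcc    # legacy make build; testpmd ends up under x86_64-native-linuxapp-gcc/app/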
Problem:
Testpmd does not seem to receive any packets.
Log:
"alias tpmd='./testpmd -l 0-1 --log-level=8 --log-level='\''pmd.net.netvsc.*:debug'\'' --log-level='\''bus.vmbus:debug'\'' --log-level='\''lib.ethdev.*:debug'\'' -- -i'"
root@ubuntu-1804-dev:~/v9/dpdk# ./huge-setup.sh eth1 eth2
Removing currently reserved hugepages
Unmounting /mnt/huge and removing directory
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs
Rebind eth1 to hv_uio_generic
Rebind eth2 to hv_uio_generic
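For reference, huge-setup.sh does roughly the equivalent of the following for each interface. The rebind sequence is based on the DPDK netvsc/vmbus setup notes rather than copied verbatim from my script, so treat the NetVSC class GUID below as an assumption; the script messages say "hv_uio_generic", but the kernel module itself should be uio_hv_generic:

  # reserve 2MB hugepages and mount hugetlbfs
  echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  mkdir -p /mnt/huge
  mount -t hugetlbfs nodev /mnt/huge
  # rebind the synthetic NIC from hv_netvsc to the generic VMBus UIO driver
  modprobe uio_hv_generic
  DEV_UUID=$(basename $(readlink /sys/class/net/eth1/device))
  NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"   # NetVSC device class GUID, per the DPDK guide
  echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
  echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
  echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind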
root@ubuntu-1804-dev:~/v9/dpdk# cd x86_64-native-linuxapp-gcc/app/
root@ubuntu-1804-dev:~/v9/dpdk/x86_64-native-linuxapp-gcc/app# tpmd
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
vmbus_scan_one(): Adding vmbus device a759f3db-77bb-4e7c-ac6f-504c268c7f2b
vmbus_scan_one(): Adding vmbus device fd149e91-82e0-4a7d-afa6-2a4166cbd7c0
vmbus_scan_one(): Adding vmbus device 58f75a6d-d949-4320-99e1-a2a2576d581c
vmbus_scan_one(): Adding vmbus device f5bee29c-1741-4aad-a4c2-8fdedb46dcc2
vmbus_scan_one(): Adding vmbus device 3e7e7e4c-e8cb-4e05-a0aa-a267ced7b73e
vmbus_scan_one(): Adding vmbus device 1eccfd72-4b41-45ef-b73a-4a6e44c12924
vmbus_scan_one(): Adding vmbus device d34b2567-b9b6-42b9-8778-0a4ec0b955bf
vmbus_scan_one(): Adding vmbus device 4487b255-b88c-403f-bb51-d1f69cf17f87
vmbus_scan_one(): Adding vmbus device 83a43d10-0c74-45ec-a293-31e29ebb1787
vmbus_scan_one(): Adding vmbus device 242ff919-07db-4180-9c2e-b86cb68c8c55
vmbus_scan_one(): Adding vmbus device 2dd1ce17-079e-403c-b352-a1921ee207ee
vmbus_scan_one(): Adding vmbus device 00000000-0000-8899-0000-000000000000
vmbus_scan_one(): Adding vmbus device 99221fa0-24ad-11e2-be98-001aa01bbf6e
vmbus_scan_one(): Adding vmbus device 1ac2a997-5040-4851-89e4-3ccd48e51cf9
vmbus_scan_one(): Adding vmbus device 5620e0c7-8062-4dce-aeb7-520c7ef76171
vmbus_scan_one(): Adding vmbus device b6650ff7-33bc-4840-8048-e0676786f393
vmbus_scan_one(): Adding vmbus device 2450ee40-33bf-4fbd-892e-9fb06e9214cf
vmbus_scan_one(): Adding vmbus device 00000000-0001-8899-0000-000000000000
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
vmbus_probe_one_driver(): VMBUS device 3e7e7e4c-e8cb-4e05-a0aa-a267ced7b73e on NUMA socket -1
rte_vmbus_map_device(): Not managed by UIO driver, skipped
vmbus_probe_one_driver(): VMBUS device 83a43d10-0c74-45ec-a293-31e29ebb1787 on NUMA socket -1
vmbus_probe_one_driver(): Invalid NUMA socket, default to 0
vmbus_probe_one_driver(): probe driver: net_netvsc
eth_hn_probe(): >>
eth_dev_vmbus_allocate(): eth_dev_vmbus_allocate: Allocating eth dev for 83a43d10-0c74-45ec-a293-31e29ebb1787
eth_dev_vmbus_allocate(): Num of ETH devices after allocation = 0
eth_hn_dev_init(): >>
hn_nvs_init(): NVS version 0x60001, NDIS version 6.30
hn_nvs_conn_rxbuf(): connect rxbuff va=0x7fcbfe0ff000 gpad=0xe1e33
hn_nvs_conn_rxbuf(): receive buffer size 1728 count 9102
hn_nvs_conn_chim(): connect send buf va=0x7fcbfd1ff000 gpad=0xe1e34
hn_nvs_conn_chim(): send buffer 15728640 section size:6144, count:2560
hn_rndis_init(): RNDIS ver 1.0, aggpkt size 4026531839, aggpkt cnt 8, aggpkt align 8
hn_rndis_link_status(): link status 0x4001000b
hn_rndis_set_rxfilter(): set RX filter 0 done
hn_tx_pool_init(): create a TX send pool hn_txd_0 n=2560 size=32 socket=0
hn_rndis_get_eaddr(): MAC address 00:15:5d:0a:6e:0a
eth_hn_dev_init(): VMBus max channels 1
hn_rndis_query_rsscaps(): RX rings 64 indirect 128 caps 0x301
vmbus_probe_one_driver(): VMBUS device a759f3db-77bb-4e7c-ac6f-504c268c7f2b on NUMA socket -1
vmbus_probe_one_driver(): Invalid NUMA socket, default to 0
vmbus_probe_one_driver(): probe driver: net_netvsc
eth_hn_probe(): >>
eth_dev_vmbus_allocate(): eth_dev_vmbus_allocate: Allocating eth dev for a759f3db-77bb-4e7c-ac6f-504c268c7f2b
eth_dev_vmbus_allocate(): Num of ETH devices after allocation = 1
eth_hn_dev_init(): >>
hn_nvs_init(): NVS version 0x60001, NDIS version 6.30
hn_nvs_conn_rxbuf(): connect rxbuff va=0x7fcbf5cfe000 gpad=0xe1e30
hn_nvs_conn_rxbuf(): receive buffer size 1728 count 9102
hn_nvs_conn_chim(): connect send buf va=0x7fcbf4dfe000 gpad=0xe1e31
hn_nvs_conn_chim(): send buffer 15728640 section size:6144, count:2560
hn_rndis_init(): RNDIS ver 1.0, aggpkt size 4026531839, aggpkt cnt 8, aggpkt align 8
hn_rndis_link_status(): link status 0x4001000b
hn_rndis_set_rxfilter(): set RX filter 0 done
hn_tx_pool_init(): create a TX send pool hn_txd_1 n=2560 size=32 socket=0
hn_rndis_get_eaddr(): MAC address 00:15:5d:0a:6e:09
eth_hn_dev_init(): VMBus max channels 1
hn_rndis_query_rsscaps(): RX rings 64 indirect 128 caps 0x301
Interactive-mode selected
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_configure(): >>
hn_rndis_link_status(): link status 0x40020006
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_tx_queue_setup(): >>
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_rx_queue_setup(): >>
hn_dev_start(): >>
hn_rndis_set_rxfilter(): set RX filter 0xd done
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_dev_link_update(): Port 0 is up
Port 0: 00:15:5D:0A:6E:0A
Configuring Port 1 (socket 0)
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_configure(): >>
hn_rndis_link_status(): link status 0x40020006
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_tx_queue_setup(): >>
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_dev_rx_queue_setup(): >>
hn_dev_start(): >>
hn_rndis_set_rxfilter(): set RX filter 0xd done
hn_dev_info_get(): >>
hn_rndis_get_offload(): offload capa Tx 0x802f Rx 0x180f
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_dev_link_update(): Port 1 is up
Port 1: 00:15:5D:0A:6E:09
Checking link statuses...
Done
hn_rndis_set_rxfilter(): set RX filter 0x20 done
hn_rndis_set_rxfilter(): set RX filter 0x20 done
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x1000 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x1000 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
hn_dev_stats_get(): >>
hn_dev_stats_get(): >>
testpmd> show port stats 0
hn_dev_stats_get(): >>
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
testpmd> show port stats 1
hn_dev_stats_get(): >>
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
testpmd> show port stats 1
hn_dev_stats_get(): >>
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
testpmd> quit
Telling cores to stop...
Waiting for lcores to finish...
hn_dev_stats_get(): >>
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
hn_dev_stats_get(): >>
---------------------- Forward statistics for port 1 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
Shutting down port 0...
Stopping ports...
hn_dev_stop(): >>
hn_rndis_set_rxfilter(): set RX filter 0 done
Done
Closing ports...
hn_dev_close(): close
Done
Shutting down port 1...
Stopping ports...
hn_dev_stop(): >>
hn_rndis_set_rxfilter(): set RX filter 0 done
Done
Closing ports...
hn_dev_close(): close
Done
Bye...
Shutting down port 0...
Stopping ports...
Done
Closing ports...
Port 0 is already closed
Done
Shutting down port 1...
Stopping ports...
Done
Closing ports...
Port 1 is already closed
Done
Bye...