From: Stephen Hemminger <stephen@networkplumber.org>
To: Mohammed Gamal <mgamal@redhat.com>
Cc: dev@dpdk.org, Maxime Coquelin <maxime.coquelin@redhat.com>,
Yuhui Jiang <yujiang@redhat.com>, Wei Shi <wshi@redhat.com>
Subject: Re: [dpdk-dev] Problems running netvsc multiq
Date: Wed, 5 Dec 2018 14:12:46 -0800 [thread overview]
Message-ID: <20181205141246.6729c106@xeon-e3> (raw)
In-Reply-To: <1543942571.5400.38.camel@redhat.com>
On WS2016 and 4.19.7 kernel (with the 4 patches), this is what I see:
$ sudo ./testpmd -l 0-1 -n2 --log-level=8 --log-level='pmd.*,8' --log-level='bus.vmbus,8' -- --port-topology=chained --forward-mode=rxonly --stats-period 1 --eth-peer=0,00:15:5d:1e:20:c0 --txq 2 --rxq 2
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
vmbus_scan_one(): Adding vmbus device 2dd1ce17-079e-403c-b352-a1921ee207ee
vmbus_scan_one(): Adding vmbus device 635a7ae3-091e-4410-ad59-667c4f8c04c3
vmbus_scan_one(): Adding vmbus device 58f75a6d-d949-4320-99e1-a2a2576d581c
vmbus_scan_one(): Adding vmbus device 242ff919-07db-4180-9c2e-b86cb68c8c55
vmbus_scan_one(): Adding vmbus device 7cb9f65d-684d-44dc-9d55-13d40dd60570
vmbus_scan_one(): Adding vmbus device a6dcdcb3-c4da-445a-bc12-9050eb9cebfc
vmbus_scan_one(): Adding vmbus device 2450ee40-33bf-4fbd-892e-9fb06e9214cf
vmbus_scan_one(): Adding vmbus device 99221fa0-24ad-11e2-be98-001aa01bbf6e
vmbus_scan_one(): Adding vmbus device d34b2567-b9b6-42b9-8778-0a4ec0b955bf
vmbus_scan_one(): Adding vmbus device fd149e91-82e0-4a7d-afa6-2a4166cbd7c0
vmbus_scan_one(): Adding vmbus device b6650ff7-33bc-4840-8048-e0676786f393
vmbus_scan_one(): Adding vmbus device 5620e0c7-8062-4dce-aeb7-520c7ef76171
vmbus_scan_one(): Adding vmbus device 1eccfd72-4b41-45ef-b73a-4a6e44c12924
vmbus_scan_one(): Adding vmbus device 4487b255-b88c-403f-bb51-d1f69cf17f87
vmbus_scan_one(): Adding vmbus device b5fa4c59-1916-4725-935f-5c8d09d596c5
vmbus_scan_one(): Adding vmbus device b30ed368-1a6f-4921-8d2b-4160a0dfc667
vmbus_scan_one(): Adding vmbus device f5bee29c-1741-4aad-a4c2-8fdedb46dcc2
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 935f:00:02.0 on NUMA socket 0
EAL: probe driver: 15b3:1014 net_mlx5
net_mlx5: checking device "mlx5_0"
net_mlx5: PCI information matches for device "mlx5_0"
net_mlx5: no switch support detected
net_mlx5: MPW isn't supported
net_mlx5: SWP support: 0
net_mlx5: tunnel offloading is supported
net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old OFED/rdma-core version or firmware configuration
net_mlx5: naming Ethernet device "935f:00:02.0"
net_mlx5: port is not active: "down" (1)
net_mlx5: checksum offloading is supported
net_mlx5: counters are not supported
net_mlx5: maximum Rx indirection table size is 512
net_mlx5: VLAN stripping is supported
net_mlx5: FCS stripping configuration is supported
net_mlx5: hardware Rx end alignment padding is not supported
net_mlx5: MPS is disabled
net_mlx5: port 0 reserved UAR address space: 0x7f5523f6f000
net_mlx5: port 0 MAC address is 00:15:5d:2a:16:66
net_mlx5: port 0 MTU is 1500
net_mlx5: port 0 forcing Ethernet interface up
net_mlx5: port 0 flow maximum priority: 5
dpaax: read_memory_node(): Unable to glob device-tree memory node: (/proc/device-tree/memory[@0-9]*/reg)(3)
dpaax: PA->VA translation not available;
dpaax: Expect performance impact.
vmbus_probe_one_driver(): VMBUS device 635a7ae3-091e-4410-ad59-667c4f8c04c3 on NUMA socket 0
vmbus_probe_one_driver(): probe driver: net_netvsc
eth_hn_probe(): >>
eth_hn_dev_init(): >>
hn_nvs_init(): NVS version 0x60001, NDIS version 6.30
hn_nvs_conn_rxbuf(): connect rxbuff va=0x2200402000 gpad=0xe1e2f
hn_nvs_conn_rxbuf(): receive buffer size 1728 count 9102
hn_nvs_conn_chim(): connect send buf va=0x2201302000 gpad=0xe1e30
hn_nvs_conn_chim(): send buffer 15728640 section size:6144, count:2560
hn_rndis_init(): RNDIS ver 1.0, aggpkt size 4026531839, aggpkt cnt 8, aggpkt align 8
hn_nvs_handle_vfassoc(): VF serial 2 add to port 1
hn_rndis_link_status(): link status 0x4001000b
hn_rndis_set_rxfilter(): set RX filter 0 done
hn_tx_pool_init(): create a TX send pool hn_txd_1 n=2560 size=32 socket=0
hn_rndis_get_eaddr(): MAC address 00:15:5d:2a:16:66
eth_hn_dev_init(): VMBus max channels 64
hn_rndis_query_rsscaps(): RX rings 64 indirect 128 caps 0x301
eth_hn_dev_init(): Adding VF device
hn_vf_attach(): Attach VF device 0
hn_nvs_set_datapath(): set datapath VF
vmbus_probe_one_driver(): VMBUS device 7cb9f65d-684d-44dc-9d55-13d40dd60570 on NUMA socket 0
rte_vmbus_map_device(): Not managed by UIO driver, skipped
vmbus_probe_one_driver(): VMBUS device b30ed368-1a6f-4921-8d2b-4160a0dfc667 on NUMA socket 0
rte_vmbus_map_device(): Not managed by UIO driver, skipped
Set rxonly packet forwarding mode
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 1 (socket 0)
hn_dev_configure(): >>
hn_rndis_link_status(): link status 0x40020006
hn_subchan_configure(): open 1 subchannels
vmbus_uio_get_subchan(): ring mmap not found (yet) for: 20
hn_subchan_configure(): new sub channel 1
hn_rndis_conf_rss(): >>
_hn_vf_configure(): enabling LSC for VF 0
net_mlx5: port 0 Tx queues number update: 0 -> 2
net_mlx5: port 0 Rx queues number update: 0 -> 2
hn_dev_tx_queue_setup(): >>
net_mlx5: port 0 configuring queue 0 for 256 descriptors
net_mlx5: port 0 priv->device_attr.max_qp_wr is 32768
net_mlx5: port 0 priv->device_attr.max_sge is 30
net_mlx5: port 0 adding Tx queue 0 to list
hn_dev_tx_queue_setup(): >>
net_mlx5: port 0 configuring queue 1 for 256 descriptors
net_mlx5: port 0 priv->device_attr.max_qp_wr is 32768
net_mlx5: port 0 priv->device_attr.max_sge is 30
net_mlx5: port 0 adding Tx queue 1 to list
hn_dev_rx_queue_setup(): >>
net_mlx5: port 0 configuring Rx queue 0 for 256 descriptors
net_mlx5: port 0 maximum number of segments per packet: 1
net_mlx5: port 0 CRC stripping is enabled, 0 bytes will be subtracted from incoming frames to hide it
net_mlx5: port 0 adding Rx queue 0 to list
hn_dev_rx_queue_setup(): >>
net_mlx5: port 0 configuring Rx queue 1 for 256 descriptors
net_mlx5: port 0 maximum number of segments per packet: 1
net_mlx5: port 0 CRC stripping is enabled, 0 bytes will be subtracted from incoming frames to hide it
net_mlx5: port 0 adding Rx queue 1 to list
hn_dev_start(): >>
hn_rndis_set_rxfilter(): set RX filter 0xd done
net_mlx5: port 0 starting device
net_mlx5: port 0 Tx queue 0 allocated and configured 256 WRs
net_mlx5: port 0: uar_mmap_offset 0x6000
net_mlx5: port 0 Tx queue 1 allocated and configured 256 WRs
net_mlx5: port 0: uar_mmap_offset 0x6000
net_mlx5: port 0 Rx queue 0 registering mp mbuf_pool_socket_0 having 1 chunks
net_mlx5: port 0 creating a MR using address (0x1611be400)
net_mlx5: port 0 inserting MR(0x161184e80) to global cache
net_mlx5: inserted B-tree(0x17ffe85b8)[1], [0x140000000, 0x180000000) lkey=0x4040800
net_mlx5: inserted B-tree(0x16119475e)[1], [0x140000000, 0x180000000) lkey=0x4040800
net_mlx5: port 0 Rx queue 0 allocated and configured 256 segments (max 256 packets)
net_mlx5: port 0 priv->device_attr.max_qp_wr is 32768
net_mlx5: port 0 priv->device_attr.max_sge is 30
net_mlx5: port 0 rxq 0 updated with 0x7ffca42c1388
net_mlx5: port 0 Rx queue 1 registering mp mbuf_pool_socket_0 having 1 chunks
net_mlx5: inserted B-tree(0x16119145e)[1], [0x140000000, 0x180000000) lkey=0x4040800
net_mlx5: port 0 Rx queue 1 allocated and configured 256 segments (max 256 packets)
net_mlx5: port 0 priv->device_attr.max_qp_wr is 32768
net_mlx5: port 0 priv->device_attr.max_sge is 30
net_mlx5: port 0 rxq 1 updated with 0x7ffca42c1388
net_mlx5: port 0 selected Rx vectorized function
net_mlx5: port 0 setting primary MAC address
hn_rndis_set_rxfilter(): set RX filter 0x9 done
hn_rndis_set_rxfilter(): set RX filter 0x9 done
Port 1: 00:15:5D:2A:16:66
Checking link statuses...
Done
hn_rndis_set_rxfilter(): set RX filter 0x20 done
No commandline core given, start packet forwarding
rxonly packet forwarding - ports=1 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 2 streams:
RX P=1/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=1/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
rxonly packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 1: RX queue number: 2 Tx queue number: 2
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
Port statistics ====================================
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
Also make sure you have at least as many VCPUs as DPDK queues.
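That VCPU-vs-queue advice can be turned into a quick pre-flight check before launching testpmd. This is only a sketch, not something from the thread: the function name and the hard-coded queue count (matching the "--txq 2 --rxq 2" options above) are illustrative.

```shell
# Hedged sketch: verify the guest has at least as many vCPUs as the
# per-direction DPDK queue count requested from testpmd.
check_queues() {
    # $1 = available vCPUs, $2 = requested queues; succeed if enough
    [ "$1" -ge "$2" ]
}

queues=2          # matches --txq 2 --rxq 2 in the command line above
vcpus=$(nproc)    # vCPUs visible to this guest
if check_queues "$vcpus" "$queues"; then
    echo "ok: $vcpus vCPUs for $queues queues"
else
    echo "too few vCPUs ($vcpus) for $queues queues" >&2
fi
```

Running this before testpmd avoids configuring more queues than there are cores to service them.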
Thread overview: 20+ messages
2018-11-30 11:04 Mohammed Gamal
2018-11-30 18:27 ` Stephen Hemminger
2018-11-30 19:06 ` Mohammed Gamal
2018-12-04 16:48 ` Stephen Hemminger
2018-12-04 16:56 ` Mohammed Gamal
2018-12-05 22:12 ` Stephen Hemminger [this message]
2018-12-05 22:32 ` Stephen Hemminger
2018-12-07 11:15 ` Mohammed Gamal
2018-12-07 17:31 ` Stephen Hemminger
2018-12-07 19:18 ` Stephen Hemminger
2018-12-08 8:10 ` Mohammed Gamal
2018-11-30 20:24 ` [dpdk-dev] [PATCH] bus/vmbus: fix race in sub channel creation Stephen Hemminger
2018-12-03 6:02 ` Mohammed Gamal
2018-12-03 16:48 ` Stephen Hemminger
2018-12-04 11:59 ` Mohammed Gamal
2018-12-05 22:11 ` [dpdk-dev] [PATCH v2 1/4] " Stephen Hemminger
2018-12-05 22:11 ` [dpdk-dev] [PATCH v2 2/4] net/netvsc: enable SR-IOV Stephen Hemminger
2018-12-05 22:11 ` [dpdk-dev] [PATCH v2 3/4] net/netvsc: disable multi-queue on older servers Stephen Hemminger
2018-12-05 22:11 ` [dpdk-dev] [PATCH v2 4/4] bus/vmbus: debug subchannel setup Stephen Hemminger
2018-12-19 2:02 ` [dpdk-dev] [PATCH v2 1/4] bus/vmbus: fix race in sub channel creation Thomas Monjalon