* [dpdk-users] ovs-ofctl dump-ports shows a huge rx_bytes value when using dpdk i40e pmd driver
From: lidejun @ 2017-12-24 6:50 UTC (permalink / raw)
To: users; +Cc: Lichunhe, zangchuanqiang
Hi all, I have run into the following issue. I have an Intel X710 card with two ports on it. After adding one of the ports to an OVS + DPDK bridge, ovs-ofctl dump-ports shows a huge rx_bytes value:
ovs-ofctl dump-ports br0
OFPST_PORT reply (xid=0x2): 3 ports
port 2: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
port 1: rx pkts=0, bytes=18446744073709551612, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
port LOCAL: rx pkts=9, bytes=786, drop=10, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
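Note that 18446744073709551612 is 2^64 - 4, i.e. it looks like a small negative number interpreted as an unsigned 64-bit value, so I suspect an underflow in the statistics calculation rather than real traffic.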
The OVS version is 2.7.3 and the DPDK version is LTS 16.11.4. From ethtool I can see:
driver: i40e
version: 2.0.23
firmware-version: 5.05 0x80002a82 1.1313.0
Some other clues from gdb: just before i40e_dev_stats_get returns, pf->stats shows rx_bytes = 18446744073709551540, rx_unicast = 0, rx_multicast = 19, rx_broadcast = 0.
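That value is 2^64 - 76, which again looks like an unsigned 64-bit underflow. Below is a minimal C sketch (not actual i40e PMD code; the variable names are hypothetical) showing how subtracting a stale statistics offset that is larger than the current hardware counter reading produces exactly this kind of wrapped value:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration only, not i40e PMD code: if a driver computes
 * rx_bytes as (current hardware counter - saved offset) and the saved offset
 * is larger than the current reading (e.g. after a counter reset or a missed
 * wrap), the unsigned subtraction wraps around to a value close to 2^64. */
int main(void)
{
    uint64_t hw_counter = 0;   /* counter read after a reset */
    uint64_t offset     = 76;  /* offset saved before the reset */

    uint64_t rx_bytes = hw_counter - offset;   /* wraps to 2^64 - 76 */

    printf("rx_bytes = %llu\n", (unsigned long long)rx_bytes);
    /* prints 18446744073709551540, matching the value I see in gdb */
    return 0;
}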
Has anybody else run into this issue?