DPDK patches and discussions
* [dpdk-dev] [Bug 601] Virtio-user PMD Cannot Send/Receive Packets when 2M Hugepages are Enabled
@ 2020-12-15 22:35 bugzilla
  2020-12-17 17:36 ` [dpdk-dev] [Bug 601] Virtio-user PMD Cannot Send/Receive Packets when 2M Hugepages are Enabled bugzilla
  0 siblings, 1 reply; 2+ messages in thread
From: bugzilla @ 2020-12-15 22:35 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=601

            Bug ID: 601
           Summary: Virtio-user PMD Cannot Send/Receive Packets when 2M
                    Hugepages are Enabled
           Product: DPDK
           Version: 20.11
          Hardware: All
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: drc@linux.vnet.ibm.com
  Target Milestone: ---

I am attempting to duplicate the virtio-user/vhost-user configuration described
in https://doc.dpdk.org/guides/howto/virtio_user_for_container_networking.html,
only without the container.  When I run the following commands on an x86/POWER
host with 1GB hugepages enabled, packets are transmitted/received as shown by
the command "show port stats all".  However, if the same system is configured
with 2MB hugepages, no errors are shown and no traffic is passed between the
testpmd instances.
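
For reference, a minimal sketch of how a host is switched to 2MB hugepages
(the page count and mount point below are illustrative assumptions, not
values taken from the failing systems):

$ echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
$ sudo mkdir -p /dev/hugepages
$ sudo mount -t hugetlbfs nodev /dev/hugepages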

$ sudo /home/drc/src/dpdk/build/app/dpdk-testpmd \
    --log-level="pmd.net.virtio.init,debug" \
    --log-level="pmd.net.virtio.driver,debug" --no-pci --file-prefix=virtio \
    --vdev=virtio_user0,path=/tmp/sock0 -- -i
testpmd> start

$ sudo /home/drc/src/dpdk/build/app/dpdk-testpmd \
    --log-level="pmd.net.vhost,debug" --no-pci --file-prefix=vhost-client \
    --vdev 'net_vhost0,iface=/tmp/sock0' -- -i
testpmd> start tx_first

The main difference in the debug output is the additional VHOST_SET_MEM_TABLE
messages generated when 2MB hugepages are used.
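
The likely mechanism (my analysis, not stated elsewhere in this report): the
virtio-user frontend caps the vhost-user memory table at
VHOST_MEMORY_MAX_NREGIONS (8) regions. With 1GB pages the whole mbuf pool fits
in one region, but with 2MB pages each hugepage is mmap'd from its own file
descriptor and consumes a region of its own, so the table overflows as pages
are allocated; see "update_memory_region(): Too many memory regions" in the
virtio-user log below. A hedged workaround sketch, assuming the EAL option
--single-file-segments (a real EAL option that backs all pages of a memseg
list with a single file; recommending it here is my inference, not a directive
from this report) on the virtio-user side:

$ sudo /home/drc/src/dpdk/build/app/dpdk-testpmd --single-file-segments \
    --no-pci --file-prefix=virtio --vdev=virtio_user0,path=/tmp/sock0 -- -i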


Debug logging is enabled for both instances as shown below:

$ sudo /home/drc/src/dpdk/build/app/dpdk-testpmd \
    --log-level="pmd.net.vhost,debug" --no-pci --file-prefix=vhost-client \
    --vdev 'net_vhost0,iface=/tmp/sock0' -- -i
EAL: Detected 32 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/vhost-client/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
Initializing pmd_vhost for net_vhost0
Creating VHOST-USER backend on numa socket 0
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=267456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.

Configuring Port 0 (socket 0)
VHOST_CONFIG: vhost-user server: socket created, fd: 83
VHOST_CONFIG: bind to /tmp/sock0
Port 0: 56:48:4F:53:54:00
Checking link statuses...
Done
testpmd> VHOST_CONFIG: new vhost user connection is 395
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0x10009
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:396
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:397
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x910008000
VHOST_CONFIG: read message VHOST_USER_SET_STATUS
VHOST_CONFIG: New device status(0x0000000b):
        -RESET: 0
        -ACKNOWLEDGE: 1
        -DRIVER: 1
        -FEATURES_OK: 1
        -DRIVER_OK: 0
        -DEVICE_NEED_RESET: 0
        -FAILED: 0
VHOST_CONFIG: read message VHOST_USER_GET_STATUS
VHOST_CONFIG: read message VHOST_USER_GET_STATUS
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x200000
         guest physical addr: 0x100200000
         guest virtual  addr: 0x100200000
         host  virtual  addr: 0x7eff04200000
         mmap addr : 0x7eff04200000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:399
vring0 is enabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:400
vring1 is enabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_STATUS
VHOST_CONFIG: New device status(0x0000000f):
        -RESET: 0
        -ACKNOWLEDGE: 1
        -DRIVER: 1
        -FEATURES_OK: 1
        -DRIVER_OK: 1
        -DEVICE_NEED_RESET: 0
        -FAILED: 0
VHOST_CONFIG: virtio is now ready for processing.
Vhost device 0 created

Port 0: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 0
vring0 is disabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 1
vring1 is disabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x200000
         guest physical addr: 0x100200000
         guest virtual  addr: 0x100200000
         host  virtual  addr: 0x7eff04200000
         mmap addr : 0x7eff04200000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x200000
         guest physical addr: 0x100400000
         guest virtual  addr: 0x100400000
         host  virtual  addr: 0x7efefc400000
         mmap addr : 0x7efefc400000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 2, size: 0x200000
         guest physical addr: 0x100600000
         guest virtual  addr: 0x100600000
         host  virtual  addr: 0x7efefc200000
         mmap addr : 0x7efefc200000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 3, size: 0x200000
         guest physical addr: 0x100800000
         guest virtual  addr: 0x100800000
         host  virtual  addr: 0x7efef5c00000
         mmap addr : 0x7efef5c00000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
vring0 is enabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
vring1 is enabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 0
vring0 is disabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 1
vring1 is disabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region 0, size: 0x200000
         guest physical addr: 0x100200000
         guest virtual  addr: 0x100200000
         host  virtual  addr: 0x7eff04200000
         mmap addr : 0x7eff04200000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 1, size: 0x200000
         guest physical addr: 0x100400000
         guest virtual  addr: 0x100400000
         host  virtual  addr: 0x7efefc400000
         mmap addr : 0x7efefc400000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 2, size: 0x200000
         guest physical addr: 0x100600000
         guest virtual  addr: 0x100600000
         host  virtual  addr: 0x7efefc200000
         mmap addr : 0x7efefc200000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 3, size: 0x200000
         guest physical addr: 0x100800000
         guest virtual  addr: 0x100800000
         host  virtual  addr: 0x7efef5c00000
         mmap addr : 0x7efef5c00000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 4, size: 0x200000
         guest physical addr: 0x100a00000
         guest virtual  addr: 0x100a00000
         host  virtual  addr: 0x7efef5a00000
         mmap addr : 0x7efef5a00000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 5, size: 0x200000
         guest physical addr: 0x100c00000
         guest virtual  addr: 0x100c00000
         host  virtual  addr: 0x7efef5800000
         mmap addr : 0x7efef5800000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: guest memory region 6, size: 0x200000
         guest physical addr: 0x100e00000
         guest virtual  addr: 0x100e00000
         host  virtual  addr: 0x7efef5600000
         mmap addr : 0x7efef5600000
         mmap size : 0x200000
         mmap align: 0x200000
         mmap off  : 0x0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
vring0 is enabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
vring1 is enabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 0
vring0 is disabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 1
vring1 is disabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
vring0 is enabled

Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
vring1 is enabled

Port 0: queue state event

testpmd> show port info all

********************* Infos for port 0  *********************
MAC address: 56:48:4F:53:54:00
Device name: net_vhost0
Driver name: net_vhost
Firmware-version: not available
Devargs: iface=/tmp/sock0
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10 Gbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: enabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
No RSS offload flow type is supported.
Minimum size of RX buffer: 0
Maximum configurable length of RX packet: 4294967295
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535

$ sudo /home/drc/src/dpdk/build/app/dpdk-testpmd \
    --log-level="pmd.net.virtio.init,debug" \
    --log-level="pmd.net.virtio.driver,debug" --no-pci --file-prefix=virtio \
    --vdev=virtio_user0,path=/tmp/sock0 -- -i
EAL: Detected 32 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/virtio/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
virtio_user_pmd_probe(): Backend type detected: VHOST_USER
vhost_user_sock(): VHOST_SET_OWNER
vhost_user_sock(): VHOST_GET_FEATURES
vhost_user_sock(): VHOST_USER_GET_PROTOCOL_FEATURES
vhost_user_sock(): VHOST_USER_SET_PROTOCOL_FEATURES
vhost_user_sock(): VHOST_SET_STATUS
vhost_user_sock(): VHOST_GET_STATUS
vhost_user_sock(): VHOST_GET_STATUS
vhost_user_sock(): VHOST_SET_STATUS
vhost_user_sock(): VHOST_GET_STATUS
vhost_user_sock(): VHOST_SET_STATUS
virtio_negotiate_features(): guest_features before negotiate = 8000005f10ef8028
virtio_negotiate_features(): host_features before negotiate = 910019983
virtio_negotiate_features(): features after negotiate = 910018000
vhost_user_sock(): VHOST_GET_STATUS
vhost_user_sock(): VHOST_SET_VRING_CALL
vhost_user_sock(): VHOST_SET_VRING_CALL
vhost_user_sock(): VHOST_SET_FEATURES
virtio_user_dev_set_features(): set features: 910008000
vhost_user_sock(): VHOST_SET_STATUS
vhost_user_sock(): VHOST_GET_STATUS
virtio_user_dev_update_status(): Updated Device Status(0x0000000b):
        -RESET: 0
        -ACKNOWLEDGE: 1
        -DRIVER: 1
        -DRIVER_OK: 0
        -FEATURES_OK: 1
        -DEVICE_NEED_RESET: 0
        -FAILED: 0

virtio_init_device(): PORT MAC: 86:E1:DF:59:EA:33
virtio_init_device(): link speed = -1, duplex = 0
virtio_init_device(): config->max_virtqueue_pairs=1
virtio_init_queue(): setting up queue: 0 on NUMA node -1
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem:      0x1003ad000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x1003ad000
virtio_init_vring():  >>
virtio_init_queue(): setting up queue: 1 on NUMA node -1
virtio_init_queue(): vq_size: 256
virtio_init_queue(): vring_size: 10244, rounded_vring_size: 12288
virtio_init_queue(): vq->vq_ring_mem:      0x1003a8000
virtio_init_queue(): vq->vq_ring_virt_mem: 0x1003a8000
virtio_init_vring():  >>
vhost_user_sock(): VHOST_GET_STATUS
virtio_user_dev_update_status(): Updated Device Status(0x0000000b):
        -RESET: 0
        -ACKNOWLEDGE: 1
        -DRIVER: 1
        -DRIVER_OK: 0
        -FEATURES_OK: 1
        -DEVICE_NEED_RESET: 0
        -FAILED: 0

vhost_user_sock(): VHOST_SET_MEM_TABLE
update_memory_region(): index=0 fd=81 offset=0x0 addr=0x100200000 len=2097152
vhost_user_sock(): VHOST_SET_VRING_NUM
vhost_user_sock(): VHOST_SET_VRING_BASE
vhost_user_sock(): VHOST_SET_VRING_ADDR
vhost_user_sock(): VHOST_SET_VRING_KICK
vhost_user_sock(): VHOST_SET_VRING_NUM
vhost_user_sock(): VHOST_SET_VRING_BASE
vhost_user_sock(): VHOST_SET_VRING_ADDR
vhost_user_sock(): VHOST_SET_VRING_KICK
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_STATUS
EAL: No legacy callbacks, legacy socket not created
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_MEM_TABLE
update_memory_region(): index=0 fd=81 offset=0x0 addr=0x100200000 len=2097152
update_memory_region(): index=1 fd=88 offset=0x0 addr=0x100400000 len=2097152
update_memory_region(): index=2 fd=89 offset=0x0 addr=0x100600000 len=2097152
update_memory_region(): index=3 fd=90 offset=0x0 addr=0x100800000 len=2097152
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=267456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_MEM_TABLE
update_memory_region(): index=0 fd=81 offset=0x0 addr=0x100200000 len=2097152
update_memory_region(): index=1 fd=88 offset=0x0 addr=0x100400000 len=2097152
update_memory_region(): index=2 fd=89 offset=0x0 addr=0x100600000 len=2097152
update_memory_region(): index=3 fd=90 offset=0x0 addr=0x100800000 len=2097152
update_memory_region(): index=4 fd=92 offset=0x0 addr=0x100a00000 len=2097152
update_memory_region(): index=5 fd=93 offset=0x0 addr=0x100c00000 len=2097152
update_memory_region(): index=6 fd=94 offset=0x0 addr=0x100e00000 len=2097152
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_MEM_TABLE
update_memory_region(): index=0 fd=81 offset=0x0 addr=0x100200000 len=2097152
update_memory_region(): index=1 fd=88 offset=0x0 addr=0x100400000 len=2097152
update_memory_region(): index=2 fd=89 offset=0x0 addr=0x100600000 len=2097152
update_memory_region(): index=3 fd=90 offset=0x0 addr=0x100800000 len=2097152
update_memory_region(): index=4 fd=92 offset=0x0 addr=0x100a00000 len=2097152
update_memory_region(): index=5 fd=93 offset=0x0 addr=0x100c00000 len=2097152
update_memory_region(): index=6 fd=94 offset=0x0 addr=0x100e00000 len=2097152
update_memory_region(): index=7 fd=95 offset=0x0 addr=0x101000000 len=2097152
update_memory_region(): Too many memory regions
vhost_user_sock(): VHOST_SET_VRING_ENABLE
vhost_user_sock(): VHOST_SET_VRING_ENABLE

Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.

Configuring Port 0 (socket 0)
virtio_dev_configure(): configure
virtio_dev_tx_queue_setup():  >>
virtio_dev_rx_queue_setup():  >>
virtio_dev_rx_queue_setup_finish():  >>
virtio_dev_rx_queue_setup_finish(): Allocated 256 bufs
virtio_dev_tx_queue_setup_finish():  >>
virtio_dev_start(): nb_queues=1
virtio_dev_start(): Notified backend at initialization
set_rxtx_funcs(): virtio: using inorder Tx path on port 0
set_rxtx_funcs(): virtio: using inorder Rx path on port 0
virtio_dev_link_update(): Get link status from hw
virtio_dev_link_update(): Port 0 is up
virtio_dev_promiscuous_disable(): host does not support rx control
virtio_dev_allmulticast_disable(): host does not support rx control
Port 0: 86:E1:DF:59:EA:33
Checking link statuses...
Done
virtio_dev_promiscuous_enable(): host does not support rx control
Error during enabling promiscuous mode for port 0: Operation not supported -
ignore
testpmd> show port info all

********************* Infos for port 0  *********************
MAC address: 86:E1:DF:59:EA:33
Device name: virtio_user0
Driver name: net_virtio_user
Firmware-version: not available
Devargs: path=/tmp/sock0
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: Unknown
Link duplex: half-duplex
MTU: 1500
Promiscuous mode: disabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
No RSS offload flow type is supported.
Minimum size of RX buffer: 64
Maximum configurable length of RX packet: 9728
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
testpmd> start tx_first
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP
allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 32         TX-errors: 0          TX-bytes:  2048

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 32         TX-errors: 0          TX-bytes:  2048

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

* [dpdk-dev] [Bug 601] Virtio-user PMD Cannot Send/Receive Packets when 2M Hugepages are Enabled
  2020-12-15 22:35 [dpdk-dev] [Bug 601] Virtio-user PMD Cannot Send/Receive Packets when 2M Hugepages are Enabled bugzilla
@ 2020-12-17 17:36 ` bugzilla
  0 siblings, 0 replies; 2+ messages in thread
From: bugzilla @ 2020-12-17 17:36 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=601

David Christensen (drc@linux.vnet.ibm.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|---                         |FIXED
             Status|UNCONFIRMED                 |RESOLVED

--- Comment #2 from David Christensen (drc@linux.vnet.ibm.com) ---
Following directions in Comment 1 resolved the issue.
