DPDK patches and discussions
From: "Liang, Cunming" <cunming.liang@intel.com>
To: Stephen Hemminger <stephen@networkplumber.org>,
	"Nemeth, Balazs" <balazs.nemeth@intel.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	Neil Horman <nhorman@tuxdriver.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] ixgbe vector mode not working.
Date: Wed, 25 Feb 2015 04:55:09 +0000	[thread overview]
Message-ID: <D0158A423229094DA7ABF71CF2FA0DA3118DE951@shsmsx102.ccr.corp.intel.com> (raw)
In-Reply-To: <20150224161609.15f590df@urahara>

Hi Stephen,

I tried on the latest master branch with testpmd,
using 2 rxq and 2 txq as below, with the vector PMD on both rx and tx. I can't reproduce it.
I checked your log: on the tx side, it looks like the vector tx path hasn't been enabled (it shows vPMD on rx, but simple PMD on tx).
Could you share the following parameters from your app?
	RX desc=128 - RX free threshold=32
	TX desc=512 - TX free threshold=32
	TX RS bit threshold=32 - TXQ flags=0xf01
Since your case uses 2 rxq and 1 txq, could you also explain the traffic flow between them?
Does one thread poll packets from each rxq and send them to the specified txq?

./x86_64-native-linuxapp-gcc/app/testpmd -c 0xff00 -n 4 -- -i --coremask=f000 --txfreet=32 --rxfreet=32 --txqflags=0xf01 --txrst=32 --rxq=2 --txq=2 --numa
 [...]
Configuring Port 0 (socket 1)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace9ac0 hw_ring=0x7f99c9c3f480 dma_addr=0x1fdd83f480
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace7980 hw_ring=0x7f99c9c4f480 dma_addr=0x1fdd84f480
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace7100 hw_ring=0x7f99c9c5f480 dma_addr=0x1fdd85f480
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace6880 hw_ring=0x7f99c9c6f500 dma_addr=0x1fdd86f500
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=1.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Port 0: 90:E2:BA:30:A0:75
Configuring Port 1 (socket 1)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace4540 hw_ring=0x7f99c9c7f580 dma_addr=0x1fdd87f580
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace2400 hw_ring=0x7f99c9c8f580 dma_addr=0x1fdd88f580
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace1b80 hw_ring=0x7f99c9c9f580 dma_addr=0x1fdd89f580
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace1300 hw_ring=0x7f99c9caf600 dma_addr=0x1fdd8af600
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=1.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Port 1: 90:E2:BA:06:90:59
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> show config rxtx
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=4 - nb forwarding ports=2
  RX queues=2 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=2 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01

-Cunming

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, February 25, 2015 8:16 AM
> To: Nemeth, Balazs; Richardson, Bruce; Liang, Cunming; Neil Horman
> Cc: dev@dpdk.org
> Subject: ixgbe vector mode not working.
> 
> The ixgbe driver (from 1.8 or 2.0) works fine in normal (non-vectored) mode.
> But when vector mode is enabled, it gets a few packets through then hangs.
> We use 2 Rx queues and 1 Tx queue per interface.
> 
> Devices:
> 01:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+
> Network Connection (rev 01)
> 02:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-
> AT2 (rev 01)
> 
> Log:
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
> PMD: eth_ixgbe_dev_init(): MAC: 4, PHY: 3
> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x1528
> [    0.000043] DATAPLANE: Port 0 rte_ixgbe_pmd on socket 0
> [    0.000053] DATAPLANE: Port 1 rte_ixgbe_pmd on socket 0
> [    0.031638] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6a1b40
> hw_ring=0x7fc5ab548300 dma_addr=0x67348300
> [    0.031647] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0,
> queue=0.
> [    0.031653] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.031672] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6999c0
> hw_ring=0x7fc5ab558380 dma_addr=0x67358380
> [    0.031680] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0,
> queue=1.
> [    0.031695] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.031708] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac697880
> hw_ring=0x7fc5ab568400 dma_addr=0x67368400
> [    0.035745] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac684e00
> hw_ring=0x7fc5ab580480 dma_addr=0x67380480
> [    0.035754] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1,
> queue=0.
> [    0.035761] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.035783] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac67cc80
> hw_ring=0x7fc5ab590500 dma_addr=0x67390500
> [    0.035792] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1,
> queue=1.
> [    0.035798] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.035810] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac67ab40
> hw_ring=0x7fc5ab5a0580 dma_addr=0x673a0580
> [    5.886027] PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
> [    5.886064] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000
> Mbps - full-duplex
> [    6.234150] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 0 Mbps
> - half-duplex
> [    6.234196] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
> [    6.886098] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000
> Mbps - full-duplex
> [   10.234776] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
> [   11.818676] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000
> Mbps - full-duplex
> [   12.818758] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000
> Mbps - full-duplex
> 
> Application trace shows lots of packets, then everything stops.


Thread overview: 8+ messages
2015-02-25  0:16 Stephen Hemminger
2015-02-25  4:55 ` Liang, Cunming [this message]
2015-02-25  7:36   ` Stephen Hemminger
2015-02-25  8:49     ` Liang, Cunming
2015-02-26  1:07       ` Stephen Hemminger
2015-02-28  3:33         ` Liang, Cunming
2015-03-05 19:09           ` Thomas Monjalon
2015-02-25  9:18     ` Thomas Monjalon
