DPDK patches and discussions
* [dpdk-dev] ixgbe vector mode not working.
@ 2015-02-25  0:16 Stephen Hemminger
  2015-02-25  4:55 ` Liang, Cunming
  0 siblings, 1 reply; 8+ messages in thread
From: Stephen Hemminger @ 2015-02-25  0:16 UTC (permalink / raw)
  To: Balazs Nemeth, Bruce Richardson, Cunming Liang, Neil Horman; +Cc: dev

The ixgbe driver (from 1.8 or 2.0) works fine in normal (non-vectored) mode.
But when vector mode is enabled, it gets a few packets through then hangs.
We use 2 Rx queues and 1 Tx queue per interface.

Devices:
01:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)

Log:
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
PMD: eth_ixgbe_dev_init(): MAC: 4, PHY: 3
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x1528
[    0.000043] DATAPLANE: Port 0 rte_ixgbe_pmd on socket 0
[    0.000053] DATAPLANE: Port 1 rte_ixgbe_pmd on socket 0
[    0.031638] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6a1b40 hw_ring=0x7fc5ab548300 dma_addr=0x67348300
[    0.031647] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
[    0.031653] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
[    0.031672] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6999c0 hw_ring=0x7fc5ab558380 dma_addr=0x67358380
[    0.031680] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=1.
[    0.031695] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
[    0.031708] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac697880 hw_ring=0x7fc5ab568400 dma_addr=0x67368400
[    0.035745] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac684e00 hw_ring=0x7fc5ab580480 dma_addr=0x67380480
[    0.035754] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
[    0.035761] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
[    0.035783] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac67cc80 hw_ring=0x7fc5ab590500 dma_addr=0x67390500
[    0.035792] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=1.
[    0.035798] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
[    0.035810] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac67ab40 hw_ring=0x7fc5ab5a0580 dma_addr=0x673a0580
[    5.886027] PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
[    5.886064] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000 Mbps - full-duplex
[    6.234150] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 0 Mbps - half-duplex
[    6.234196] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
[    6.886098] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000 Mbps - full-duplex
[   10.234776] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
[   11.818676] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000 Mbps - full-duplex
[   12.818758] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000 Mbps - full-duplex

Application trace shows lots of packets, then everything stops.

* Re: [dpdk-dev] ixgbe vector mode not working.
  2015-02-25  0:16 [dpdk-dev] ixgbe vector mode not working Stephen Hemminger
@ 2015-02-25  4:55 ` Liang, Cunming
  2015-02-25  7:36   ` Stephen Hemminger
  0 siblings, 1 reply; 8+ messages in thread
From: Liang, Cunming @ 2015-02-25  4:55 UTC (permalink / raw)
  To: Stephen Hemminger, Nemeth, Balazs, Richardson, Bruce, Neil Horman; +Cc: dev

Hi Stephen,

I tried on the latest master branch with testpmd,
2 rxq and 2 txq as below, vector pmd on both rx and tx. I can't reproduce it.
I checked your log; on the tx side, it looks like the tx vector path isn't enabled (it shows vpmd on rx, spmd on tx).
Would you share the below params from your app?
	RX desc=128 - RX free threshold=32
	TX desc=512 - TX free threshold=32
	TX RS bit threshold=32 - TXQ flags=0xf01
As your case uses 2 rxq and 1 txq, would you explain the traffic flow between them?
Does one thread poll packets from each rxq and send them to the specified txq?

./x86_64-native-linuxapp-gcc/app/testpmd -c 0xff00 -n 4 -- -i --coremask=f000 --txfreet=32 --rxfreet=32 --txqflags=0xf01 --txrst=32 --rxq=2 --txq=2 --numa
 [...]
Configuring Port 0 (socket 1)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace9ac0 hw_ring=0x7f99c9c3f480 dma_addr=0x1fdd83f480
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace7980 hw_ring=0x7f99c9c4f480 dma_addr=0x1fdd84f480
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace7100 hw_ring=0x7f99c9c5f480 dma_addr=0x1fdd85f480
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace6880 hw_ring=0x7f99c9c6f500 dma_addr=0x1fdd86f500
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=1.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Port 0: 90:E2:BA:30:A0:75
Configuring Port 1 (socket 1)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace4540 hw_ring=0x7f99c9c7f580 dma_addr=0x1fdd87f580
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace2400 hw_ring=0x7f99c9c8f580 dma_addr=0x1fdd88f580
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace1b80 hw_ring=0x7f99c9c9f580 dma_addr=0x1fdd89f580
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace1300 hw_ring=0x7f99c9caf600 dma_addr=0x1fdd8af600
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=1.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Port 1: 90:E2:BA:06:90:59
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> show config rxtx
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=4 - nb forwarding ports=2
  RX queues=2 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=2 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01

-Cunming

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, February 25, 2015 8:16 AM
> To: Nemeth, Balazs; Richardson, Bruce; Liang, Cunming; Neil Horman
> Cc: dev@dpdk.org
> Subject: ixgbe vector mode not working.
> 
> The ixgbe driver (from 1.8 or 2.0) works fine in normal (non-vectored) mode.
> But when vector mode is enabled, it gets a few packets through then hangs.
> We use 2 Rx queues and 1 Tx queue per interface.
> 
> Devices:
> 01:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+
> Network Connection (rev 01)
> 02:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-
> AT2 (rev 01)
> 
> Log:
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
> PMD: eth_ixgbe_dev_init(): MAC: 4, PHY: 3
> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x1528
> [    0.000043] DATAPLANE: Port 0 rte_ixgbe_pmd on socket 0
> [    0.000053] DATAPLANE: Port 1 rte_ixgbe_pmd on socket 0
> [    0.031638] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6a1b40
> hw_ring=0x7fc5ab548300 dma_addr=0x67348300
> [    0.031647] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0,
> queue=0.
> [    0.031653] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.031672] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6999c0
> hw_ring=0x7fc5ab558380 dma_addr=0x67358380
> [    0.031680] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0,
> queue=1.
> [    0.031695] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.031708] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac697880
> hw_ring=0x7fc5ab568400 dma_addr=0x67368400
> [    0.035745] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac684e00
> hw_ring=0x7fc5ab580480 dma_addr=0x67380480
> [    0.035754] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1,
> queue=0.
> [    0.035761] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.035783] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac67cc80
> hw_ring=0x7fc5ab590500 dma_addr=0x67390500
> [    0.035792] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1,
> queue=1.
> [    0.035798] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.035810] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac67ab40
> hw_ring=0x7fc5ab5a0580 dma_addr=0x673a0580
> [    5.886027] PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
> [    5.886064] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000
> Mbps - full-duplex
> [    6.234150] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 0 Mbps
> - half-duplex
> [    6.234196] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
> [    6.886098] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000
> Mbps - full-duplex
> [   10.234776] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
> [   11.818676] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000
> Mbps - full-duplex
> [   12.818758] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000
> Mbps - full-duplex
> 
> Application trace shows lots of packets, then everything stops.

* Re: [dpdk-dev] ixgbe vector mode not working.
  2015-02-25  4:55 ` Liang, Cunming
@ 2015-02-25  7:36   ` Stephen Hemminger
  2015-02-25  8:49     ` Liang, Cunming
  2015-02-25  9:18     ` Thomas Monjalon
  0 siblings, 2 replies; 8+ messages in thread
From: Stephen Hemminger @ 2015-02-25  7:36 UTC (permalink / raw)
  To: Liang, Cunming; +Cc: Nemeth, Balazs, dev

On Wed, 25 Feb 2015 04:55:09 +0000
"Liang, Cunming" <cunming.liang@intel.com> wrote:

> Hi Stephen,
> 
> I tried on the latest master branch with testpmd,
> 2 rxq and 2 txq as below, vector pmd on both rx and tx. I can't reproduce it.
> I checked your log; on the tx side, it looks like the tx vector path isn't enabled (it shows vpmd on rx, spmd on tx).
> Would you share the below params from your app?
> 	RX desc=128 - RX free threshold=32
> 	TX desc=512 - TX free threshold=32
> 	TX RS bit threshold=32 - TXQ flags=0xf01
> As your case uses 2 rxq and 1 txq, would you explain the traffic flow between them?
> Does one thread poll packets from each rxq and send them to the specified txq?

The basic thread model of the application is the same as in examples/qos_sched.
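Roughly, that split looks like the minimal sketch below (illustrative only, not the application's actual code; qos_sched's scheduler stage is omitted, and the ring burst calls use the 3-argument signatures of the DPDK 1.8/2.0 era):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define BURST 32

/* One core polls both rx queues of a port and funnels packets into a ring. */
static void
rx_core(uint8_t port, struct rte_ring *ring)
{
	struct rte_mbuf *pkts[BURST];
	uint16_t q, n;
	unsigned n_enq;

	for (;;) {
		for (q = 0; q < 2; q++) {
			n = rte_eth_rx_burst(port, q, pkts, BURST);
			n_enq = rte_ring_enqueue_burst(ring, (void **)pkts, n);
			while (n_enq < n)	/* ring full: drop the rest */
				rte_pktmbuf_free(pkts[n_enq++]);
		}
	}
}

/* Another core drains the ring into the single tx queue. */
static void
tx_core(uint8_t port, struct rte_ring *ring)
{
	struct rte_mbuf *pkts[BURST];
	unsigned n;
	uint16_t sent;

	for (;;) {
		n = rte_ring_dequeue_burst(ring, (void **)pkts, BURST);
		sent = rte_eth_tx_burst(port, 0, pkts, (uint16_t)n);
		while (sent < n)	/* tx queue full: drop the rest */
			rte_pktmbuf_free(pkts[sent++]);
	}
}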

On ixgbe:
	RX desc = 4000 - RX free threshold=32
	TX desc = 512  - TX free threshold=0 so driver sets default of 32

I was setting rx/tx conf, but since the examples don't, I went away from that.

The whole set of RX/TX tuning parameters is a very poor programming model that only
a hardware engineer could love. Requiring the application to look at the
driver string and choose magic parameter settings is, in my opinion,
an indication of an incorrect abstraction.

* Re: [dpdk-dev] ixgbe vector mode not working.
  2015-02-25  7:36   ` Stephen Hemminger
@ 2015-02-25  8:49     ` Liang, Cunming
  2015-02-26  1:07       ` Stephen Hemminger
  2015-02-25  9:18     ` Thomas Monjalon
  1 sibling, 1 reply; 8+ messages in thread
From: Liang, Cunming @ 2015-02-25  8:49 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Nemeth, Balazs, dev

Hi Stephen,

Thanks for the info; with rxd=4000, I can reproduce it.
At that point, it runs out of mbufs.
I'll follow up on this issue.

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, February 25, 2015 3:37 PM
> To: Liang, Cunming
> Cc: Nemeth, Balazs; Richardson, Bruce; Neil Horman; dev@dpdk.org
> Subject: Re: ixgbe vector mode not working.
> 
> On Wed, 25 Feb 2015 04:55:09 +0000
> "Liang, Cunming" <cunming.liang@intel.com> wrote:
> 
> > Hi Stephen,
> >
> > I tried on the latest master branch with testpmd,
> > 2 rxq and 2 txq as below, vector pmd on both rx and tx. I can't reproduce it.
> > I checked your log; on the tx side, it looks like the tx vector path isn't
> > enabled (it shows vpmd on rx, spmd on tx).
> > Would you share the below params from your app?
> > 	RX desc=128 - RX free threshold=32
> > 	TX desc=512 - TX free threshold=32
> > 	TX RS bit threshold=32 - TXQ flags=0xf01
> > As your case uses 2 rxq and 1 txq, would you explain the traffic flow
> > between them?
> > Does one thread poll packets from each rxq and send them to the specified txq?
> 
> The basic thread model of the application is the same as in examples/qos_sched.
> 
> On ixgbe:
> 	RX desc = 4000 - RX free threshold=32
> 	TX desc = 512  - TX free threshold=0 so driver sets default of 32
> 
> I was setting rx/tx conf, but since the examples don't, I went away from that.

[LCM] All these params are defined in rte_eth_rxconf/rte_eth_txconf, which are used during rte_eth_rx/tx_queue_setup.
If you don't care about the values and assign nothing, the per-device defaults are used.
For ixgbe, the default txconf enables the vector PMD. In your log it is not enabled, which is why I asked for these params.
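As a minimal sketch of the two options (struct fields per the DPDK 2.0-era rte_eth_rxconf/rte_eth_txconf; the values simply mirror the ones listed earlier in the thread, and the port/pool arguments are placeholders):

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
setup_port_queues(uint8_t port, unsigned socket_id, struct rte_mempool *pool)
{
	/* Explicit conf: the vector tx path is only picked when these satisfy
	 * the constraints the driver checks (txq_flags, tx_rs_thresh, ...). */
	struct rte_eth_rxconf rxconf = { .rx_free_thresh = 32 };
	struct rte_eth_txconf txconf = {
		.tx_free_thresh = 32,
		.tx_rs_thresh   = 32,
		.txq_flags      = 0xf01, /* NOMULTSEGS | NOOFFLOADS */
	};

	if (rte_eth_rx_queue_setup(port, 0, 128, socket_id, &rxconf, pool) < 0)
		return -1;
	if (rte_eth_tx_queue_setup(port, 0, 512, socket_id, &txconf) < 0)
		return -1;

	/* Or: pass NULL for the conf pointers, and the per-device defaults
	 * (dev_info.default_rxconf/default_txconf) are used instead. */
	return 0;
}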

> 
> The whole set of RX/TX tuning parameters is a very poor programming model that only
> a hardware engineer could love. Requiring the application to look at the
> driver string and choose magic parameter settings is, in my opinion,
> an indication of an incorrect abstraction.
[LCM] It's not necessary for the application to look at such parameters. As you said, they're only for RX/TX tuning.
If you are tuning, it makes sense to understand what these parameters mean.

 

* Re: [dpdk-dev] ixgbe vector mode not working.
  2015-02-25  7:36   ` Stephen Hemminger
  2015-02-25  8:49     ` Liang, Cunming
@ 2015-02-25  9:18     ` Thomas Monjalon
  1 sibling, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2015-02-25  9:18 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Nemeth, Balazs

2015-02-24 23:36, Stephen Hemminger:
> On Wed, 25 Feb 2015 04:55:09 +0000
> "Liang, Cunming" <cunming.liang@intel.com> wrote:
> 
> > Hi Stephen,
> > 
> > I tried on the latest master branch with testpmd,
> > 2 rxq and 2 txq as below, vector pmd on both rx and tx. I can't reproduce it.
> > I checked your log; on the tx side, it looks like the tx vector path isn't enabled (it shows vpmd on rx, spmd on tx).
> > Would you share the below params from your app?
> > 	RX desc=128 - RX free threshold=32
> > 	TX desc=512 - TX free threshold=32
> > 	TX RS bit threshold=32 - TXQ flags=0xf01
> > As your case uses 2 rxq and 1 txq, would you explain the traffic flow between them?
> > Does one thread poll packets from each rxq and send them to the specified txq?
> 
> The basic thread model of the application is the same as in examples/qos_sched.
> 
> On ixgbe:
> 	RX desc = 4000 - RX free threshold=32
> 	TX desc = 512  - TX free threshold=0 so driver sets default of 32
> 
> I was setting rx/tx conf, but since the examples don't, I went away from that.
> 
> The whole set of RX/TX tuning parameters is a very poor programming model that only
> a hardware engineer could love. Requiring the application to look at the
> driver string and choose magic parameter settings is, in my opinion,
> an indication of an incorrect abstraction.

Yes, improvements are welcome.

* Re: [dpdk-dev] ixgbe vector mode not working.
  2015-02-25  8:49     ` Liang, Cunming
@ 2015-02-26  1:07       ` Stephen Hemminger
  2015-02-28  3:33         ` Liang, Cunming
  0 siblings, 1 reply; 8+ messages in thread
From: Stephen Hemminger @ 2015-02-26  1:07 UTC (permalink / raw)
  To: Liang, Cunming; +Cc: Nemeth, Balazs, dev

On Wed, 25 Feb 2015 08:49:48 +0000
"Liang, Cunming" <cunming.liang@intel.com> wrote:

> Hi Stephen,
> 
> Thanks for the info; with rxd=4000, I can reproduce it.
> At that point, it runs out of mbufs.
> I'll follow up on this issue.

The first time I ran it, the code was configuring rx/tx conf
left over from older versions.

The second time I ran it, the same hang happened.
Looking at mbuf pool statistics I see that the pool gets exhausted,
even when extra mbufs are added to it.

Looks like a memory leak.
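(For reference, the kind of pool check meant here, as a sketch; the function names are the current DPDK ones, the DPDK 2.0-era equivalent of the available count was rte_mempool_count():)

#include <stdio.h>
#include <rte_mempool.h>

/* Print how many mbufs are still free in the pool vs. held somewhere.
 * If "in use" keeps climbing under steady traffic, something is leaking. */
static void
dump_pool_usage(const struct rte_mempool *mp)
{
	unsigned int avail  = rte_mempool_avail_count(mp);
	unsigned int in_use = rte_mempool_in_use_count(mp);

	printf("%s: %u available, %u in use, %u total\n",
	       mp->name, avail, in_use, mp->size);
}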

* Re: [dpdk-dev] ixgbe vector mode not working.
  2015-02-26  1:07       ` Stephen Hemminger
@ 2015-02-28  3:33         ` Liang, Cunming
  2015-03-05 19:09           ` Thomas Monjalon
  0 siblings, 1 reply; 8+ messages in thread
From: Liang, Cunming @ 2015-02-28  3:33 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Nemeth, Balazs, dev

Hi Stephen,

The root cause is the rx descriptor number.
As we use the code below to quickly handle the rx_tail wrap, it requires the rxd value to be a power of two (2^n).
"rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));"
We should add more checking on the input rxd; if the check fails, fall back to the scalar pmd.
Thanks for the report, I'll send a fix patch soon.

-Cunming

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Thursday, February 26, 2015 9:07 AM
> To: Liang, Cunming
> Cc: Nemeth, Balazs; Richardson, Bruce; Neil Horman; dev@dpdk.org
> Subject: Re: ixgbe vector mode not working.
> 
> On Wed, 25 Feb 2015 08:49:48 +0000
> "Liang, Cunming" <cunming.liang@intel.com> wrote:
> 
> > Hi Stephen,
> >
> > Thanks for the info; with rxd=4000, I can reproduce it.
> > At that point, it runs out of mbufs.
> > I'll follow up on this issue.
> 
> The first time I ran it, the code was configuring rx/tx conf
> left over from older versions.
> 
> The second time I ran it, the same hang happened.
> Looking at mbuf pool statistics I see that the pool gets exhausted,
> even when extra mbufs are added to it.
> 
> Looks like a memory leak.

* Re: [dpdk-dev] ixgbe vector mode not working.
  2015-02-28  3:33         ` Liang, Cunming
@ 2015-03-05 19:09           ` Thomas Monjalon
  0 siblings, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2015-03-05 19:09 UTC (permalink / raw)
  To: Liang, Cunming; +Cc: dev, Nemeth, Balazs

2015-02-28 03:33, Liang, Cunming:
> Hi Stephen,
> 
> The root cause is the rx descriptor number.
> As we use the code below to quickly handle the rx_tail wrap, it requires the rxd value to be a power of two (2^n).
> "rxq->rx_tail = (uint16_t)(rxq->rx_tail & (rxq->nb_rx_desc - 1));"
> We should add more checking on the input rxd; if the check fails, fall back to the scalar pmd.
> Thanks for the report, I'll send a fix patch soon.

Fixed in http://dpdk.org/browse/dpdk/commit/?id=352078e8e196
Thanks

> > -----Original Message-----
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Thursday, February 26, 2015 9:07 AM
> > To: Liang, Cunming
> > Cc: Nemeth, Balazs; Richardson, Bruce; Neil Horman; dev@dpdk.org
> > Subject: Re: ixgbe vector mode not working.
> > 
> > On Wed, 25 Feb 2015 08:49:48 +0000
> > "Liang, Cunming" <cunming.liang@intel.com> wrote:
> > 
> > > Hi Stephen,
> > >
> > > Thanks for the info; with rxd=4000, I can reproduce it.
> > > At that point, it runs out of mbufs.
> > > I'll follow up on this issue.
> > 
> > The first time I ran it, the code was configuring rx/tx conf
> > left over from older versions.
> > 
> > The second time I ran it, the same hang happened.
> > Looking at mbuf pool statistics I see that the pool gets exhausted,
> > even when extra mbufs are added to it.
> > 
> > Looks like a memory leak.


Thread overview: 8+ messages
2015-02-25  0:16 [dpdk-dev] ixgbe vector mode not working Stephen Hemminger
2015-02-25  4:55 ` Liang, Cunming
2015-02-25  7:36   ` Stephen Hemminger
2015-02-25  8:49     ` Liang, Cunming
2015-02-26  1:07       ` Stephen Hemminger
2015-02-28  3:33         ` Liang, Cunming
2015-03-05 19:09           ` Thomas Monjalon
2015-02-25  9:18     ` Thomas Monjalon
