DPDK usage discussions
* [dpdk-users] VF RSS available in I350-T2?
@ 2017-12-11  9:14 ..
  2017-12-12 12:58 ` ..
  0 siblings, 1 reply; 7+ messages in thread
From: .. @ 2017-12-11  9:14 UTC (permalink / raw)
  To: users

Hi,

I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting some
rx_dropped on the card when I start increasing traffic. (I got more out of
an identical bare-metal system with the same software.)
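
A rough sketch of checking where the drops are being counted (this
assumes the PF is ens2f1, as in the interrupt listing further below;
substitute your own interface name):

ethtool -S ens2f1 | grep -iE 'drop|miss|no_buffer'   # driver/HW counters
ip -s link show dev ens2f1                           # kernel-level stats
cat /sys/class/net/ens2f1/statistics/rx_dropped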

I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel, not
the driver installed with CentOS), so the RSS parameters, amongst others,
are available to me.
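
As a minimal sketch, this is roughly how the out-of-tree igb driver can
be loaded with explicit per-port parameters (the values below are
placeholders; the driver README documents the accepted ranges):

# comma-separated values, one per port
rmmod igb
modprobe igb RSS=4,4 max_vfs=2,2
dmesg | grep -i 'igb.*queue'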

This then led me to investigate the interrupts on the tx/rx ring buffers,
and I noticed that the interface (VFs enabled) only had one tx/rx queue.
This is on the KVM host:

             CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7       CPU8
 100:           1         33        137          0          0          0          0          0          0   IR-PCI-MSI-edge   ens2f1
 101:        2224          0          0       6309     178807          0          0          0          0   IR-PCI-MSI-edge   ens2f1-TxRx-0

Looking at my standard NIC Ethernet ports I see 1 tx and 4 rx queues.
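
The queue/channel counts can also be compared directly with ethtool
(a sketch; ens2f1 is assumed from the listing above, and whether the
channel count can be changed depends on the driver):

ethtool -l ens2f1              # supported vs. currently active channels
ethtool -L ens2f1 combined 4   # request more channels, if supported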

On the VM I only get one tx and one rx queue. (I know all the interrupts
are only using CPU0, but that is defined in our builds.)

egrep "CPU|ens11" /proc/interrupts
            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
 34:   715885552          0          0          0          0          0          0          0   PCI-MSI-edge   ens11-tx-0
 35:   559402399          0          0          0          0          0          0          0   PCI-MSI-edge   ens11-rx-0
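
Since everything lands on CPU0 above, one first experiment is to spread
the per-IRQ affinity manually inside the VM (a sketch; IRQ numbers 34
and 35 are taken from the output above, and the CPU masks are hex
bitmasks):

echo 1 > /proc/irq/34/smp_affinity   # ens11-tx-0 -> CPU0
echo 2 > /proc/irq/35/smp_affinity   # ens11-rx-0 -> CPU1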

I activated RSS on the card and can set it; however, if I use the param
max_vfs=n then it defaults back to 1 rx / 1 tx queue per NIC port:

[  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
[  393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)

I have been reading some of the older DPDK posts and see that VF RSS is
implemented in some cards. Does anybody know if it is available in this
card? (From my reading it only seemed to be the 10G cards.)
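
One way to see what the VF's PMD will accept is to ask testpmd for
multiple RSS queues from inside the VM (a sketch; the VF PCI address
and the build paths are placeholders, and the VF must first be bound
to vfio-pci or igb_uio):

./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:07.0
./build/app/testpmd -l 0-1 -n 4 -- -i --rxq=2 --txq=2 --rss-ip
# if the igbvf PMD is limited to a single queue pair, the --rxq/--txq
# request should fail or be capped at 1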

One of my plans, aside from trying to get more RSS queues per VM, is to
add more CPUs to the VM that are not isolated, so that the rx and tx
queues can distribute their load a bit, to see if this helps.
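
Under KVM/libvirt that could look roughly like this (a sketch; the
domain name, vCPU numbers and host core numbers are placeholders):

virsh setvcpus guest01 4 --live      # grow the guest to 4 vCPUs
virsh vcpupin guest01 2 6            # pin new vCPU 2 to host core 6
virsh vcpupin guest01 3 7            # pin new vCPU 3 to host core 7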

Also, is it worth investigating the VMDq options? I understand VMDq to be
less useful than SR-IOV, which works well for me with KVM.


Thanks in advance,

Rolando


end of thread, other threads:[~2017-12-13 12:53 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-11  9:14 [dpdk-users] VF RSS available in I350-T2? ..
2017-12-12 12:58 ` ..
2017-12-13 11:26   ` Thomas Monjalon
2017-12-13 11:35   ` Paul Emmerich
2017-12-13 12:24     ` ..
2017-12-13 12:53       ` ..
  -- strict thread matches above, loose matches on Subject: below --
2017-12-08  9:57 ..
