DPDK usage discussions
* [dpdk-users] VF RSS available in I350-T2?
@ 2017-12-08  9:57 ..
  0 siblings, 0 replies; 7+ messages in thread
From: .. @ 2017-12-08  9:57 UTC (permalink / raw)
  To: users

Hi,

I have an Intel I350-T2 that I use for SR-IOV; however, I am hitting some
rx_dropped counts on the card when I start increasing traffic. (I have got
more throughput with the same software out of an identical bare-metal
system.)

I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel, not
the driver shipped with CentOS), so the RSS parameters, among others, are
available to me.

This led me to investigate the interrupts on the tx/rx ring buffers, and I
noticed that the interface (with VFs enabled) has only one tx/rx queue.
This is on the KVM host:

             CPU0   CPU1   CPU2   CPU3     CPU4   CPU5   CPU6   CPU7   CPU8
 100:           1     33    137      0        0      0      0      0      0   IR-PCI-MSI-edge   ens2f1
 101:        2224      0      0   6309   178807      0      0      0      0   IR-PCI-MSI-edge   ens2f1-TxRx-0

Looking at my standard NIC Ethernet ports, I see 1 tx and 4 rx queues.

On the VM I get only one tx and one rx queue. (I know all the interrupts
are using only CPU0, but that is defined in our builds.)

egrep "CPU|ens11" /proc/interrupts
             CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
  34:   715885552      0      0      0      0      0      0      0   PCI-MSI-edge   ens11-tx-0
  35:   559402399      0      0      0      0      0      0      0   PCI-MSI-edge   ens11-rx-0
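A quick way to confirm how many queue vectors an interface has actually registered is to count its per-queue MSI-X entries in /proc/interrupts. A minimal sketch (the helper name is mine, and the sample lines are abbreviated from the host output above; on a live system feed it /proc/interrupts directly):

```shell
#!/bin/sh
# Count per-queue interrupt vectors (lines like "ens2f1-TxRx-0") for an
# interface. Reads /proc/interrupts-style text on stdin.
count_queue_vectors() {
    grep -c -E "$1-(TxRx|[TtRr]x)-[0-9]+"
}

# Abbreviated sample captured from the host in this thread:
sample=' 100:    1   33  137     0      0   IR-PCI-MSI-edge   ens2f1
 101: 2224    0    0  6309 178807   IR-PCI-MSI-edge   ens2f1-TxRx-0'

printf '%s\n' "$sample" | count_queue_vectors ens2f1   # -> 1 (one queue pair)
# On a live system: count_queue_vectors ens2f1 < /proc/interrupts
```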

I activated RSS on the card, and can set it; however, if I use the
parameter max_vfs=n, it defaults back to 1 rx and 1 tx queue per NIC port:

[  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1
tx queue(s)
[  393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1
tx queue(s)
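For reference, with Intel's out-of-tree igb driver these knobs are normally set as module options, one comma-separated value per port. A sketch of a modprobe configuration (parameter names and value ranges should be checked against the README shipped with your driver version):

```
# /etc/modprobe.d/igb.conf -- sketch, verify against your driver's README
# RSS=4,4     request 4 RSS queues on each of the two ports
# max_vfs=2,2 create 2 VFs per port (enabling VFs is what collapses the
#             PF to a single queue pair in the dmesg output above)
options igb RSS=4,4 max_vfs=2,2
```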

I have been reading some of the older DPDK posts and see that VF RSS is
implemented in some cards. Does anybody know whether it is available on
this card? (From my reading it seemed to be only the 10Gb cards.)

Aside from trying to get more RSS queues per VM, one of my plans is to add
more non-isolated CPUs to the VM so that the rx and tx queues can
distribute their load a bit, to see if this helps.

Also, is it worth investigating the VMDq options? However, I understand
VMDq to be less useful than SR-IOV, which works well for me with KVM.
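If you do add non-isolated CPUs, you can also steer each queue's IRQ explicitly by writing a CPU bitmask to /proc/irq/&lt;N&gt;/smp_affinity. A small sketch for building the mask (the helper is hypothetical; IRQ 101 is the TxRx vector from the host output above):

```shell
#!/bin/sh
# Build an smp_affinity hex mask from a list of CPU numbers.
cpus_to_mask() {
    mask=0
    for cpu in "$@"; do
        mask=$(( mask | (1 << cpu) ))
    done
    printf '%x\n' "$mask"
}

cpus_to_mask 1 2 3    # -> e (bits for CPUs 1-3)
# Apply it (requires root):
#   echo "$(cpus_to_mask 1 2 3)" > /proc/irq/101/smp_affinity
```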


Thanks in advance,

Rolando

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] VF RSS available in I350-T2?
  2017-12-13 12:24     ` ..
@ 2017-12-13 12:53       ` ..
  0 siblings, 0 replies; 7+ messages in thread
From: .. @ 2017-12-13 12:53 UTC (permalink / raw)
  Cc: users

Hi, sorry, the "ignored" comment was a bit brash.

It's my first time posting, and I didn't see the email I sent arrive in my
inbox (I guess you don't get them sent to yourself), so I wondered whether
it had posted OK. However, the list archives showed that it did.

Thanks
On Wed, 13 Dec 2017, 13:20 .., <hyperhead@gmail.com> wrote:

> Hi Paul,
>
> No I didn't spot that.
>
> I guess my only option now is a 10Gb card that supports it.
>
> Thanks.
>
> On Wed, 13 Dec 2017, 12:35 Paul Emmerich, <emmericp@net.in.tum.de> wrote:
>
>> Did you consult the datasheet? It says that the VF only supports one
>> queue.
>>
>> Paul
>>
>> > [quoted messages trimmed]


* Re: [dpdk-users] VF RSS available in I350-T2?
  2017-12-13 11:35   ` Paul Emmerich
@ 2017-12-13 12:24     ` ..
  2017-12-13 12:53       ` ..
  0 siblings, 1 reply; 7+ messages in thread
From: .. @ 2017-12-13 12:24 UTC (permalink / raw)
  To: Paul Emmerich; +Cc: users

Hi Paul,

No I didn't spot that.

I guess my only option now is a 10Gb card that supports it.

Thanks.

On Wed, 13 Dec 2017, 12:35 Paul Emmerich, <emmericp@net.in.tum.de> wrote:

> Did you consult the datasheet? It says that the VF only supports one queue.
>
> Paul
>
> > [quoted messages trimmed]


* Re: [dpdk-users] VF RSS available in I350-T2?
  2017-12-12 12:58 ` ..
  2017-12-13 11:26   ` Thomas Monjalon
@ 2017-12-13 11:35   ` Paul Emmerich
  2017-12-13 12:24     ` ..
  1 sibling, 1 reply; 7+ messages in thread
From: Paul Emmerich @ 2017-12-13 11:35 UTC (permalink / raw)
  To: hyperhead; +Cc: users

Did you consult the datasheet? It says that the VF only supports one queue.

Paul

> [quoted messages trimmed]

-- 
Chair of Network Architectures and Services
Department of Informatics
Technical University of Munich
Boltzmannstr. 3
85748 Garching bei München, Germany 


* Re: [dpdk-users] VF RSS available in I350-T2?
  2017-12-12 12:58 ` ..
@ 2017-12-13 11:26   ` Thomas Monjalon
  2017-12-13 11:35   ` Paul Emmerich
  1 sibling, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2017-12-13 11:26 UTC (permalink / raw)
  To: hyperhead; +Cc: users, Wenzhuo Lu

12/12/2017 13:58, ..:
> I assume my message was ignored due to it not being related to dpdk
> software?

It was ignored because people either have not read it or are not experts
in this hardware.
I am CC'ing the maintainer of igb/e1000.


> [quoted message trimmed]


* Re: [dpdk-users] VF RSS available in I350-T2?
  2017-12-11  9:14 ..
@ 2017-12-12 12:58 ` ..
  2017-12-13 11:26   ` Thomas Monjalon
  2017-12-13 11:35   ` Paul Emmerich
  0 siblings, 2 replies; 7+ messages in thread
From: .. @ 2017-12-12 12:58 UTC (permalink / raw)
  To: users

I assume my message was ignored because it is not related to DPDK
software?


On 11 December 2017 at 10:14, .. <hyperhead@gmail.com> wrote:

> [quoted message trimmed]


* [dpdk-users] VF RSS available in I350-T2?
@ 2017-12-11  9:14 ..
  2017-12-12 12:58 ` ..
  0 siblings, 1 reply; 7+ messages in thread
From: .. @ 2017-12-11  9:14 UTC (permalink / raw)
  To: users

Hi,

I have an Intel I350-T2 that I use for SR-IOV; however, I am hitting some
rx_dropped counts on the card when I start increasing traffic. (I have got
more throughput with the same software out of an identical bare-metal
system.)

I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel, not
the driver shipped with CentOS), so the RSS parameters, among others, are
available to me.

This led me to investigate the interrupts on the tx/rx ring buffers, and I
noticed that the interface (with VFs enabled) has only one tx/rx queue.
This is on the KVM host:

             CPU0   CPU1   CPU2   CPU3     CPU4   CPU5   CPU6   CPU7   CPU8
 100:           1     33    137      0        0      0      0      0      0   IR-PCI-MSI-edge   ens2f1
 101:        2224      0      0   6309   178807      0      0      0      0   IR-PCI-MSI-edge   ens2f1-TxRx-0

Looking at my standard NIC Ethernet ports, I see 1 tx and 4 rx queues.

On the VM I get only one tx and one rx queue. (I know all the interrupts
are using only CPU0, but that is defined in our builds.)

egrep "CPU|ens11" /proc/interrupts
             CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
  34:   715885552      0      0      0      0      0      0      0   PCI-MSI-edge   ens11-tx-0
  35:   559402399      0      0      0      0      0      0      0   PCI-MSI-edge   ens11-rx-0

I activated RSS on the card, and can set it; however, if I use the
parameter max_vfs=n, it defaults back to 1 rx and 1 tx queue per NIC port:

[  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1
tx queue(s)
[  393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1
tx queue(s)

I have been reading some of the older DPDK posts and see that VF RSS is
implemented in some cards. Does anybody know whether it is available on
this card? (From my reading it seemed to be only the 10Gb cards.)

Aside from trying to get more RSS queues per VM, one of my plans is to add
more non-isolated CPUs to the VM so that the rx and tx queues can
distribute their load a bit, to see if this helps.

Also, is it worth investigating the VMDq options? However, I understand
VMDq to be less useful than SR-IOV, which works well for me with KVM.


Thanks in advance,

Rolando


end of thread (newest: 2017-12-13 12:53 UTC)

Thread overview: 7+ messages
2017-12-08  9:57 [dpdk-users] VF RSS available in I350-T2? ..
2017-12-11  9:14 ..
2017-12-12 12:58 ` ..
2017-12-13 11:26   ` Thomas Monjalon
2017-12-13 11:35   ` Paul Emmerich
2017-12-13 12:24     ` ..
2017-12-13 12:53       ` ..
