DPDK usage discussions
From: ".." <hyperhead@gmail.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] VF RSS available in I350-T2?
Date: Wed, 13 Dec 2017 13:53:41 +0100	[thread overview]
Message-ID: <CAL0gemM1u1VjhB2YnLhTP=eMBK1CtXjgmHLxGw4vbOvv4y0MtA@mail.gmail.com> (raw)
In-Reply-To: <CAL0gemPEiL4GLrxgYxWT=mYto0uO4n-5QWXwKFAWqU-quJ3OTQ@mail.gmail.com>

Hi, sorry, the "ignored" comment was a bit brash.

It's my first time posting, and I didn't see the email I sent arrive in my
inbox (I guess you don't get them sent to yourself), so I did wonder if it
posted OK. However, the list archives showed that it did.

Thanks
On Wed, 13 Dec 2017, 13:20 .., <hyperhead@gmail.com> wrote:

> Hi Paul,
>
> No I didn't spot that.
>
> I guess my only option now is a 10 Gb card that supports it.
>
> Thanks.
>
> On Wed, 13 Dec 2017, 12:35 Paul Emmerich, <emmericp@net.in.tum.de> wrote:
>
>> Did you consult the datasheet? It says that the VF only supports one
>> queue.
>>
>> Paul
>>
>> > Am 12.12.2017 um 13:58 schrieb .. <hyperhead@gmail.com>:
>> >
>> > I assume my message was ignored because it was not related to the DPDK
>> > software?
>> >
>> >
>> > On 11 December 2017 at 10:14, .. <hyperhead@gmail.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting
>> >> some rx_dropped on the card when I start increasing traffic. (I have
>> >> got more out of an identical bare-metal system with the same software.)
>> >>
>> >> I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel,
>> >> not the driver installed with CentOS), so the RSS parameters, among
>> >> others, are available to me.
>> >>
>> >> This then led me to investigate the interrupts on the tx/rx ring
>> >> buffers, and I noticed that the interface (VFs enabled) only had one
>> >> tx/rx queue, with its interrupts distributed between CPUs. This is on
>> >> the KVM host:
>> >>
>> >>            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7       CPU8
>> >> 100:          1         33        137          0          0          0          0          0          0   IR-PCI-MSI-edge   ens2f1
>> >> 101:       2224          0          0       6309     178807          0          0          0          0   IR-PCI-MSI-edge   ens2f1-TxRx-0
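>> >>
>> >> (ethtool should confirm the queue count as well; a quick check,
>> >> assuming the interface names above:
>> >>
>> >>   # Show supported vs. currently configured channel (queue) counts
>> >>   ethtool -l ens2f1
>> >>
>> >> where the "Combined" count should match the single TxRx queue seen in
>> >> /proc/interrupts.)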
>> >>
>> >> Looking at my standard NIC Ethernet ports, I see 1 tx and 4 rx queues.
>> >>
>> >> On the VM I only get one tx and one rx queue. (I know all the
>> >> interrupts are only using CPU0, but that is defined in our builds.)
>> >>
>> >> egrep "CPU|ens11" /proc/interrupts
>> >>            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
>> >> 34:  715885552          0          0          0          0          0          0          0   PCI-MSI-edge   ens11-tx-0
>> >> 35:  559402399          0          0          0          0          0          0          0   PCI-MSI-edge   ens11-rx-0
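>> >>
>> >> (Since everything lands on CPU0, one experiment, untested on my side,
>> >> is to pin those IRQs elsewhere by hand; IRQ numbers 34/35 are the ones
>> >> from the output above:
>> >>
>> >>   # CPU bitmasks in hex: 0x2 = CPU1, 0x4 = CPU2 (run as root)
>> >>   echo 2 > /proc/irq/34/smp_affinity
>> >>   echo 4 > /proc/irq/35/smp_affinity
>> >> )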
>> >>
>> >> I activated RSS on my card and can set it; however, if I use the
>> >> parameter max_vfs=N, then it defaults back to 1 rx / 1 tx queue per
>> >> NIC port:
>> >>
>> >> [  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
>> >> [  393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
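>> >>
>> >> (For reference, this is the kind of invocation I mean; the RSS and
>> >> max_vfs values here are illustrative, one value per port of the
>> >> two-port card:
>> >>
>> >>   # Request 4 RSS queues and 2 VFs on each port of the I350-T2
>> >>   modprobe igb RSS=4,4 max_vfs=2,2
>> >>
>> >> As soon as max_vfs is non-zero, the PF drops to the single queue pair
>> >> shown above.)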
>> >>
>> >> I have been reading some of the older DPDK posts and see that VF RSS
>> >> is implemented for some cards; does anybody know if it is available on
>> >> this card? (From my reading it seemed to be only the 10 Gb cards.)
>> >>
>> >> One of my plans, aside from trying to get more RSS queues per VM, is
>> >> to add more non-isolated CPUs to the VM so that the rx and tx queues
>> >> can distribute their load a bit, to see if this helps.
>> >>
>> >> Also, is it worth investigating the VMDq options? I understand these
>> >> to be less useful than SR-IOV, which works well for me with KVM.
>> >>
>> >>
>> >> Thanks in advance,
>> >>
>> >> Rolando
>> >>
>>
>> --
>> Chair of Network Architectures and Services
>> Department of Informatics
>> Technical University of Munich
>> Boltzmannstr. 3
>> 85748 Garching bei München, Germany
>>


Thread overview: 7+ messages
2017-12-11  9:14 ..
2017-12-12 12:58 ` ..
2017-12-13 11:26   ` Thomas Monjalon
2017-12-13 11:35   ` Paul Emmerich
2017-12-13 12:24     ` ..
2017-12-13 12:53       ` .. [this message]
  -- strict thread matches above, loose matches on Subject: below --
2017-12-08  9:57 ..
