From: Thomas Monjalon <thomas@monjalon.net>
To: hyperhead@gmail.com
Cc: users@dpdk.org, Wenzhuo Lu <wenzhuo.lu@intel.com>
Subject: Re: [dpdk-users] VF RSS availble in I350-T2?
Date: Wed, 13 Dec 2017 12:26:22 +0100 [thread overview]
Message-ID: <1611603.KVu9l4JnOr@xps> (raw)
In-Reply-To: <CAL0gemM68vSPdhLz+N0+cSb5HE6g6BmCvDcRSiUwKUvQ-ipu1w@mail.gmail.com>
12/12/2017 13:58, ..:
> I assume my message was ignored due to it not being related to dpdk
> software?
It was probably ignored because people either have not read it or are not
experts in this hardware.
I am CC'ing the maintainer of igb/e1000.
> On 11 December 2017 at 10:14, .. <hyperhead@gmail.com> wrote:
>
> > Hi,
> >
> > I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting some
> > rx_dropped on the card when I start increasing traffic. (I have got more
> > throughput with the same software out of an identical bare-metal system.)
> >
> > I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel, not
> > the driver shipped with CentOS), so the RSS parameters, amongst others, are
> > available to me.
> >
> > This then led me to investigate the interrupts on the tx/rx ring buffers,
> > and I noticed that the interface (with VFs enabled) only had one tx/rx
> > queue. This is on the KVM host:
> >
> >            CPU0  CPU1  CPU2  CPU3    CPU4  CPU5  CPU6  CPU7  CPU8
> > 100:          1    33   137     0       0     0     0     0     0  IR-PCI-MSI-edge  ens2f1
> > 101:       2224     0     0  6309  178807     0     0     0     0  IR-PCI-MSI-edge  ens2f1-TxRx-0
> >
> > Looking at my standard NIC Ethernet ports, I see 1 tx and 4 rx queues.
> >
> > On the VM I only get one tx and one rx queue. (I know all the interrupts
> > are only using CPU0, but that is defined in our builds.)
> >
> > egrep "CPU|ens11" /proc/interrupts
> >           CPU0  CPU1  CPU2  CPU3  CPU4  CPU5  CPU6  CPU7
> > 34:  715885552     0     0     0     0     0     0     0  PCI-MSI-edge  ens11-tx-0
> > 35:  559402399     0     0     0     0     0     0     0  PCI-MSI-edge  ens11-rx-0
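[A quick way to confirm how many queue vectors a port actually received is to
count its per-queue MSI-X entries in /proc/interrupts. A minimal sketch; the
helper name is hypothetical, and the interface names come from the output
above:]

```shell
#!/bin/sh
# count_queue_vectors: count the per-queue MSI-X vectors registered for an
# interface by matching its "-tx-", "-rx-", or "-TxRx-" entries in an
# interrupts table (defaults to /proc/interrupts).
count_queue_vectors() {
    iface=$1
    table=${2:-/proc/interrupts}
    grep -E -c "${iface}-(TxRx|tx|rx)-" "$table"
}
```

[On the VM above, `count_queue_vectors ens11` would report 2: one tx vector
and one rx vector, i.e. a single queue pair.]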
> >
> > I activated RSS on my card and can set it; however, if I use the parameter
> > max_vfs=n then it defaults back to 1 rx / 1 tx queue per NIC port:
> >
> > [ 392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1
> > tx queue(s)
> > [ 393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1
> > tx queue(s)
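[For reference, Intel's out-of-tree igb driver takes both parameters per
port via modprobe options. A sketch of what that configuration might look
like; the values are illustrative, not a recommendation:]

```shell
# /etc/modprobe.d/igb.conf -- illustrative values only
# RSS=4,4      -> request 4 RSS queues on each of the two I350 ports
# max_vfs=2,2  -> create 2 VFs per port; note that with VFs enabled the
#                 PF falls back to 1 rx / 1 tx queue, as the dmesg lines
#                 above show
options igb RSS=4,4 max_vfs=2,2
```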
> >
> > I have been reading some of the older dpdk posts and see that VF RSS is
> > implemented in some cards. Does anybody know if it is available in this
> > card? (From what I have read, it seemed to be only the 10Gb cards.)
> >
> > One of my plans, aside from trying to get more RSS queues per VM, is to
> > add more non-isolated CPUs to the VM, so that the rx and tx queues can
> > distribute their load a bit, to see if this helps.
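[Spreading the existing vectors across CPUs is done by writing a hex CPU
bitmask to /proc/irq/<N>/smp_affinity. A small sketch of building that mask
from a CPU list; the helper name is hypothetical, and the echo lines are
commented out since they need root and real IRQ numbers:]

```shell
#!/bin/sh
# cpu_mask: build the hex bitmask that /proc/irq/<N>/smp_affinity expects
# from a list of CPU numbers (pure arithmetic, no hardware access).
cpu_mask() {
    mask=0
    for cpu in "$@"; do
        mask=$(( mask | (1 << cpu) ))
    done
    printf '%x\n' "$mask"
}

# Hypothetical usage with the VM's IRQs 34/35 from the output above:
#   echo "$(cpu_mask 1)" > /proc/irq/34/smp_affinity   # pin tx to CPU1
#   echo "$(cpu_mask 2)" > /proc/irq/35/smp_affinity   # pin rx to CPU2
```

[cpu_mask 1 2 prints 6, i.e. binary 110 for CPUs 1 and 2.]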
> >
> > Also, is it worth investigating the VMDq options? However, I understand
> > VMDq to be less useful than SR-IOV, which works well for me with KVM.
> >
> >
> > Thanks in advance,
> >
> > Rolando
> >
>
Thread overview: 7+ messages
2017-12-11 9:14 ..
2017-12-12 12:58 ` ..
2017-12-13 11:26 ` Thomas Monjalon [this message]
2017-12-13 11:35 ` Paul Emmerich
2017-12-13 12:24 ` ..
2017-12-13 12:53 ` ..
-- strict thread matches above, loose matches on Subject: below --
2017-12-08 9:57 ..