DPDK patches and discussions
* [dpdk-dev] KNI with multiple kthreads per port
@ 2015-02-28 22:39 JP M.
  2015-03-01  0:46 ` Neil Horman
  2015-03-05  5:40 ` Zhang, Helin
  0 siblings, 2 replies; 4+ messages in thread
From: JP M. @ 2015-02-28 22:39 UTC
  To: dev

Howdy! First time posting; please be gentle. :-)

Environment:
 * DPDK 1.8.0 release
 * Linux kernel 3.0.3x-ish
 * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)

I'm trying to use the KNI example app with a configuration where multiple
kthreads are created for a physical port. Per the user guide and code, the
first such kthread is the "master", and the only one configurable; I'll
refer to the additional kthread(s) as "slaves", although their relationship
to the master kthread isn't discussed anywhere that I've looked thus far.

# insmod rte_kni.ko kthread_mode=multiple
# kni [....] --config="(0,0,1,2,3)"
# ifconfig vEth0_0 10.0.0.1 netmask 255.255.255.0

From the above: PMD-bound physical port0. Rx/Tx on cores 0 and 1,
respectively. Master thread on core 2, one slave kthread on core 3.  Upon
startup, KNI devices vEth0_0 (master) and vEth0_1 (slave) are created.
After ifconfig, vEth0_0 works fine; by design, vEth0_1 cannot be configured.
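
Spelled out, the --config tuple above (format per the KNI sample app:
(port,lcore_rx,lcore_tx[,lcore_kthread,...])):

--config="(0,0,1,2,3)"
           | | | | |
           | | | | +-- lcore 3: additional (slave) kthread -> vEth0_1
           | | | +---- lcore 2: master kthread -> vEth0_0
           | | +------ lcore 1: Tx lcore for port 0
           | +-------- lcore 0: Rx lcore for port 0
           +---------- port 0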

The problem I'm encountering is that the subset of packets hitting vEth0_1
are being dropped... somewhere.  They're definitely getting as far as the
call to netif_rx(skb).  I'll try on a newer system for comparison.  But
before I go too much further, I'd like to establish the correct set-up and
expectations.

Should I be bonding vEth0_0 and vEth0_1?  Because I tried doing so (via
sysfs); however, attempts to add either as slaves to bond0 were ignored.

Any ideas appreciated. (Though it may end up being a moot point, with the
other work this past week on KNI performance.)


* Re: [dpdk-dev] KNI with multiple kthreads per port
  2015-02-28 22:39 [dpdk-dev] KNI with multiple kthreads per port JP M.
@ 2015-03-01  0:46 ` Neil Horman
  2015-03-05  5:40 ` Zhang, Helin
  1 sibling, 0 replies; 4+ messages in thread
From: Neil Horman @ 2015-03-01  0:46 UTC
  To: JP M.; +Cc: dev

On Sat, Feb 28, 2015 at 02:39:40PM -0800, JP M. wrote:
> Howdy! First time posting; please be gentle. :-)
> 
> Environment:
>  * DPDK 1.8.0 release
>  * Linux kernel 3.0.3x-ish
>  * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)
> 
> I'm trying to use the KNI example app with a configuration where multiple
> kthreads are created for a physical port. Per the user guide and code, the
> first such kthread is the "master", and the only one configurable; I'll
> refer to the additional kthread(s) as "slaves", although their relationship
> to the master kthread isn't discussed anywhere that I've looked thus far.
> 
> # insmod rte_kni.ko kthread_mode=multiple
> # kni [....] --config="(0,0,1,2,3)"
> # ifconfig vEth0_0 10.0.0.1 netmask 255.255.255.0
> 
> From the above: PMD-bound physical port0. Rx/Tx on cores 0 and 1,
> respectively. Master thread on core 2, one slave kthread on core 3.  Upon
> startup, KNI devices vEth0_0 (master) and vEth0_1 (slave) are created.
> After ifconfig, vEth0_0 works fine; by design, vEth0_1 cannot be configured.
> 
> The problem I'm encountering is that the subset of packets hitting vEth0_1
> are being dropped... somewhere.  They're definitely getting as far as the
> call to netif_rx(skb).  I'll try on a newer system for comparison.  But
> before I go too much further, I'd like to establish the correct set-up and
> expectations.
> 
> Should I be bonding vEth0_0 and vEth0_1?  Because I tried doing so (via
> sysfs); however, attempts to add either as slaves to bond0 were ignored.
> 
> Any ideas appreciated. (Though it may end up being a moot point, with the
> other work this past week on KNI performance.)
> 
Start by using dropwatch.  If you know that you're getting as far as netif_rx,
then you know you're getting into the kernel networking stack.  Dropwatch will
tell you exactly where you're losing frames, and you can work backwards from
there to figure out the why behind the event.
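
A rough session (from memory; exact prompts and output vary by dropwatch
version):

# dropwatch -l kas        <- resolve drop points via kallsyms
dropwatch> start          <- drops print as e.g. "1 drops at <symbol>+<offset>"
... reproduce the traffic that hits vEth0_1 ...
dropwatch> stop

The reported symbol is the function that freed the skb, which is usually
enough to pin down the subsystem responsible.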

Neil


* Re: [dpdk-dev] KNI with multiple kthreads per port
  2015-02-28 22:39 [dpdk-dev] KNI with multiple kthreads per port JP M.
  2015-03-01  0:46 ` Neil Horman
@ 2015-03-05  5:40 ` Zhang, Helin
  2015-03-05  7:33   ` JP M.
  1 sibling, 1 reply; 4+ messages in thread
From: Zhang, Helin @ 2015-03-05  5:40 UTC
  To: JP M., dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of JP M.
> Sent: Sunday, March 1, 2015 6:40 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] KNI with multiple kthreads per port
> 
> Howdy! First time posting; please be gentle. :-)
> 
> Environment:
>  * DPDK 1.8.0 release
>  * Linux kernel 3.0.3x-ish
>  * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)
Interesting! How did you get it to work?

> 
> I'm trying to use the KNI example app with a configuration where multiple
> kthreads are created for a physical port. Per the user guide and code, the first
> such kthread is the "master", and the only one configurable; I'll refer to the
> additional kthread(s) as "slaves", although their relationship to the master
> kthread isn't discussed anywhere that I've looked thus far.
> 
> # insmod rte_kni.ko kthread_mode=multiple
> # kni [....] --config="(0,0,1,2,3)"
> # ifconfig vEth0_0 10.0.0.1 netmask 255.255.255.0
> 
> From the above: PMD-bound physical port0. Rx/Tx on cores 0 and 1,
> respectively. Master thread on core 2, one slave kthread on core 3.  Upon
> startup, KNI devices vEth0_0 (master) and vEth0_1 (slave) are created.
> After ifconfig, vEth0_0 works fine; by design, vEth0_1 cannot be configured.
What do you mean by "vEth0_1 cannot be configured"?

> 
> The problem I'm encountering is that the subset of packets hitting vEth0_1 are
> being dropped... somewhere.  They're definitely getting as far as the call to
> netif_rx(skb).  I'll try on a newer system for comparison.  But before I go too
> much further, I'd like to establish the correct set-up and expectations.
So you could check the receive side in the KNI kernel code.
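For instance (a hypothetical debug hack at the netif_rx() call in the KNI
kernel module; assumes skb and dev are in scope there):

/* debug hack: does netif_rx() itself report the drop? */
if (netif_rx(skb) == NET_RX_DROP)
	printk(KERN_DEBUG "kni: netif_rx dropped skb on %s\n", dev->name);

If netif_rx() returns NET_RX_SUCCESS instead, the drop is happening further
up the stack.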

> 
> Should I be bonding vEth0_0 and vEth0_1?  Because I tried doing so (via sysfs);
> however, attempts to add either as slaves to bond0 were ignored.
What do you mean by bonding here? Basically, KNI has no relationship to bonding.

> 
> Any ideas appreciated. (Though it may end up being a moot point, with the
> other work this past week on KNI performance.)


* Re: [dpdk-dev] KNI with multiple kthreads per port
  2015-03-05  5:40 ` Zhang, Helin
@ 2015-03-05  7:33   ` JP M.
  0 siblings, 0 replies; 4+ messages in thread
From: JP M. @ 2015-03-05  7:33 UTC
  To: Zhang, Helin; +Cc: dev

On Wed, Mar 4, 2015 at 9:40 PM, Zhang, Helin <helin.zhang@intel.com> wrote:
>>  * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)
> Interesting! How did you get it works?

In a nutshell: the current (circa 1.8.0) approach does a round of mmap
starting from the bottom of the address space, then does a second round of
mmap, then unmaps the first round. The result (AFAICT) is that much of the
address space is wasted unnecessarily. I'll try to get my patch cleaned up
and posted in the next several days.
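
The rough shape of the fix, as a hypothetical sketch (not the actual patch;
assumes 2 MB hugepages and a hugetlbfs mount at /mnt/huge):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define HUGEPAGE_SZ (2 * 1024 * 1024UL)

int main(void)
{
	size_t nr_pages = 4, i;

	/* Reserve one contiguous region up front, instead of mapping
	 * twice and throwing the first round away. */
	void *base = mmap(NULL, nr_pages * HUGEPAGE_SZ, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return 1;

	for (i = 0; i < nr_pages; i++) {
		char path[64];
		int fd;

		/* hypothetical per-page hugetlbfs files */
		snprintf(path, sizeof(path), "/mnt/huge/page%zu", i);
		fd = open(path, O_CREAT | O_RDWR, 0600);
		if (fd < 0)
			return 1;

		/* Map each hugepage at its final address inside the
		 * reservation; no second pass, no unmap. */
		if (mmap((char *)base + i * HUGEPAGE_SZ, HUGEPAGE_SZ,
			 PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED,
			 fd, 0) == MAP_FAILED)
			return 1;
		close(fd);
	}
	printf("%zu hugepages mapped contiguously at %p\n", nr_pages, base);
	return 0;
}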

> What do you mean "vEth0_1 cannot be configured"?

The primary KNI device, vEth0_0, has (I forget which) ops defined, but
any secondary KNI devices do not. That said, I now see that once the
latter are brought up, I can set their MAC address, which turns out to
be important (see below).

>> Should I be bonding vEth0_0 and vEth0_1?  Because I tried doing so (via sysfs);
>> however, attempts to add either as slaves to bond0 were ignored.
>
> What do you mean bonding here? Basically KNI has no relationship to bonding.

All the same, I figured out what I was doing wrong (user error on my
part, I think) with regard to bonding (EtherChannel) and can now get
multiple vnics to enslave. The catch was that the secondary KNI devices
are assigned a random MAC when enslaved; afterwards, it is necessary to
manually set their MAC to that of the primary KNI device. Thereafter,
Rx/Tx are load-balanced as expected. That assignment of a random MAC to
the secondary KNI devices is the hang-up; it isn't clear whether there's
a deliberate reason for it.
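
For the record, the sequence that worked was roughly this (from memory; the
bond mode is an assumption, and ordering may differ on other kernels):

# modprobe bonding                                     # creates bond0
# echo balance-rr > /sys/class/net/bond0/bonding/mode  # assumed mode
# ifconfig vEth0_0 down ; ifconfig vEth0_1 down        # sysfs enslaving wants slaves down
# echo +vEth0_0 > /sys/class/net/bond0/bonding/slaves
# echo +vEth0_1 > /sys/class/net/bond0/bonding/slaves
# ifconfig vEth0_1 hw ether <MAC of vEth0_0>           # undo the random MAC
# ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up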

 ~jp


end of thread

Thread overview: 4+ messages
2015-02-28 22:39 [dpdk-dev] KNI with multiple kthreads per port JP M.
2015-03-01  0:46 ` Neil Horman
2015-03-05  5:40 ` Zhang, Helin
2015-03-05  7:33   ` JP M.
