> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, June 24, 2020 10:09 PM
>
> 24/06/2020 22:01, Lincoln Lavoie:
> > Inline.
> >
> > On Wed, Jun 24, 2020 at 3:55 PM Thomas Monjalon <thomas@monjalon.net>
> wrote:
> >
> > > Hi,
> > >
> > > A bit of context: Daniel is going to implement a test in DTS
> > > for ethdev speed capability:
> > > http://doc.dpdk.org/guides/nics/features.html#speed-capabilities
> > >
Great! The physical layer rarely gets attention in DPDK, although it is the foundation of everything.
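For reference, the capability under test is the speed_capa bitmap filled in by each PMD. Something like this reads it (a minimal sketch, assuming an initialized port and the current ETH_LINK_SPEED_* flag names; error handling mostly omitted):

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_speed_capa(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return;
        /* speed_capa is a bitmap of ETH_LINK_SPEED_* flags */
        if (dev_info.speed_capa & ETH_LINK_SPEED_1G)
                printf("port %u advertises 1 Gbps\n", port_id);
        if (dev_info.speed_capa & ETH_LINK_SPEED_10G)
                printf("port %u advertises 10 Gbps\n", port_id);
}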
> > > 24/06/2020 21:32, Daniel Kirichok:
> > > > The Speed Capabilities test will first check the speed of each
> > > > interface that the device lists through ethtool.
> > >
> > > I assume you mean doing a query in Linux before starting DPDK.
> >
> > LYL > I hadn't thought about that approach; we were thinking we would
> > compare what the tester reports as the physical reality of the setup
> > with what the DPDK driver reports.
> > If you think we can trust the native kernel drivers as the source of
> > truth, we could read those first and compare the DPDK output.
>
> Not sure we can trust kernel infos, especially for new HW.
I agree. Otherwise we are depending on the kernel NIC driver development being ahead of the DPDK driver development.
> I was just trying to understand why ethtool came into the picture.
I guess Lincoln is referring to some "ethtool" DPDK application, right?
>
> > > > Then it compares each interface
> > > > speed to a user-defined set of expected speeds set in a newly
> > > > created config file, `speed_capabilities.cfg`.
> > >
> > > Why do you need such config file?
> > >
> > > > The test fails if an interface is
> > > > found that isn’t accounted for in the cfg file, the detected
> > > > speed is less than 1 Gb/s, or an interface detects a different
> > > > speed than what is expected from the cfg file. Otherwise, it
> > > > passes.
As I understand it, this test verifies that the speed capabilities reported by DPDK match the expectations for that driver, where the expectations are in the cfg file. This is good.
There is no need to require any minimum speed. DPDK should be allowed to support 10/100 Mbps Ethernet devices.
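For illustration, the cfg could simply map each interface to its expected speed, e.g. (purely hypothetical contents and section/key names; the actual format is up to the DTS patch):

[speed_capabilities]
port_0=10G
port_1=25G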
> > >
> > > So you don't test DPDK?
> > >
> > > Would be interesting to compare the actual link speed
> > > from rte_eth_link_get() with the advertised capability.
> > >
> > > What else do we want to test regarding link speed? autonegotiation?
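The comparison with rte_eth_link_get() could look roughly like this (a sketch only; assumes the port is started, the link is up, and the PMD fills in speed_capa; rte_eth_speed_bitflag() maps the numeric speed back to a capability flag):

#include <rte_ethdev.h>

/* Return 0 if the negotiated speed is among the advertised capabilities. */
static int
check_link_speed(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_link link;
        uint32_t flag;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return -1;
        rte_eth_link_get(port_id, &link); /* waits for link up */
        flag = rte_eth_speed_bitflag(link.link_speed,
                        link.link_duplex == ETH_LINK_FULL_DUPLEX);
        return (dev_info.speed_capa & flag) ? 0 : -1;
}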
> > >
> > LYL > This would become highly dependent on the NIC and its
> > capabilities. I have not had good luck with auto-neg on high speed
> > links like 10G SFP+ and higher. Similarly, high speed links would
> > likely require a physical change (assuming the NIC supported multiple
> > speeds) to change either the module or the DAC. We're trying to avoid
> > anything that would require physical changes that can't be forced
> > through the tester (i.e. disabling the port connected to the DUT for
> > a link down, etc.)
>
> At least, we can test that autonegotiation is establishing
> a speed advertised in capabilities, right?
>
>
Tests to verify that the link actually comes up at the expected speeds would be nice too:
Verify that 10/100/1000 Mbps copper Ethernet devices link up at the speed advertised by the tester using auto-negotiation, and at 10 and 100 Mbps half duplex when the tester doesn't provide auto-negotiation ("Parallel Detect" in IEEE 802.3 terminology). And similarly when the DUT sets the advertised capabilities.
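On the DUT side, the advertised speeds can in principle be restricted through rte_eth_conf.link_speeds at configure time, e.g. (a sketch; assumes the PMD honours link_speeds, which not all drivers do):

#include <rte_ethdev.h>

/* Make the DUT advertise only 100 Mbps full duplex via autoneg. */
static int
restrict_advertised_speed(uint16_t port_id)
{
        struct rte_eth_conf conf = { 0 };

        conf.link_speeds = ETH_LINK_SPEED_100M;
        /* For a forced speed (autoneg disabled) instead:
         * conf.link_speeds = ETH_LINK_SPEED_FIXED | ETH_LINK_SPEED_100M;
         */
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}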
Flow Control behavior should also be verified. If both tester and DUT advertise Flow Control, the driver should use Flow Control, and if tester and/or DUT advertises No Flow Control, the driver should not use Flow Control.
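A rough sketch of such a check (assumes the PMD implements rte_eth_dev_flow_ctrl_get() and that it reflects the resolved mode, which is driver dependent):

#include <rte_ethdev.h>

/* Return 0 if the current flow control mode matches the expectation,
 * e.g. RTE_FC_FULL when both tester and DUT advertise pause frames. */
static int
check_flow_ctrl(uint16_t port_id, enum rte_eth_fc_mode expected)
{
        struct rte_eth_fc_conf fc_conf = { 0 };

        if (rte_eth_dev_flow_ctrl_get(port_id, &fc_conf) != 0)
                return -1;
        return fc_conf.mode == expected ? 0 : -1;
}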
I don't have a lot of experience with multi-speed modules above 1 Gbps, but guess similar tests apply here.
And I agree that the tests should be limited to what can be automated with the tester. Running around pulling cables and swapping modules is not an option.