From: Avi Kivity <avi@cloudius-systems.com>
To: Stephen Hemminger <stephen@networkplumber.org>,
Vlad Zolotarov <vladz@cloudius-systems.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] RFC: i40e xmit path HW limitation
Date: Thu, 30 Jul 2015 20:22:19 +0300
Message-ID: <55BA5D4B.30009@cloudius-systems.com>
In-Reply-To: <20150730100158.1516dab3@urahara>

On 07/30/2015 08:01 PM, Stephen Hemminger wrote:
> On Thu, 30 Jul 2015 19:50:27 +0300
> Vlad Zolotarov <vladz@cloudius-systems.com> wrote:
>
>>
>> On 07/30/15 19:20, Avi Kivity wrote:
>>>
>>> On 07/30/2015 07:17 PM, Stephen Hemminger wrote:
>>>> On Thu, 30 Jul 2015 17:57:33 +0300
>>>> Vlad Zolotarov <vladz@cloudius-systems.com> wrote:
>>>>
>>>>> Hi, Konstantin, Helin,
>>>>> there is a documented limitation of xl710 controllers (i40e driver)
>>>>> which is not handled in any way by a DPDK driver.
>>>>> From the datasheet chapter 8.4.1:
>>>>>
>>>>> "• A single transmit packet may span up to 8 buffers (up to 8 data
>>>>> descriptors per packet including
>>>>> both the header and payload buffers).
>>>>> • The total number of data descriptors for the whole TSO (explained
>>>>> later on in this chapter) is
>>>>> unlimited as long as each segment within the TSO obeys the previous
>>>>> rule (up to 8 data descriptors
>>>>> per segment for both the TSO header and the segment payload buffers)."
>>>>>
>>>>> This means that, for instance, a long cluster with small fragments
>>>>> has to be linearized before it may be placed on the HW ring.
>>>>> In more standard environments like the Linux or FreeBSD drivers, the
>>>>> solution is straightforward - call skb_linearize()/m_collapse()
>>>>> respectively.
>>>>> In a non-conformist environment like DPDK, life is not that easy:
>>>>> there is no easy way to collapse the cluster into a linear buffer
>>>>> from inside the device driver, since the device driver doesn't
>>>>> allocate memory on the fast path and uses only user-allocated pools.
>>>>>
>>>>> Here are two proposals for a solution:
>>>>>
>>>>> 1. We may provide a callback that would return TRUE to the user if a
>>>>>    given cluster has to be linearized, and it should always be called
>>>>>    before rte_eth_tx_burst(). Alternatively, it may be called from
>>>>>    inside rte_eth_tx_burst(), and rte_eth_tx_burst() is changed to
>>>>>    return some error code when one of the clusters it's given has to
>>>>>    be linearized.
>>>>> 2. Another option is to allocate a mempool in the driver with the
>>>>>    elements consuming a single page each (standard 2KB buffers would
>>>>>    do). The number of elements in the pool should be the Tx ring
>>>>>    length multiplied by "64KB/(linear data length of the buffer in
>>>>>    the pool above)". Here I use 64KB as the maximum packet length,
>>>>>    not taking into account esoteric things like the "Giant" TSO
>>>>>    mentioned in the spec above. Then we may actually go and
>>>>>    linearize the cluster if needed on top of the buffers from the
>>>>>    pool above, post the buffer from that mempool on the HW ring,
>>>>>    link the original cluster to the new cluster (using the private
>>>>>    data), and release it when the send is done.
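
For illustration, a minimal sketch of the proposal-1 check (the names
are made up, not an existing DPDK API; it assumes the 8-descriptor rule
quoted above, and is deliberately conservative for TSO) could look like
this:

    #include <stdbool.h>
    #include <rte_mbuf.h>

    #define XL710_MAX_DESCS_PER_PKT 8   /* datasheet 8.4.1 rule */

    /* Return true if the mbuf chain must be linearized before it can
     * be posted to the xl710 Tx ring. For a non-TSO packet the whole
     * chain must fit in 8 data descriptors; for TSO the rule applies
     * per MSS-sized segment, so a precise check would slide an MSS
     * window over the chain - this sketch is conservative and may
     * request linearization where none is strictly needed. */
    static bool
    xl710_tx_needs_linearize(const struct rte_mbuf *m)
    {
            return m->nb_segs > XL710_MAX_DESCS_PER_PKT;
    }

An application would run such a check over its burst before calling
rte_eth_tx_burst() and linearize (or drop) any chain that fails it.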
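The pool sizing in proposal 2 works out as follows (example numbers
only, assuming standard 2KB buffers and no "Giant" TSO):

    #define LIN_BUF_DATA_LEN  (2 * 1024)   /* linear data per pool element */
    #define MAX_TX_PKT_LEN    (64 * 1024)  /* max packet, no "Giant" TSO */
    #define TX_RING_LEN       512          /* example Tx ring length */

    /* Worst case: every Tx ring slot holds a fully linearized 64KB
     * packet, each consuming 64KB / 2KB = 32 pool elements. */
    #define LIN_POOL_SIZE \
            (TX_RING_LEN * (MAX_TX_PKT_LEN / LIN_BUF_DATA_LEN))
    /* = 512 * 32 = 16384 elements */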
>>>> Or just silently drop heavily scattered packets (and increment oerrors)
>>>> with a PMD_TX_LOG debug message.
>>>>
>>>> I think a DPDK driver doesn't have to accept all possible mbufs and do
>>>> extra work. It seems reasonable to expect the caller to be well-behaved
>>>> in this restricted ecosystem.
>>>>
>>> How can the caller know what's well-behaved? It's device-dependent.
>> +1
>>
>> Stephen, how do you imagine this well-behaved application? Having a
>> switch-case on the underlying device type and then "well-behaving"
>> correspondingly? Not to mention that to "well-behave" the application
>> writer has to read the HW specs and understand them, which would limit
>> the number of DPDK developers to a very small number of people... ;)
>> Not to mention that the above-mentioned switch-case would be a super
>> ugly thing to find in an application, and would raise a big question
>> about the justification of DPDK's existence as an SDK providing a
>> device-driver interface. ;)
> Either have a RTE_MAX_MBUF_SEGMENTS that is global, or
> an mbuf_linearize function? The driver can already stash the
> mbuf pool used for Rx and reuse it for the transient Tx buffers.
>
The pass/fail criterion is much more complicated than that. You might
have a packet with 340 fragments transmitted successfully (64k/1500*8:
roughly 44 TSO segments at up to 8 data descriptors each) while a
packet with 9 fragments fails.

What's wrong with exposing the pass/fail criterion as a driver-supplied
function? If the application is sure that its mbufs pass, it can choose
not to call it. A less constrained application will call it, and
linearize the packet itself if it fails the test.
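
As a rough sketch, such a driver-supplied check could look like this at
the API level (rte_eth_tx_pkt_check() and linearize_mbuf() are made-up
names for illustration, not existing DPDK calls):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Imagined per-device hook, implemented by the PMD: returns 0 if
     * the chain can be transmitted as-is, -EINVAL if it must be
     * linearized first. */
    int rte_eth_tx_pkt_check(uint8_t port_id, struct rte_mbuf *m);

    /* Application-side helper that copies a chain into a single buffer
     * taken from the application's own pool; returns NULL on failure. */
    struct rte_mbuf *linearize_mbuf(struct rte_mbuf *m);

    static uint16_t
    send_one(uint8_t port, uint16_t queue, struct rte_mbuf *m)
    {
            if (rte_eth_tx_pkt_check(port, m) != 0) {
                    m = linearize_mbuf(m);
                    if (m == NULL)
                            return 0;   /* drop and count an oerror */
            }
            return rte_eth_tx_burst(port, queue, &m, 1);
    }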

Thread overview: 13+ messages
2015-07-30 14:57 Vlad Zolotarov
2015-07-30 16:10 ` [dpdk-dev] " Zhang, Helin
2015-07-30 16:44 ` Vlad Zolotarov
2015-07-30 17:33 ` Zhang, Helin
2015-07-30 17:56 ` Vlad Zolotarov
2015-07-30 19:00 ` Zhang, Helin
2015-07-30 19:25 ` Vladislav Zolotarov
2015-07-30 16:17 ` [dpdk-dev] RFC: " Stephen Hemminger
2015-07-30 16:20 ` Avi Kivity
2015-07-30 16:50 ` Vlad Zolotarov
2015-07-30 17:01 ` Stephen Hemminger
2015-07-30 17:14 ` Vlad Zolotarov
2015-07-30 17:22 ` Avi Kivity [this message]