From: Avi Kivity
To: Stephen Hemminger, Vlad Zolotarov
Cc: dev@dpdk.org
Date: Thu, 30 Jul 2015 20:22:19 +0300
Message-ID: <55BA5D4B.30009@cloudius-systems.com>
Subject: Re: [dpdk-dev] RFC: i40e xmit path HW limitation

On 07/30/2015 08:01 PM, Stephen Hemminger wrote:
> On Thu, 30 Jul 2015 19:50:27 +0300
> Vlad Zolotarov wrote:
>
>> On 07/30/15 19:20, Avi Kivity wrote:
>>>
>>> On 07/30/2015 07:17 PM, Stephen Hemminger wrote:
>>>> On Thu, 30 Jul 2015 17:57:33 +0300
>>>> Vlad Zolotarov wrote:
>>>>
>>>>> Hi Konstantin, Helin,
>>>>> there is a documented limitation of the xl710 controllers (i40e
>>>>> driver) which is not handled in any way by the DPDK driver.
>>>>> From the datasheet, chapter 8.4.1:
>>>>>
>>>>> "• A single transmit packet may span up to 8 buffers (up to 8 data
>>>>> descriptors per packet, including both the header and payload
>>>>> buffers).
>>>>> • The total number of data descriptors for the whole TSO (explained
>>>>> later on in this chapter) is unlimited as long as each segment
>>>>> within the TSO obeys the previous rule (up to 8 data descriptors
>>>>> per segment, for both the TSO header and the segment payload
>>>>> buffers)."
>>>>>
>>>>> This means that, for instance, a long cluster with small fragments
>>>>> has to be linearized before it may be placed on the HW ring.
>>>>> In more standard environments such as the Linux or FreeBSD drivers,
>>>>> the solution is straightforward: call skb_linearize() or
>>>>> m_collapse(), respectively.
>>>>> In a non-conformist environment like DPDK, life is not that easy:
>>>>> there is no easy way to collapse the cluster into a linear buffer
>>>>> from inside the device driver, since the device driver doesn't
>>>>> allocate memory on the fast path and uses only the user-allocated
>>>>> pools.
>>>>>
>>>>> Here are two proposals for a solution:
>>>>>
>>>>> 1. We may provide a callback that returns TRUE to the user if a
>>>>>    given cluster has to be linearized, and it should always be
>>>>>    called before rte_eth_tx_burst(). Alternatively, it may be
>>>>>    called from inside rte_eth_tx_burst(), with rte_eth_tx_burst()
>>>>>    changed to return an error code when one of the clusters it is
>>>>>    given has to be linearized.
>>>>> 2. Another option is to allocate a mempool in the driver with
>>>>>    elements consuming a single page each (standard 2KB buffers
>>>>>    would do). The number of elements in the pool should be the Tx
>>>>>    ring length multiplied by "64KB/(linear data length of a buffer
>>>>>    in the pool above)". Here I use 64KB as the maximum packet
>>>>>    length, not taking into account esoteric things like the "Giant"
>>>>>    TSO mentioned in the spec above. Then we may actually go and
>>>>>    linearize the cluster, if needed, on top of the buffers from the
>>>>>    pool above, post the buffer from that mempool on the HW ring,
>>>>>    link the original cluster to the new cluster (using the private
>>>>>    data), and release it when the send is done.
>>>> Or just silently drop heavily scattered packets (and increment
>>>> oerrors) with a PMD_TX_LOG debug message.
>>>>
>>>> I think a DPDK driver doesn't have to accept all possible mbufs and
>>>> do extra work. It seems reasonable to expect the caller to be well
>>>> behaved in this restricted ecosystem.
>>>>
>>> How can the caller know what's well behaved? It's device dependent.
>> +1
>>
>> Stephen, how do you imagine this well-behaved application? Having a
>> switch-case on the underlying device type and then "well-behaving"
>> correspondingly?
>> Not to mention that to "well-behave" the application writer has to
>> read the HW specs and understand them, which would limit the number
>> of DPDK developers to a very small group of people... ;) Not to
>> mention that the switch-case mentioned above would be a super ugly
>> thing to find in an application, and it would raise a big question
>> about the justification of DPDK's existence as an SDK providing a
>> device driver interface. ;)
> Either have a RTE_MAX_MBUF_SEGMENTS that is global or
> a mbuf_linearize function? The driver can already stash the
> mbuf pool used for Rx and reuse it for the transient Tx buffers.
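For concreteness, a minimal sketch of what such an mbuf_linearize()
helper might look like. The function name, the caller-supplied pool,
and the single-buffer assumption are all hypothetical: a real
implementation would have to chain several 2KB buffers to hold a 64KB
TSO packet and carry over the offload metadata from the original chain.

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_memcpy.h>

/* Hypothetical mbuf_linearize(): copy a scattered chain into a single
 * buffer taken from 'pool'. Assumes the whole packet fits into one
 * buffer; a full version would chain buffers as needed. */
static struct rte_mbuf *
mbuf_linearize(struct rte_mbuf *pkt, struct rte_mempool *pool)
{
	struct rte_mbuf *copy;
	struct rte_mbuf *seg;
	char *dst;

	copy = rte_pktmbuf_alloc(pool);
	if (copy == NULL)
		return NULL;

	for (seg = pkt; seg != NULL; seg = seg->next) {
		/* rte_pktmbuf_append() updates pkt_len/data_len and
		 * fails if the buffer runs out of tailroom, i.e. if
		 * the packet does not fit after all. */
		dst = rte_pktmbuf_append(copy, seg->data_len);
		if (dst == NULL) {
			rte_pktmbuf_free(copy);
			return NULL;
		}
		rte_memcpy(dst, rte_pktmbuf_mtod(seg, void *),
			   seg->data_len);
	}

	rte_pktmbuf_free(pkt);	/* release the original chain */
	return copy;
}

A caller would invoke it only for chains that actually violate the
device limit, e.g. when pkt->nb_segs exceeds the 8 data descriptors
the xl710 allows per packet.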
The pass/fail criterion is much more complicated than that. You might
have a packet with 340 fragments transmitted successfully (on the
order of 64KB/1500 MSS-sized segments times 8 descriptors each),
while a packet with just 9 fragments fails (a non-TSO packet may use
at most 8).

What's wrong with exposing the pass/fail criterion as a
driver-supplied function? If the application is sure that its mbufs
pass, it can choose not to call it. A less constrained application
will call it, and linearize the packet itself if it fails the test.
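As a minimal sketch of that division of labor, assuming a
hypothetical rte_eth_tx_validate() as the driver-supplied check (DPDK
has no such API at the time of writing) and reusing the
mbuf_linearize() sketch from above:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical driver-supplied validation hook, for illustration
 * only. For i40e it would apply the datasheet rule: at most 8 data
 * descriptors per packet, or per TSO segment. Returns 0 if the chain
 * can be transmitted as-is. */
int rte_eth_tx_validate(uint8_t port_id, struct rte_mbuf *pkt);

/* Transmit one packet, linearizing only when the driver says the
 * chain would violate the device's limits. 'tx_pool' is the
 * transient-buffer mempool from the sketch above. */
static int
xmit_one(uint8_t port_id, uint16_t queue_id, struct rte_mbuf *pkt,
	 struct rte_mempool *tx_pool)
{
	if (rte_eth_tx_validate(port_id, pkt) != 0) {
		pkt = mbuf_linearize(pkt, tx_pool);
		if (pkt == NULL)
			return -1;	/* would count as an oerror */
	}

	/* On failure rte_eth_tx_burst() does not take ownership; a
	 * real caller would retry or free 'pkt' rather than leak it. */
	return rte_eth_tx_burst(port_id, queue_id, &pkt, 1) == 1 ? 0 : -1;
}

The point of this shape is that the device-specific knowledge (the
8-descriptors-per-segment rule) stays inside the PMD, while the
policy for an offending packet (linearize it, drop it, or never
generate it) stays with the application.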