Date: Wed, 20 Apr 2016 10:47:59 -0500
From: Jay Rolette
To: Bruce Richardson
Cc: DPDK, Andriy Berestovskyy
Subject: Re: [dpdk-dev] Couple of PMD questions

On Wed, Apr 20, 2016 at 9:05 AM, Bruce Richardson
<bruce.richardson@intel.com> wrote:

> On Wed, Apr 20, 2016 at 07:22:57AM -0500, Jay Rolette wrote:
> > On Wed, Apr 20, 2016 at 4:10 AM, Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Tue, Apr 19, 2016 at 03:16:37PM -0500, Jay Rolette wrote:
> > > > In ixgbe_dev_rx_init(), there is this bit of code:
> > > >
> > > > /*
> > > >  * Configure the RX buffer size in the BSIZEPACKET field of
> > > >  * the SRRCTL register of the queue.
> > > >  * The value is in 1 KB resolution. Valid values can be from
> > > >  * 1 KB to 16 KB.
> > > >  */
> > > > buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> > > >                       RTE_PKTMBUF_HEADROOM);
> > > > srrctl |= ((buf_size >> IXGBE_SRRCTL_BSIZEPKT_SHIFT) &
> > > >            IXGBE_SRRCTL_BSIZEPKT_MASK);
> > > >
> > > > IXGBE_WRITE_REG(hw, IXGBE_SRRCTL(rxq->reg_idx), srrctl);
> > > >
> > > > buf_size = (uint16_t)((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
> > > >                       IXGBE_SRRCTL_BSIZEPKT_SHIFT);
> > > >
> > > > /* It adds dual VLAN length for supporting dual VLAN */
> > > > if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> > > >     2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> > > >         dev->data->scattered_rx = 1;
> > > >
> > > > Couple of questions I have about it:
> > > >
> > > > 1) If the caller configured the mbuf memory pool data room size to be
> > > > something other than a multiple of 1 KB (+ RTE_PKTMBUF_HEADROOM), then
> > > > the driver ends up silently programming the NIC to use a smaller max
> > > > RX size than the caller specified.
> > > >
> > > > Should the driver error out in that case instead of only "sort of"
> > > > working? If not, it seems like it should at least log an error message.
> > >
> > > Well, it does log at the end of the whole RX init process what RX
> > > function is being used, so there is that.
> > > However, looking at it now, given that there is an explicit setting in
> > > the config to request scattered mode, I think you may be right and that
> > > we should error out here if scattered mode is needed and not set. It
> > > could help prevent unexpected problems with these drivers.
> >
> > +1. A hard fail at init if I've misconfigured things is much preferred
> > to something that mostly works for typical test cases and only fails on
> > corner/limit testing.
> >
> > > > 2) Second question is about the "/* It adds dual VLAN length for
> > > > supporting dual VLAN */" bit...
> > > > As I understand it, dev_conf.rxmode.max_rx_pkt_len is supposed to be
> > > > set to the max frame size you support (although it says it's only
> > > > used if jumbo frame is enabled). That same value is generally used
> > > > when calculating the size that mbuf elements should be created at.
> > > >
> > > > The description for the data_room_size parameter of
> > > > rte_pktmbuf_pool_create():
> > > >
> > > > "Size of data buffer in each mbuf, including RTE_PKTMBUF_HEADROOM."
> > > >
> > > > If I support a max frame size of 9216 bytes (exactly a 1 KB multiple
> > > > to make the NIC happy), then max_rx_pkt_len is going to be 9216 and
> > > > data_room_size will be 9216 + RTE_PKTMBUF_HEADROOM.
> > > >
> > > > ixgbe_dev_rx_init() will normalize that back to 9216, which will fail
> > > > the dual VLAN length check. The really nasty part about that is it
> > > > has a side-effect of enabling scattered RX regardless of the fact
> > > > that I didn't enable scattered RX in dev_conf.rxmode.
> > > >
> > > > The code in the e1000 PMD is similar, so nothing unique to ixgbe.
> > > >
> > > > Is that check correct? It seems wrong to be adding space for q-in-q
> > > > on top of your specified max frame size...
> > >
> > > The issue here is what the correct behaviour needs to be. If we have
> > > the user specify the maximum frame size including all VLAN tags, then
> > > we hit the problem where we have to subtract the VLAN tag sizes when
> > > writing the value to the NIC. In that case, we will hit a problem where
> > > we get e.g. a 9210 byte frame - to reuse your example - without any
> > > VLAN tags, which will be rejected by the hardware as being oversized.
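(As an aside, that driver check reduces to plain integer math. A standalone sketch of just the comparison, with the constant mirroring IXGBE_VLAN_TAG_SIZE; this is not driver code:)

```c
#include <stdbool.h>
#include <stdint.h>

#define VLAN_TAG_SIZE 4  /* mirrors IXGBE_VLAN_TAG_SIZE */

/* The driver's dual VLAN check: scattered RX gets forced on whenever the
 * configured max frame size plus room for two VLAN tags exceeds the
 * per-descriptor RX buffer size the NIC was programmed with. */
bool forces_scattered_rx(uint32_t max_rx_pkt_len, uint16_t buf_size)
{
    return max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size;
}
```

With the 9216-byte example, forces_scattered_rx(9216, 9216) is true because 9216 + 8 > 9216, which is exactly the silent scattered RX flip described above.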
> > > If we don't do the subtraction, and we get the same 9210 byte packet
> > > with 2 VLAN tags on it, the hardware will accept it and then split it
> > > across multiple descriptors because the actual DMA size is 9218 bytes.
> >
> > As an app developer, I didn't realize the max frame size didn't include
> > VLAN tags. I expected max frame size to be the size of the ethernet
> > frame on the wire, which I would expect to include space used by any
> > VLAN or MPLS tags.
> >
> > Is there anything in the docs or example apps about that? I did some
> > digging as I was debugging this and didn't notice it, but it's entirely
> > possible I just missed it.
> >
> > > I'm not sure there is a works-in-all-cases solution here.
> >
> > Andriy's suggestion seems like it points in the right direction.
> >
> > From an app developer point of view, I'd expect to have a single max
> > frame size value to track, and the APIs should take care of any
> > adjustments required internally. Maybe have rte_pktmbuf_pool_create()
> > add the additional bytes when it calls rte_mempool_create() under the
> > covers? Then it's nice and clean for the API without unexpected
> > side-effects.
>
> It will still have unintended side-effects I think, depending on the
> resolution of the NIC buffer length parameters. For drivers like ixgbe or
> e1000, the mempool create call could potentially have to add an additional
> 1 KB to each buffer just to be able to store the extra eight bytes.

The comments in the ixgbe driver say that the value programmed into SRRCTL
must be on a 1 KB boundary. Based on your previous response, it sounded
like the NIC ignores that limit for VLAN tags, hence the check for the
extra 8 bytes on the mbuf element size. Are you worried about the size
resolution on mempool elements?

Sounds like I've got to go spend some quality time in the NIC data
sheets...
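In the meantime, the driver's rounding behavior can be reproduced standalone. This is just the integer math, not driver code, with the constants mirroring ixgbe_type.h (the BSIZEPKT shift of 10 is where the 1 KB resolution comes from) and the default RTE_PKTMBUF_HEADROOM of 128:

```c
#include <stdint.h>

/* Constants mirroring the ixgbe driver (ixgbe_type.h). */
#define IXGBE_SRRCTL_BSIZEPKT_SHIFT 10          /* 1 KB resolution */
#define IXGBE_SRRCTL_BSIZEPKT_MASK  0x0000007F
#define PKTMBUF_HEADROOM            128         /* default RTE_PKTMBUF_HEADROOM */

/* Effective RX buffer size the NIC ends up with for a given mbuf data
 * room size: the driver shifts right by 10 and masks, so any value that
 * is not a multiple of 1 KB is silently rounded down. */
uint16_t effective_buf_size(uint16_t data_room_size)
{
    uint16_t buf_size = (uint16_t)(data_room_size - PKTMBUF_HEADROOM);
    uint32_t srrctl = (buf_size >> IXGBE_SRRCTL_BSIZEPKT_SHIFT) &
                      IXGBE_SRRCTL_BSIZEPKT_MASK;
    return (uint16_t)((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
                      IXGBE_SRRCTL_BSIZEPKT_SHIFT);
}
```

For example, a data room of 2048 + RTE_PKTMBUF_HEADROOM yields a 2048-byte NIC buffer, while 2047 + RTE_PKTMBUF_HEADROOM silently drops to 1024, which is the question 1 behavior.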
Maybe I should back up and just ask the higher-level question: what's the
right incantation, in both the dev_conf structure and in creating the mbuf
pool, to support jumbo frames of some particular size on the wire, with or
without VLAN tags, and without requiring scattered_rx support in the app?

Thanks,
Jay
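P.S. In case it helps the discussion, here's the sizing arithmetic this thread seems to imply, as a standalone sketch. The helper name is mine, not a DPDK API: take the wire-level max frame size, add room for two VLAN tags, round up to the NIC's 1 KB buffer granularity, and pass that plus RTE_PKTMBUF_HEADROOM as data_room_size to rte_pktmbuf_pool_create(), leaving rxmode.max_rx_pkt_len at the wire size.

```c
#include <stdint.h>

#define PKTMBUF_HEADROOM 128  /* default RTE_PKTMBUF_HEADROOM */
#define VLAN_TAG_SIZE    4    /* mirrors IXGBE_VLAN_TAG_SIZE */

/* Hypothetical helper (not a DPDK API): the data_room_size to request
 * from rte_pktmbuf_pool_create() so that a frame of max_rx_pkt_len bytes
 * plus two VLAN tags still fits in a single buffer on a NIC with 1 KB
 * buffer-size granularity, keeping scattered_rx from being forced on. */
uint32_t jumbo_data_room_size(uint32_t max_rx_pkt_len)
{
    uint32_t need = max_rx_pkt_len + 2 * VLAN_TAG_SIZE;
    uint32_t rounded = (need + 1023u) & ~1023u;  /* round up to 1 KB */
    return rounded + PKTMBUF_HEADROOM;
}
```

For a 9216-byte wire frame this asks for 10240 + RTE_PKTMBUF_HEADROOM per mbuf, i.e. the extra 1 KB per buffer just to hold 8 more bytes that Bruce points out above.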