From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <laswell@infiniteio.com>
Received: from mail-pd0-f179.google.com (mail-pd0-f179.google.com
 [209.85.192.179]) by dpdk.org (Postfix) with ESMTP id 1DA227EC4
 for <dev@dpdk.org>; Wed,  5 Nov 2014 15:39:27 +0100 (CET)
Received: by mail-pd0-f179.google.com with SMTP id g10so865439pdj.24
 for <dev@dpdk.org>; Wed, 05 Nov 2014 06:48:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20130820;
 h=x-gm-message-state:mime-version:in-reply-to:references:date
 :message-id:subject:from:to:cc:content-type;
 bh=PN1WidNQJWVEV9kFqqDBxYrXALiodI4QypOvkaOyX2g=;
 b=SUYIgn8yISw1uAXA705p7ah5Kuzu31OvqldmBtE2F2rakMYxCwG+MZMHqOjLZdys4p
 oct+ZSUz7v0JMHK8FLJ7Dlrgft+SZUAxuZvlSileAewqMsUbmTpZeMETT16T0oLAzPu/
 /JQfykPU0TZ2RERRzq20oqPmg0F/iiSRmCu1MlJxyaA3mVgmGCUtlkGbES82MXRi+yBG
 Mb5DtYjfm28ezpRnAl57EedK4y7mlcVYv8253q7QXQ5A9zvU84PKOtrVqCf290lFQl9O
 sqfWFx5pNPas9fwdzUhFoAB3RxYsp7sM6pVkOaAVoDBfGvQkNNgXsk+IC15ypp3NPAR6
 2mOA==
X-Gm-Message-State: ALoCoQlIrvKiSkrkXIZR2l12M4hmyPMAgg59JtrWStZkEQ48In2Jk+Ee5bZb14xwhGPaeUAJrrAj
MIME-Version: 1.0
X-Received: by 10.70.124.196 with SMTP id mk4mr15309407pdb.14.1415198930438;
 Wed, 05 Nov 2014 06:48:50 -0800 (PST)
Received: by 10.70.41.76 with HTTP; Wed, 5 Nov 2014 06:48:50 -0800 (PST)
In-Reply-To: <CAKfHP0Vq5ExWpEBRtehJ7_SUBhbbTP8sfEDHyc164yAjMijZOw@mail.gmail.com>
References: <CAKfHP0Up34C7r6SgrdTt+p-yV0bRkoP=hg8MR1P-C6iEg+cX2Q@mail.gmail.com>
 <20141030110956.GA8456@bricha3-MOBL3>
 <CAKfHP0Vq5ExWpEBRtehJ7_SUBhbbTP8sfEDHyc164yAjMijZOw@mail.gmail.com>
Date: Wed, 5 Nov 2014 08:48:50 -0600
Message-ID: <CA+GnqApDx88-vXQeid20wJgV2iER6dTD2XWARzrAaCY09WTCgg@mail.gmail.com>
From: Matt Laswell <laswell@infiniteio.com>
To: Alex Markuze <alex@weka.io>
Content-Type: text/plain; charset=UTF-8
X-Content-Filtered-By: Mailman/MimeDel 2.1.15
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] segmented recv ixgbevf
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Wed, 05 Nov 2014 14:39:28 -0000

Hey Folks,

I ran into the same issue that Alex is describing here, and I wanted to
expand just a little bit on his comments, as the documentation isn't very
clear.

Per the documentation, the two arguments to rte_pktmbuf_pool_init() are a
pointer to the memory pool that contains the newly-allocated mbufs and an
opaque pointer.  The docs are pretty vague about what the opaque pointer
should point to or what its contents mean; all of the examples I looked at
just pass a NULL pointer. The docs for this function describe the opaque
pointer this way:

"A pointer that can be used by the user to retrieve useful information for
mbuf initialization. This pointer comes from the init_arg parameter of
rte_mempool_create()
<http://www.dpdk.org/doc/api/rte__mempool_8h.html#a7dc1d01a45144e3203c36d1800cb8f17>
."

This is a little bit misleading.  Under the covers, rte_pktmbuf_pool_init()
doesn't treat the opaque pointer as a pointer at all.  Rather, it just
converts it to a uint16_t which contains the desired mbuf size.  If it
receives 0 (in other words, if you passed in a NULL pointer), it will use
2048 bytes + RTE_PKTMBUF_HEADROOM.  Hence, incoming jumbo frames will be
segmented into 2K chunks.
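To make that concrete, here's a minimal sketch of the conversion (my own
simplification for illustration, not the actual DPDK source; it assumes the
stock build's RTE_PKTMBUF_HEADROOM of 128):

```c
#include <stdint.h>

/* Default headroom from DPDK's build config (128 in the stock config). */
#define RTE_PKTMBUF_HEADROOM 128

/* Minimal sketch (not the actual DPDK source) of what
 * rte_pktmbuf_pool_init() does with its "opaque" argument: the pointer
 * value itself is narrowed to a uint16_t buffer length, and 0 (i.e. a
 * NULL pointer) selects the 2048 + RTE_PKTMBUF_HEADROOM default. */
static uint16_t mbuf_buf_len(const void *opaque_arg)
{
    uint16_t buf_len = (uint16_t)(uintptr_t)opaque_arg;
    if (buf_len == 0)
        buf_len = 2048 + RTE_PKTMBUF_HEADROOM;
    return buf_len;
}
```

So a NULL init_arg yields 2176 (2048 + 128), while smuggling a value through
the pointer, e.g. (void *)(uintptr_t)9216 passed as the init_arg of
rte_mempool_create(), yields 9216-byte buffers.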

Any chance we could get an improvement to the documentation about this
parameter?  It seems as though the opaque pointer isn't a pointer and
probably shouldn't be opaque.

Hope this helps the next person who comes across this behavior.

--
Matt Laswell
infinite io, inc.

On Thu, Oct 30, 2014 at 7:48 AM, Alex Markuze <alex@weka.io> wrote:

> For posterity.
>
> 1. When using an MTU larger than 2K, it's advised to provide the value
> to rte_pktmbuf_pool_init().
> 2. ixgbevf rounds down ("MBUF size" - RTE_PKTMBUF_HEADROOM) to the
> nearest 1K multiple when deciding on the receive capabilities [buffer
> size] of the buffers in the pool.
> The SRRCTL register seems to be involved here for some reason.
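
For what it's worth, the round-down in Alex's point 2 can be sketched like
this (an illustration of the described behavior only, not the driver source;
I'm assuming the 1K granularity comes from the SRRCTL buffer-size field being
expressed in 1 KB units, and the stock 128-byte headroom):

```c
#include <stdint.h>

/* Stock DPDK build headroom; adjust if your config differs. */
#define RTE_PKTMBUF_HEADROOM 128

/* Sketch of the receive buffer sizing described above: the usable data
 * room (mbuf size minus headroom) is rounded down to a 1K multiple,
 * which is the granularity the hardware buffer-size field works in. */
static uint32_t ixgbevf_rx_buf_size(uint32_t mbuf_size)
{
    return ((mbuf_size - RTE_PKTMBUF_HEADROOM) / 1024) * 1024;
}
```

With the default 2176-byte mbufs this gives 2048-byte receive buffers; a
9216-byte mbuf gives 9088 rounded down to 8192, which is why it pays to size
jumbo pools as a 1K multiple plus RTE_PKTMBUF_HEADROOM.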