From: Saurabh Mishra
To: Lawrence MacIntyre
Cc: dev@dpdk.org
Date: Tue, 26 Jan 2016 09:14:57 -0800
Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame

Hi Lawrence --

> It sounds like you benchmarked Apache using Jumbo Packets, but not the
> DPDK app using large mbufs. Those are two entirely different issues.

I meant that I ran an Apache benchmark between two guest VMs through our
data-processing VM, which uses DPDK. I saw 3x better performance with a
10k mbuf size than with a 2k mbuf size (with the MTU also set
appropriately).

Unfortunately, we can't handle a chained mbuf unless we copy it into a
large buffer. Even if we do start handling chained mbufs, we can't
inspect payload that is scattered across segments; we have to coalesce
the segments into one buffer anyway to make sense of the packet's
contents, because we inspect the full packet, from the first byte to
the last.
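To illustrate, this is roughly what that coalescing step looks like -- a
minimal sketch rather than our actual code (the function name and the
caller-supplied scratch buffer are made up for the example):

#include <rte_mbuf.h>
#include <rte_memcpy.h>

/*
 * Copy a possibly chained mbuf into a flat buffer so the inspection
 * engine sees the whole payload contiguously.
 * Returns the number of bytes copied, or -1 if the packet doesn't fit.
 */
static int
flatten_mbuf(const struct rte_mbuf *m, uint8_t *buf, size_t buf_len)
{
	size_t off = 0;

	if (rte_pktmbuf_pkt_len(m) > buf_len)
		return -1;

	for (; m != NULL; m = m->next) {
		rte_memcpy(buf + off,
			   rte_pktmbuf_mtod(m, const void *),
			   rte_pktmbuf_data_len(m));
		off += rte_pktmbuf_data_len(m);
	}
	return (int)off;
}

With a 10k single-segment mbuf, this copy is avoided entirely for
frames up to the jumbo MTU, which is the point of the larger mbuf size
for us.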
Thanks,
/Saurabh

On Tue, Jan 26, 2016 at 8:50 AM, Lawrence MacIntyre wrote:

> Saurabh:
>
> It sounds like you benchmarked Apache using Jumbo Packets, but not the
> DPDK app using large mbufs. Those are two entirely different issues.
>
> You should be able to write your packet inspection routines to work with
> the mbuf chains, rather than copying them into a larger buffer (although
> if there are multiple passes through the data, it could be a bit
> complicated). Copying the data into a larger buffer will definitely
> cause the application to be slower.
>
> Lawrence
>
> This one time (01/26/2016 09:40 AM), at band camp, Saurabh Mishra wrote:
>
> Hi,
>
> Since we do full content inspection, we will end up coalescing mbuf
> chains into one before inspecting the packet, which would require
> allocating another buffer of a larger size.
>
> I am inclined towards a larger mbuf size for this reason.
>
> I have benchmarked a bit using Apache benchmark, and we see a 3x
> performance improvement over a 1500 MTU. Memory is not an issue.
>
> My only concern is: would all the DPDK drivers work with a larger mbuf
> size?
>
> Thanks,
> Saurabh
>
> On Jan 26, 2016 6:23 AM, "Lawrence MacIntyre" wrote:
>
>> Saurabh:
>>
>> Raising the mbuf size will make the packet handling for large packets
>> slightly more efficient, but it will use much more memory unless the
>> great majority of the packets you are handling are of the jumbo size.
>> Using more memory has its own costs. In order to evaluate this design
>> choice, it is necessary to understand the behavior of the memory
>> subsystem, which is VERY complicated.
>>
>> Before you go down this path, at least benchmark your application using
>> the regular-sized mbufs and the large ones and see what the effect is.
>>
>> This one time (01/26/2016 09:01 AM), at band camp, Polehn, Mike A wrote:
>>
>>> Jumbo frames are generally handled by linked lists (but called
>>> something else) of mbufs.
>>> Enabling jumbo frames for the device driver should enable the right
>>> portion of the driver which handles the linked lists.
>>>
>>> Don't make the mbufs huge.
>>>
>>> Mike
>>>
>>> -----Original Message-----
>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Masaru OKI
>>> Sent: Monday, January 25, 2016 2:41 PM
>>> To: Saurabh Mishra; users@dpdk.org; dev@dpdk.org
>>> Subject: Re: [dpdk-dev] rte_mbuf size for jumbo frame
>>>
>>> Hi,
>>>
>>> 1. Take care of the element size of the mempool for mbufs.
>>> 2. Call rte_eth_dev_set_mtu() for each interface.
>>>    Note that some PMDs do not support changing the MTU.
>>>
>>> On 2016/01/26 6:02, Saurabh Mishra wrote:
>>>
>>>> Hi,
>>>>
>>>> We wanted to use a 10400-byte size for each rte_mbuf to enable jumbo
>>>> frames. Do you guys see any problem with that? Would all the drivers,
>>>> like ixgbe, i40e, vmxnet3, virtio and bnx2x, work with a larger
>>>> rte_mbuf size?
>>>>
>>>> We would want to avoid dealing with chained mbufs.
>>>>
>>>> /Saurabh
>>
>> --
>> Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
>> 865.574.7401  Cyber Space and Information Intelligence Research Group
>
> --
> Lawrence MacIntyre  macintyrelp@ornl.gov  Oak Ridge National Laboratory
> 865.574.7401  Cyber Space and Information Intelligence Research Group
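P.S. For anyone following along, this is the kind of init-time setup we
are testing, following Masaru's two points above (pool element size and
per-port MTU). It is only a rough sketch against the current 2.2-era
API, with made-up pool sizes and minimal error handling:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define JUMBO_LEN 10400u	/* the frame size in question */

static struct rte_mempool *
jumbo_port_setup(uint8_t port_id)
{
	struct rte_eth_conf conf;
	struct rte_mempool *mp;

	/* Each pool element holds a whole jumbo frame in a single
	 * segment, plus the standard headroom. */
	mp = rte_pktmbuf_pool_create("jumbo_pool", 8192, 256, 0,
				     RTE_PKTMBUF_HEADROOM + JUMBO_LEN,
				     rte_socket_id());
	if (mp == NULL)
		return NULL;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.jumbo_frame = 1;	/* driver-side jumbo support */
	conf.rxmode.max_rx_pkt_len = JUMBO_LEN;

	if (rte_eth_dev_configure(port_id, 1, 1, &conf) < 0)
		return NULL;

	/* Some PMDs return -ENOTSUP here, as Masaru notes. */
	rte_eth_dev_set_mtu(port_id, JUMBO_LEN - 18 /* L2 overhead */);

	return mp;
}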