From: "Jayakumar, Muthurajan"
To: Daniel Kan, "dev@dpdk.org"
Date: Sun, 23 Feb 2014 23:57:46 +0000
Message-ID: <5D695A7F6F10504DBD9B9187395A21797C6E574B@ORSMSX103.amr.corp.intel.com>
In-Reply-To: <71503436-5A40-4097-B1B0-8ED0B9DB6C3E@nyansa.com>
Subject: Re: [dpdk-dev] What's the performance significance of ixgbe_recv_pkts_bulk_alloc

If someone is interested in improving throughput, they will want to enable bulk allocation. If someone wants lower latency, they may want to go for the non-bulk version. At 10 Gb, it is more critical to use the bulk path to achieve the desired throughput.

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Daniel Kan
Sent: Sunday, February 23, 2014 3:43 PM
To: dev@dpdk.org
Subject: [dpdk-dev] What's the performance significance of ixgbe_recv_pkts_bulk_alloc

Hi,

While browsing through the ixgbe PMD code, I noticed that there is ixgbe_recv_pkts_bulk_alloc, which can be enabled if the following preconditions are met:

* rxq->rx_free_thresh >= RTE_PMD_IXGBE_RX_MAX_BURST
* rxq->rx_free_thresh < rxq->nb_rx_desc
* (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
* rxq->nb_rx_desc < (IXGBE_MAX_RING_DESC - RTE_PMD_IXGBE_RX_MAX_BURST)

I presume the difference from the normal (non-bulk) version has to do with buffer allocation. Can someone please explain the inner workings of bulk_alloc and why one may or may not want to enable bulk_alloc mode?

I only see bulk_alloc available for the ixgbe driver and not igb or e1000. Why is that?

Thanks,
Dan