From: "Neeraj Tandon (netandon)"
To: "users@dpdk.org"
Date: Mon, 15 May 2017 07:14:03 +0000
Subject: [dpdk-users] Mechanism to increase MBUF allocation

Hi,

I have recently started using DPDK and have based my application on the l2fwd sample application. In my application I hold buffers for a period of time and free the mbufs in another thread. The default number of mbufs is 8192. I have two questions about this:

1. How do I increase the number of mbufs? Increasing NB_MBUF is not having any effect, i.e. I lose packets when more than 8192 packets are sent in a burst. I see the following used to create the mbuf pool:

    /* create the mbuf pool */
    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());

   If I want to increase the number of mbufs to, say, 65536, what should I do? (A sketch of what I changed is in the PS below.)

2. I am receiving packets in an RX thread running on core 2 and freeing them in a thread launched with pthread that runs on core 0. Are there any implications of this kind of mechanism?

Thanks for the support and for keeping the forum active.

Regards,
Neeraj
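
PS: For reference, a minimal sketch of the change I tried. The values NB_MBUF = 65536 and MEMPOOL_CACHE_SIZE = 256 and the helper name create_pktmbuf_pool() are my own, not from the l2fwd sample; the call itself is the same rte_pktmbuf_pool_create() shown above, with only the element count raised:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Assumed values for illustration only: 65536 mbufs (the pool size I
     * am after) and a per-lcore mempool cache of 256 elements. */
    #define NB_MBUF            65536
    #define MEMPOOL_CACHE_SIZE 256

    static struct rte_mempool *l2fwd_pktmbuf_pool;

    static int
    create_pktmbuf_pool(void)
    {
        /* Same call as in l2fwd, only the element count changes. My
         * understanding is the pool must be large enough to cover the
         * RX/TX descriptor rings, the per-lcore caches, and every mbuf
         * the application holds on to before freeing it. */
        l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
                MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                rte_socket_id());
        if (l2fwd_pktmbuf_pool == NULL)
            return -1; /* e.g. not enough hugepage memory reserved */
        return 0;
    }

Is the hugepage memory reserved at EAL init the constraint I am missing here, or is something else needed when growing the pool?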