From: "Neeraj Tandon (netandon)"
To: "Neeraj Tandon (netandon)", "users@dpdk.org"
Date: Thu, 18 May 2017 20:21:27 +0000
Subject: Re: [dpdk-users] Mechanism to increase MBUF allocation

Hi,

Just for information, and to help anyone who runs into a similar issue:
the root cause was calling mbuf free from a non-EAL thread. The
application needed to delay freeing buffers, but doing the free in a
separate thread launched with pthread_create() corrupted the mempool.
Moving the mbuf free to an EAL thread solved the problem.

Thanks,
Neeraj

On 5/16/17, 8:27 PM, "users on behalf of Neeraj Tandon (netandon)" wrote:

>Hi,
>
>I was able to increase the number of mbufs and make it work after
>increasing the socket memory. However, I am now facing a segfault in the
>driver code.
>Intermittently, after receiving a few million packets at 1 Gig line
>rate, the driver segfaults in:
>(eth_igb_recv_pkts+0xd3)[0x5057a3]
>
>I am using the net_e1000_igb driver with two 1 Gig ports.
>
>Thanks in advance for any help or pointers on debugging the driver.
>
>EAL: Detected 24 lcore(s)
>EAL: Probing VFIO support...
>EAL: VFIO support initialized
>EAL: PCI device 0000:01:00.0 on NUMA socket 0
>EAL: probe driver: 8086:1521 net_e1000_igb
>EAL: PCI device 0000:01:00.1 on NUMA socket 0
>EAL: probe driver: 8086:1521 net_e1000_igb
>
>Regards,
>Neeraj
>
>
>On 5/15/17, 12:14 AM, "users on behalf of Neeraj Tandon (netandon)"
>wrote:
>
>>Hi,
>>
>>I have recently started using DPDK and have based my application on the
>>l2fwd example. In my application I hold on to buffers for a period of
>>time and free the mbufs in another thread. The default number of mbufs
>>is 8192. I have two questions about this:
>>
>> 1. How do I increase the number of mbufs? Increasing NB_MBUF in the
>>pool-create call has no effect, i.e. I lose packets when more than 8192
>>packets are sent in a burst. I see the following used to create the
>>mbuf pool:
>>
>>/* create the mbuf pool */
>>l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
>>    MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>>    rte_socket_id());
>>
>>If I want to increase the number of mbufs to, say, 65536, what should
>>I do?
>>
>> 2. I receive packets in an RX thread running on core 2 and free them
>>in a thread launched with pthread_create() running on core 0. Are there
>>any implications of this kind of mechanism?
>>
>>Thanks for the support and for keeping the forum active.
>>
>>Regards,
>>Neeraj
>>
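
For reference, a minimal sketch of creating a larger mbuf pool, following
the l2fwd-style rte_pktmbuf_pool_create() call quoted above. The 65535
count, the cache size and the function name are illustrative, not taken
from the original application; and, as noted earlier in the thread, the
bigger pool only fits if enough socket memory (hugepages) is reserved on
that NUMA socket.

#include <stdlib.h>
#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* A power-of-two-minus-one count is the documented optimum for mempool
 * sizing; 65535 here is just an example value. */
#define NB_MBUF            65535
#define MEMPOOL_CACHE_SIZE 256

static struct rte_mempool *
create_pktmbuf_pool(void)
{
        struct rte_mempool *mp;

        /* Default data room: 2 KB of packet data plus headroom per mbuf. */
        mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
                        MEMPOOL_CACHE_SIZE, 0,
                        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (mp == NULL)
                rte_exit(EXIT_FAILURE, "Cannot create mbuf pool: %s\n",
                                rte_strerror(rte_errno));
        return mp;
}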
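
On the root cause described at the top of the thread: the mempool was
corrupted because the mbufs were freed from a thread created directly with
pthread_create(), which the EAL does not know about. One common way to keep
the delayed free while doing the actual rte_pktmbuf_free() on an EAL lcore
is to pass the mbufs through an rte_ring. The sketch below assumes a single
producer thread and one dedicated EAL worker lcore; names such as
free_ring, defer_free() and FREE_RING_SIZE are made up for illustration.

#include <stdlib.h>
#include <rte_common.h>
#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define FREE_RING_SIZE 4096     /* ring size must be a power of two */

static struct rte_ring *free_ring;
static volatile int running = 1;

/* Called from the thread that wants to release an mbuf later: instead of
 * freeing it directly, hand it over to the EAL worker lcore. */
static inline void
defer_free(struct rte_mbuf *m)
{
        /* Busy-retry if the ring is momentarily full; real code would
         * bound this or size the ring for the worst case instead. */
        while (rte_ring_enqueue(free_ring, m) != 0)
                ;
}

/* Runs on an EAL worker lcore and performs the actual frees, so the
 * mempool is only touched from threads the EAL manages. */
static int
free_lcore_main(void *arg __rte_unused)
{
        void *m;

        while (running) {
                if (rte_ring_dequeue(free_ring, &m) == 0)
                        rte_pktmbuf_free(m);
        }
        return 0;
}

static void
setup_deferred_free(unsigned int worker_lcore)
{
        /* Single-producer/single-consumer flags match one enqueuing
         * thread and one dequeuing lcore. */
        free_ring = rte_ring_create("free_ring", FREE_RING_SIZE,
                        rte_socket_id(),
                        RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (free_ring == NULL)
                rte_exit(EXIT_FAILURE, "Cannot create free ring\n");

        rte_eal_remote_launch(free_lcore_main, NULL, worker_lcore);
}

Note that rte_eal_remote_launch() assumes worker_lcore is enabled in the
EAL core mask and currently idle; the RX path on core 2 would then call
defer_free() instead of rte_pktmbuf_free().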