From: Yongseok Koh
To: Martin Weiser
CC: Adrien Mazarguil, Nélio Laranjeiro, dev@dpdk.org
Date: Thu, 5 Oct 2017 21:46:00 +0000
Subject: Re: [dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak

Hi Martin,

Thanks for your thorough and valuable report. We could reproduce it.
I found a bug and fixed it. Please refer to the patch [1] I sent to the
mailing list. It might not apply cleanly to v17.08 as I rebased it on top of
Nelio's flow cleanup patch, but as it is a simple patch you can easily apply
it manually.

Thanks,
Yongseok

[1] http://dpdk.org/dev/patchwork/patch/29781

> On Sep 26, 2017, at 2:23 AM, Martin Weiser wrote:
> 
> Hi,
> 
> we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK
> 17.08 as well as dpdk-next-net and are experiencing mbuf leaks as well
> as crashes (and in some instances even kernel panics in a mlx5 module)
> under certain load conditions.
> 
> We initially saw these issues only in our own DPDK-based application
> and it took some effort to reproduce this in one of the DPDK example
> applications. However, with the attached patch to the load-balancer
> example we can reproduce the issues reliably.
> 
> The patch may look weird at first but I will explain why I made these
> changes:
> 
> * the sleep introduced in the worker threads simulates heavy
> processing which causes the software rx rings to fill up under load.
> If the rings are large enough (I increased the ring size with the
> load-balancer command line option as you can see in the example call
> further down) the mbuf pool may run empty, and I believe this leads to
> a malfunction in the mlx5 driver. As soon as this happens the NIC will
> stop forwarding traffic, probably because the driver cannot allocate
> mbufs for the packets received by the NIC. Unfortunately, when this
> happens most of the mbufs will never return to the mbuf pool, so even
> when the traffic stops the pool will remain almost empty and the
> application will not forward traffic even at a very low rate.
> 
> * the use of the reference count in the mbuf in addition to the
> situation described above is what makes the mlx5 DPDK driver crash
> almost immediately under load.
> In our application we rely on this feature to be able to forward the
> packet quickly and still send the packet to a worker thread for
> analysis, and finally free the packet when analysis is done. Here I
> simulated this by increasing the mbuf reference count immediately
> after receiving the mbuf from the driver and then calling
> rte_pktmbuf_free in the worker thread, which should only decrement the
> reference count again and not actually free the mbuf.
> 
> We executed the patched load-balancer application with the following
> command line:
> 
> ./build/load_balancer -l 3-7 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx
> "(0,3),(1,3)" --w "4" --lpm "16.0.0.0/8=>0; 48.0.0.0/8=>1;" --pos-lb 29
> --rsz "1024, 32768, 1024, 1024"
> 
> Then we generated traffic using the t-rex traffic generator and the
> sfr test case. On our machine the issues start to happen when the
> traffic exceeds ~6 Gbps, but this may vary depending on how powerful
> the test machine is (by the way, we were able to reproduce this on
> different types of hardware).
> 
> A typical stacktrace looks like this:
> 
> Thread 1 "load_balancer" received signal SIGSEGV, Segmentation fault.
> 0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>     at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
> 716       __builtin_ia32_storedqu ((char *)__P, (__v16qi)__B);
> (gdb) bt
> #0  0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
>     at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
> #1  rxq_cq_decompress_v (elts=0x7fff3732bef0, cq=0x7ffff7f99380,
>     rxq=0x7fff3732a980)
>     at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:679
> #2  rxq_burst_v (pkts_n=<optimized out>, pkts=0xa7c7b0, rxq=0x7fff3732a980)
>     at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1242
> #3  mlx5_rx_burst_vec (dpdk_rxq=0x7fff3732a980, pkts=<optimized out>,
>     pkts_n=<optimized out>)
>     at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1277
> #4  0x000000000043c11d in rte_eth_rx_burst (nb_pkts=3599,
>     rx_pkts=0xa7c7b0, queue_id=0, port_id=0 '\000')
>     at /root/dpdk-next-net//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2781
> #5  app_lcore_io_rx (lp=lp@entry=0xa7c700, n_workers=n_workers@entry=1,
>     bsz_rd=bsz_rd@entry=144, bsz_wr=bsz_wr@entry=144,
>     pos_lb=pos_lb@entry=29 '\035')
>     at /root/dpdk-next-net/examples/load_balancer/runtime.c:198
> #6  0x0000000000447dc0 in app_lcore_main_loop_io ()
>     at /root/dpdk-next-net/examples/load_balancer/runtime.c:485
> #7  app_lcore_main_loop (arg=<optimized out>)
>     at /root/dpdk-next-net/examples/load_balancer/runtime.c:669
> #8  0x0000000000495e8b in rte_eal_mp_remote_launch ()
> #9  0x0000000000441e0d in main (argc=<optimized out>, argv=<optimized out>)
>     at /root/dpdk-next-net/examples/load_balancer/main.c:99
> 
> The crash does not always happen at the exact same spot, but in our
> tests always in the same function. In a few instances, instead of an
> application crash the system froze completely with what appeared to be
> a kernel panic. The last output looked like a crash in the interrupt
> handler of a mlx5 module, but unfortunately I cannot provide the exact
> output right now.
> 
> All tests were performed under Ubuntu 16.04 server running a
> 4.4.0-96-generic kernel, and the latest Mellanox OFED
> MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64 was used.
> 
> Any help with this issue is greatly appreciated.
> 
> Best regards,
> Martin