From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from EUR03-DB5-obe.outbound.protection.outlook.com (mail-eopbgr40057.outbound.protection.outlook.com [40.107.4.57]) by dpdk.org (Postfix) with ESMTP id CA4F92B88 for ; Sat, 7 Oct 2017 00:31:02 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Mellanox.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version; bh=2Zcu+bCBBtko2IEUTgQROwu4OFjj+9PR2Mz3bZYPfFc=; b=cmLRQ3jK44SXDmpscnknrXlS62PlppXQ4VybuZFkY9g9T9BHNWOGTG4Ym5oTSjiBUBzAcvf/fLWZg8GsWn89/mLWFnBg6jsFYra+3DUn6S9TBj9Q599ls9eo8+2AA8sppZI00uet0j/mxNFO48YnqHc21vlD0ifawmy3+rJhtK4= Received: from VI1PR0501MB2045.eurprd05.prod.outlook.com (10.167.195.147) by VI1PR0501MB2045.eurprd05.prod.outlook.com (10.167.195.147) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id 15.20.77.7; Fri, 6 Oct 2017 22:31:00 +0000 Received: from VI1PR0501MB2045.eurprd05.prod.outlook.com ([fe80::84ed:9505:3e7f:9722]) by VI1PR0501MB2045.eurprd05.prod.outlook.com ([fe80::84ed:9505:3e7f:9722%13]) with mapi id 15.20.0077.018; Fri, 6 Oct 2017 22:31:00 +0000 From: Yongseok Koh To: Martin Weiser CC: Adrien Mazarguil , =?iso-8859-1?Q?N=E9lio_Laranjeiro?= , "dev@dpdk.org" , Ferruh Yigit Thread-Topic: [dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak Thread-Index: AQHTNqktVwg/YRPpzEiZDb3J3tSb96LV2VeAgAES/ACAAIvqAA== Date: Fri, 6 Oct 2017 22:31:00 +0000 Message-ID: <374F8C13-CFB0-42FD-8993-BF7F0401F891@mellanox.com> References: <5d1f07c4-5933-806d-4d11-8fdfabc701d7@allegro-packets.com> In-Reply-To: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: X-MS-TNEF-Correlator: authentication-results: spf=none (sender IP is ) smtp.mailfrom=yskoh@mellanox.com; x-originating-ip: [209.116.155.178] x-ms-publictraffictype: Email x-microsoft-exchange-diagnostics: 1; VI1PR0501MB2045; 6:qlfx5cpA6nj9j9gcAPil5UZrjAfuwhZXYm1N5eDaor5+8i1pFpJdXWWQ/c1LpnEy+TeUuw5C15dgMy7sPS5KiwjJjF2Y/wUHULlumLH35UbJkMaDSTgFY1hS8OboePFZxCx4NqrsivsYYP6rvpvZCZuPlRRTphiqTf+REB4mRW7orlugVDPyT5PtwqstujPtFR9gE5iX+zYwMrxYHtvCApMH6C8zPG6d7l7IUW+HhpmEQFO0su1+5+KF3/SLHt2goqouydo9oZdgjU9k363X3DLHzz5jE+ZFp/hXVTri/V+OjbLPc+L+Uqb7j+AJ2KA6VPM7DMF+K1U5FfQaJQUlzQ==; 5:xB7xzcMlD213DXAyXwbshAtCH10j/PWsTV1lh3qu19QJg4f5HuCBoTyhYM0BzRwnXAOaNEyeu6+vZsCDS69Mx+JEwJuK5mie9v49oIraIVH6XJOpbm2IW02Wu5t6b7qzaKz0NP9VWJ6+YEB9I9eKjg==; 24:qUwBpL2yLNRrl/yQ4pAyw+c+qgIGo9AF3sUflewz70RBbD3rjV2WbQS8vK0FsAKgUVZI3c86VFJ5b2kRZQNEnZAUKvkOhluYGI6/Pspv9vs=; 7:z7v3GwC2pVXQ8kHXjyfwQ2VixpS+S9yz4BmvZWmXBJJzMQPrrtWbhMO3kU2VUogwgUutpMeAiXFCrjVKjhHaJJ0Xpe1BpehxAtbY4GJKpcOnw2NGEXH8w7Q3HHkrqun6O/YMIGa0Iy5Nh1iEUDRFpM7y5fG7Z3yCAmdrHKpoNU8Cd15lhh6SukArTGYw01XNZNPb2N4cHbprmYFPu+e+sckL/0MWqBpMS5BuuMR0bZo= x-ms-exchange-antispam-srfa-diagnostics: SSOS; x-ms-office365-filtering-correlation-id: 146b62c2-44e3-4945-b083-08d50d09ebb1 x-ms-office365-filtering-ht: Tenant x-microsoft-antispam: UriScan:; BCL:0; PCL:0; RULEID:(22001)(2017030254152)(48565401081)(2017052603199)(201703131423075)(201703031133081)(201702281549075); SRVR:VI1PR0501MB2045; x-ms-traffictypediagnostic: VI1PR0501MB2045: x-ld-processed: a652971c-7d2e-4d9b-a6a4-d149256f461b,ExtAddr x-exchange-antispam-report-test: UriScan:(158342451672863)(189930954265078)(45079756050767); x-microsoft-antispam-prvs: x-exchange-antispam-report-cfa-test: BCL:0; PCL:0; RULEID:(100000700101)(100105000095)(100000701101)(100105300095)(100000702101)(100105100095)(6040450)(2401047)(8121501046)(5005006)(3002001)(100000703101)(100105400095)(10201501046)(93006095)(93001095)(6055026)(6041248)(201703131423075)(201702281528075)(201703061421075)(201703061406153)(20161123560025)(20161123558100)(20161123564025)(20161123562025)(20161123555025)(6072148)(201708071742011)(100000704101)(100105200095)(100000705101)(100105500095); SRVR:VI1PR0501MB2045; BCL:0; PCL:0; RULEID:(100000800101)(100110000095)(100000801101)(100110300095)(100000802101)(100110100095)(100000803101)(100110400095)(100000804101)(100110200095)(100000805101)(100110500095); SRVR:VI1PR0501MB2045; x-forefront-prvs: 0452022BE1 x-forefront-antispam-report: SFV:NSPM; SFS:(10009020)(6009001)(376002)(346002)(24454002)(189002)(199003)(377454003)(6486002)(966005)(101416001)(3660700001)(68736007)(66066001)(25786009)(2906002)(76176999)(82746002)(54356999)(50986999)(5660300001)(3280700002)(2900100001)(33656002)(2950100002)(36756003)(6916009)(106356001)(105586002)(14454004)(6116002)(3846002)(6512007)(102836003)(6306002)(575784001)(86362001)(316002)(7736002)(1720100001)(53376002)(305945005)(478600001)(54906003)(83716003)(6246003)(53936002)(45080400002)(8676002)(189998001)(8936002)(81156014)(81166006)(6436002)(229853002)(53546010)(99286003)(4326008)(97736004)(5250100002)(5890100001)(6506006); DIR:OUT; SFP:1101; SCL:1; SRVR:VI1PR0501MB2045; H:VI1PR0501MB2045.eurprd05.prod.outlook.com; FPR:; SPF:None; PTR:InfoNoRecords; MX:1; A:1; LANG:en; received-spf: None (protection.outlook.com: mellanox.com does not designate permitted sender hosts) spamdiagnosticoutput: 1:99 spamdiagnosticmetadata: NSPM Content-Type: text/plain; charset="iso-8859-1" Content-ID: <000CE835BBF831478A85536B4ED444C5@eurprd05.prod.outlook.com> Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-OriginatorOrg: Mellanox.com X-MS-Exchange-CrossTenant-originalarrivaltime: 06 Oct 2017 22:31:00.1388 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: a652971c-7d2e-4d9b-a6a4-d149256f461b X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0501MB2045 Subject: Re: [dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 06 Oct 2017 22:31:03 -0000 Hi, Martin Even though I had done quite serious tests before sending out the patch, I figured out deadlock could happen if the Rx queue size is smaller. It is = 128 by default in testpmd while I usually use 256. I've fixed the bug and submitted a new patch [1], which actually reverts th= e previous patch. So, you can apply the attached with disregarding the old on= e. And I have also done extensive tests for this new patch but please let me k= now your test results. [1] "net/mlx5: fix deadlock due to buffered slots in Rx SW ring" at http://dpdk.org/dev/patchwork/patch/29847 Thanks, Yongseok diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c b/drivers/net/mlx5/mlx5_r= xtx_vec_sse.c index aff3359..9d37954 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c +++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c @@ -549,7 +549,7 @@ rxq_replenish_bulk_mbuf(struct rxq *rxq, uint16_t n) { const uint16_t q_n =3D 1 << rxq->elts_n; const uint16_t q_mask =3D q_n - 1; - const uint16_t elts_idx =3D rxq->rq_ci & q_mask; + uint16_t elts_idx =3D rxq->rq_ci & q_mask; struct rte_mbuf **elts =3D &(*rxq->elts)[elts_idx]; volatile struct mlx5_wqe_data_seg *wq =3D &(*rxq->wqes)[elts_idx]; unsigned int i; @@ -567,6 +567,11 @@ rxq_replenish_bulk_mbuf(struct rxq *rxq, uint16_t n) wq[i].addr =3D rte_cpu_to_be_64((uintptr_t)elts[i]->buf_add= r + RTE_PKTMBUF_HEADROOM); rxq->rq_ci +=3D n; + /* Prevent overflowing into consumed mbufs. */ + elts_idx =3D rxq->rq_ci & q_mask; + for (i =3D 0; i < MLX5_VPMD_DESCS_PER_LOOP; i +=3D 2) + _mm_storeu_si128((__m128i *)&(*rxq->elts)[elts_idx + i], + _mm_set1_epi64x((uintptr_t)&rxq->fake_mbuf= )); rte_wmb(); *rxq->rq_db =3D rte_cpu_to_be_32(rxq->rq_ci); } > On Oct 6, 2017, at 7:10 AM, Martin Weiser wrote: >=20 > Hi Yongseok, >=20 > unfortunately in a quick test using testpmd and ~20Gb/s of traffic with > your patch traffic forwarding always stops completely after a few seconds= . >=20 > I wanted to test this with the current master of dpdk-next-net but after > "net/mlx5: support upstream rdma-core" it will not compile against > MLNX_OFED_LINUX-4.1-1.0.2.0. > So i used the last commit before that (v17.08-306-gf214841) and applied > your patch leading to the result described above. > Apart from your patch no other modifications were made and without the > patch testpmd forwards the traffic without a problem (in this > configuration mbufs should never run out so this test was never affected > by the original issue). >=20 > For this test I simply used testpmd with the following command line: > "testpmd -c 0xfe -- -i" and issued the "start" command. As traffic > generator I used t-rex with the sfr traffic profile. >=20 > Best regards, > Martin >=20 >=20 >=20 > On 05.10.17 23:46, Yongseok Koh wrote: >> Hi, Martin >>=20 >> Thanks for your thorough and valuable reporting. We could reproduce it. = I found >> a bug and fixed it. Please refer to the patch [1] I sent to the mailing = list. >> This might not be automatically applicable to v17.08 as I rebased it on = top of >> Nelio's flow cleanup patch. But as this is a simple patch, you can easil= y apply >> it manually. >>=20 >> Thanks, >> Yongseok >>=20 >> [1] https://emea01.safelinks.protection.outlook.com/?url=3Dhttp%3A%2F%2F= dpdk.org%2Fdev%2Fpatchwork%2Fpatch%2F29781&data=3D02%7C01%7Cyskoh%40mellano= x.com%7C61eea153c6ca4966b26c08d50cc3f763%7Ca652971c7d2e4d9ba6a4d149256f461b= %7C0%7C0%7C636428958171139449&sdata=3Dd%2BEj79F%2BRZ03rkREti%2Fhaw9pYl8kF5b= G7CkhK1kGQSs%3D&reserved=3D0 >>=20 >>> On Sep 26, 2017, at 2:23 AM, Martin Weiser wrote: >>>=20 >>> Hi, >>>=20 >>> we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK >>> 17.08 as well as dpdk-net-next and are >>> experiencing mbuf leaks as well as crashes (and in some instances even >>> kernel panics in a mlx5 module) under >>> certain load conditions. >>>=20 >>> We initially saw these issues only in our own DPDK-based application an= d >>> it took some effort to reproduce this >>> in one of the DPDK example applications. However with the attached patc= h >>> to the load-balancer example we can >>> reproduce the issues reliably. >>>=20 >>> The patch may look weird at first but I will explain why I made these >>> changes: >>>=20 >>> * the sleep introduced in the worker threads simulates heavy processing >>> which causes the software rx rings to fill >>> up under load. If the rings are large enough (I increased the ring >>> size with the load-balancer command line option >>> as you can see in the example call further down) the mbuf pool may run >>> empty and I believe this leads to a malfunction >>> in the mlx5 driver. As soon as this happens the NIC will stop >>> forwarding traffic, probably because the driver >>> cannot allocate mbufs for the packets received by the NIC. >>> Unfortunately when this happens most of the mbufs will >>> never return to the mbuf pool so that even when the traffic stops the >>> pool will remain almost empty and the >>> application will not forward traffic even at a very low rate. >>>=20 >>> * the use of the reference count in the mbuf in addition to the >>> situation described above is what makes the >>> mlx5 DPDK driver crash almost immediately under load. In our >>> application we rely on this feature to be able to forward >>> the packet quickly and still send the packet to a worker thread for >>> analysis and finally free the packet when analysis is >>> done. Here I simulated this by increasing the mbuf reference count >>> immediately after receiving the mbuf from the >>> driver and then calling rte_pktmbuf_free in the worker thread which >>> should only decrement the reference count again >>> and not actually free the mbuf. >>>=20 >>> We executed the patched load-balancer application with the following >>> command line: >>>=20 >>> ./build/load_balancer -l 3-7 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx >>> "(0,3),(1,3)" --w "4" --lpm "16.0.0.0/8=3D>0; 48.0.0.0/8=3D>1;" --pos-l= b 29 >>> --rsz "1024, 32768, 1024, 1024" >>>=20 >>> Then we generated traffic using the t-rex traffic generator and the sfr >>> test case. On our machine the issues start >>> to happen when the traffic exceeds ~6 Gbps but this may vary depending >>> on how powerful the test machine is (by >>> the way we were able to reproduce this on different types of hardware). >>>=20 >>> A typical stacktrace looks like this: >>>=20 >>> Thread 1 "load_balancer" received signal SIGSEGV, Segmentation fault= . >>> 0x0000000000614475 in _mm_storeu_si128 (__B=3D..., __P=3D>> out>) at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716 >>> 716 __builtin_ia32_storedqu ((char *)__P, (__v16qi)__B); >>> (gdb) bt >>> #0 0x0000000000614475 in _mm_storeu_si128 (__B=3D..., __P=3D>> out>) at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716 >>> #1 rxq_cq_decompress_v (elts=3D0x7fff3732bef0, cq=3D0x7ffff7f99380, >>> rxq=3D0x7fff3732a980) at >>> /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:679 >>> #2 rxq_burst_v (pkts_n=3D, pkts=3D0xa7c7b0 , >>> rxq=3D0x7fff3732a980) at >>> /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1242 >>> #3 mlx5_rx_burst_vec (dpdk_rxq=3D0x7fff3732a980, pkts=3D>> out>, pkts_n=3D) at >>> /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1277 >>> #4 0x000000000043c11d in rte_eth_rx_burst (nb_pkts=3D3599, >>> rx_pkts=3D0xa7c7b0 , queue_id=3D0, port_id=3D0 '\000') >>> at >>> /root/dpdk-next-net//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:27= 81 >>> #5 app_lcore_io_rx (lp=3Dlp@entry=3D0xa7c700 , >>> n_workers=3Dn_workers@entry=3D1, bsz_rd=3Dbsz_rd@entry=3D144, >>> bsz_wr=3Dbsz_wr@entry=3D144, pos_lb=3Dpos_lb@entry=3D29 '\035') >>> at /root/dpdk-next-net/examples/load_balancer/runtime.c:198 >>> #6 0x0000000000447dc0 in app_lcore_main_loop_io () at >>> /root/dpdk-next-net/examples/load_balancer/runtime.c:485 >>> #7 app_lcore_main_loop (arg=3D) at >>> /root/dpdk-next-net/examples/load_balancer/runtime.c:669 >>> #8 0x0000000000495e8b in rte_eal_mp_remote_launch () >>> #9 0x0000000000441e0d in main (argc=3D, >>> argv=3D) at >>> /root/dpdk-next-net/examples/load_balancer/main.c:99 >>>=20 >>> The crash does not always happen at the exact same spot but in our test= s >>> always in the same function. >>> In a few instances instead of an application crash the system froze >>> completely with what appeared to be a kernel >>> panic. The last output looked like a crash in the interrupt handler of = a >>> mlx5 module but unfortunately I cannot >>> provide the exact output right now. >>>=20 >>> All tests were performed under Ubuntu 16.04 server running a >>> 4.4.0-96-generic kernel and the lasted Mellanox OFED >>> MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64 was used. >>>=20 >>> Any help with this issue is greatly appreciated. >>>=20 >>> Best regards, >>> Martin >>>=20 >>> >=20