From: Thomas Monjalon
To: Stephen Hemminger
Cc: anatoly.burakov@intel.com, stable@dpdk.org, dev@dpdk.org, david.marchand@redhat.com
Subject: Re: [PATCH] eal: fix data race in multi-process support
Date: Sun, 13 Feb 2022 12:39:59 +0100
Message-ID: <9400637.ag9G3TJQzC@thomas>
In-Reply-To: <20211217182922.159503-1-stephen@networkplumber.org>
References: <20211217181649.154972-1-stephen@networkplumber.org> <20211217182922.159503-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions
17/12/2021 19:29, Stephen Hemminger:
> If DPDK is built with thread sanitizer it reports a race
> in setting of the multiprocess file descriptor. The fix is to
> use atomic operations when updating mp_fd.

Could you please explain the conditions of the race in more detail?
Is it between init and cleanup of the same file descriptor?
How does making the access atomic help here?

>
> Simple example:
> $ dpdk-testpmd -l 1-3 --no-huge
> ...
> EAL: Error - exiting with code: 1
>   Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory
> ==================
> WARNING: ThreadSanitizer: data race (pid=83054)
>   Write of size 4 at 0x55e3b7fce450 by main thread:
>     #0 rte_mp_channel_cleanup (dpdk-testpmd+0x160d79c)
>     #1 rte_eal_cleanup (dpdk-testpmd+0x1614fb5)
>     #2 rte_exit (dpdk-testpmd+0x15ec97a)
>     #3 mbuf_pool_create.cold (dpdk-testpmd+0x242e1a)
>     #4 main (dpdk-testpmd+0x5ab05d)
>
>   Previous read of size 4 at 0x55e3b7fce450 by thread T2:
>     #0 mp_handle (dpdk-testpmd+0x160c979)
>     #1 ctrl_thread_init (dpdk-testpmd+0x15ff76e)
>
>   As if synchronized via sleep:
>     #0 nanosleep ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:362 (libtsan.so.0+0x5cd8e)
>     #1 get_tsc_freq (dpdk-testpmd+0x1622889)
>     #2 set_tsc_freq (dpdk-testpmd+0x15ffb9c)
>     #3 rte_eal_timer_init (dpdk-testpmd+0x1622a34)
>     #4 rte_eal_init.cold (dpdk-testpmd+0x26b314)
>     #5 main (dpdk-testpmd+0x5aab45)
>
>   Location is global 'mp_fd' of size 4 at 0x55e3b7fce450 (dpdk-testpmd+0x0000027c7450)
>
>   Thread T2 'rte_mp_handle' (tid=83057, running) created by main thread at:
>     #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:962 (libtsan.so.0+0x58ba2)
>     #1 rte_ctrl_thread_create (dpdk-testpmd+0x15ff870)
>     #2 rte_mp_channel_init.cold (dpdk-testpmd+0x269986)
>     #3 rte_eal_init (dpdk-testpmd+0x1615b28)
>     #4 main (dpdk-testpmd+0x5aab45)
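For readers following the thread, here is a minimal sketch (not the actual patch) of the pattern the commit message describes: the global 'mp_fd' named in the TSan report is read by the control thread in mp_handle() and written by the main thread via rte_mp_channel_cleanup(), and both accesses go through atomic operations. GCC/Clang __atomic builtins with relaxed ordering are assumed here; the real EAL code and the memory ordering chosen in the patch may differ.

    /* Illustrative sketch only -- not the DPDK source.
     * mp_fd is the shared channel descriptor flagged by ThreadSanitizer:
     * the control thread polls it while the main thread resets it during
     * cleanup.  Making both accesses atomic removes the plain read/write
     * race that TSan reports.
     */
    #include <unistd.h>

    static int mp_fd = -1;

    /* control thread loop, as in the mp_handle frame of the TSan report */
    static void *
    mp_handle(void *arg)
    {
        (void)arg;
        while (__atomic_load_n(&mp_fd, __ATOMIC_RELAXED) >= 0) {
            /* receive and dispatch one multi-process message here */
        }
        return NULL;
    }

    /* called from rte_eal_cleanup(), as in the rte_mp_channel_cleanup frame */
    static void
    mp_channel_cleanup(void)
    {
        /* atomically mark the channel as closed, then close the old fd */
        int fd = __atomic_exchange_n(&mp_fd, -1, __ATOMIC_RELAXED);

        if (fd >= 0)
            close(fd);
    }

With both sides using atomic accesses, ThreadSanitizer no longer sees a plain concurrent read and write of mp_fd. Whether relaxed ordering is also enough to order the close() against in-flight reads in the control thread is essentially the question raised above.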