From: David Marchand
Date: Wed, 11 Mar 2020 10:09:04 +0100
To: "Van Haaren, Harry"
Cc: dev, Aaron Conole
References: <20200310133304.39951-1-harry.van.haaren@intel.com>
Subject: Re: [dpdk-dev] [PATCH] eal/service: fix exit by resetting service lcores
List-Id: DPDK patches and discussions

On Tue, Mar 10, 2020 at 5:38 PM Van Haaren, Harry wrote:
>
> > -----Original Message-----
> > From: David Marchand
> > Sent: Tuesday, March 10, 2020 4:31 PM
> > To: Van Haaren, Harry
> > Cc: dev; Aaron Conole
> > Subject: Re: [PATCH] eal/service: fix exit by resetting service lcores
> >
> > On Tue, Mar 10, 2020 at 2:32 PM Harry van Haaren
> > wrote:
> > > This commit releases all service cores from their role,
> > > returning them to ROLE_RTE on rte_service_finalize().
> > >
> > > This may fix an issue relating to the service cores causing
> > > a race condition on eal_cleanup(), where the service core
> > > could still be executing while the main thread has already
> > > freed the service memory, leading to a segfault.
> >
> > Adding rte_service_lcore_reset_all() just tells a (remaining) service
> > lcore to quit its loop, but does not close the race on lcore_states.
> >
> > The backtrace shows the same.
> >
> > (gdb) bt full
> > #0  rte_service_runner_func (arg=) at
> >     ../lib/librte_eal/common/rte_service.c:455
> >         service_mask = 1
> >         i =
> >         lcore = 1
> >         cs = 0x1003ea200
> > #1  0x00007ffff72030ef in eal_thread_loop (arg=) at
> >     ../lib/librte_eal/linux/eal/eal_thread.c:153
> >         fct_arg =
> >         c = 0 '\000'
> >         n =
> >         ret =
> >         lcore_id =
> >         thread_id = 140737203603200
> >         m2s = 14
> >         s2m = 22
> >         cpuset = "1", '\000' ,
> >           "\200\000\000\000\000\000\000\000\221\354e\360\377\177", '\000'
> >         __func__ = "eal_thread_loop"
> > #2  0x00007ffff065ddd5 in start_thread () from /lib64/libpthread.so.0
> > No symbol table info available.
> > #3  0x00007ffff038702d in clone () from /lib64/libc.so.6
> > No symbol table info available.
> >
> > I added a rte_eal_mp_wait_lcore(), to ensure that each service lcore
> > _did_ quit its loop.
> >
> > @@ -123,6 +123,7 @@ rte_service_finalize(void)
> >                 return;
> >
> >         rte_service_lcore_reset_all();
> > +       rte_eal_mp_wait_lcore();
> >
> >         rte_free(rte_services);
> >         rte_free(lcore_states);
> >
> > I can't reproduce with this.
>
> OK - that's good news, thanks for the quick testing & feedback.
>
> Agree with your analysis of the above, indeed waiting for the cores
> explicitly seems the right solution to remove the race.

Another thing that seemed odd with your patch is that the unit test
already calls rte_service_lcore_reset_all() as part of the
unregister_all() helper.

Why don't we ensure that calling rte_service_lcore_start|stop|reset_all
guarantees the status of the service lcores?
Putting explicit (and documented) synchronisation points in the
rte_service API seems the right fix to me, and could help remove those
rte_delay() calls we have in the unit test.

-- 
David Marchand
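
PS: For illustration, a rough sketch of what such a synchronisation
point could look like (hypothetical helper name and shape, not the
actual rte_service code; it only combines the existing reset with a
per-lcore wait):

    #include <rte_launch.h>
    #include <rte_lcore.h>
    #include <rte_service.h>

    /* Hypothetical sketch: make the reset itself a synchronisation
     * point, so callers such as rte_service_finalize() or the unit
     * test's unregister_all() no longer need rte_delay() or an
     * explicit rte_eal_mp_wait_lcore().
     */
    static void
    service_lcore_reset_all_sync(void)
    {
            unsigned int lcore_id;

            /* Release all service lcores back to ROLE_RTE and tell
             * them to quit their service loop, as today.
             */
            rte_service_lcore_reset_all();

            /* Block until every worker lcore is back in WAIT state,
             * i.e. has really left rte_service_runner_func() and can
             * no longer touch lcore_states.  (Like the snippet above,
             * this waits on all worker lcores, not only service ones.)
             */
            RTE_LCORE_FOREACH_SLAVE(lcore_id)
                    rte_eal_wait_lcore(lcore_id);
    }

rte_service_finalize() could then call this and drop the extra wait,
and the unit test could rely on the same guarantee instead of delays.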