From: Thomas Monjalon <thomas@monjalon.net>
To: Huisong Li <lihuisong@huawei.com>
Cc: dev@dpdk.org, ferruh.yigit@intel.com, david.marchand@redhat.com
Date: Sun, 26 Sep 2021 21:16:09 +0200
Message-ID: <4188639.QZEZhMFxuf@thomas>
In-Reply-To: <430246ab-36ce-402f-8570-d305ada9d720@huawei.com>
References: <20210907034108.58763-1-lihuisong@huawei.com>
 <2004569.RrOHqjGOaX@thomas> <430246ab-36ce-402f-8570-d305ada9d720@huawei.com>
Subject: Re: [dpdk-dev] [RFC V1] examples/l3fwd-power: fix memory leak for
 rte_pci_device

26/09/2021 14:20, Huisong Li:
> On 2021/9/18 16:46, Thomas Monjalon wrote:
> > 18/09/2021 05:24, Huisong Li:
> >> On 2021/9/17 20:50, Thomas Monjalon wrote:
> >>> 17/09/2021 04:13, Huisong Li:
> >>>> How should PMD free it? What should we do? Any good suggestions?
> >>> Check that there is no other port sharing the same PCI device,
> >>> then call the PMD callback for rte_pci_remove_t.
> >> For primary and secondary processes, their rte_pci_device is independent.
> > Yes, it requires freeing on both primary and secondary.
> >
> >> Is this for a scenario where there are multiple representor ports under
> >> the same PCI address in the same process?
> > A PCI device can have multiple physical or representor ports.
> Got it.
> >
> >>>> Would it be more appropriate to do this in rte_eal_cleanup() if it
> >>>> can't be done in the API above?
> >>> rte_eal_cleanup is a last cleanup for what was not done earlier.
> >>> We could do that but first we should properly free devices when closed.
> >>>
> >> Agreed, it is appropriate that rte_eal_cleanup is responsible for
> >> releasing devices under the pci bus.
> > Yes, but if a device is closed while the rest of the app keeps running,
> > we should not wait to free it.
>
> From this point of view, it seems to make sense. However, according to
> the OVS-DPDK usage, it calls dev_close() first, and then checks whether
> all ports under the PCI address are closed before freeing rte_pci_device
> by calling rte_dev_remove().
>
> If we do not want the user to be aware of this, and we want
> rte_pci_device to be freed in a timely manner, can we add logic to
> rte_eth_dev_close() that counts the ports under a PCI address and calls
> rte_dev_remove() to free rte_pci_device and delete it from rte_pci_bus?
>
> If we do, we may need some extra work, otherwise some applications,
> such as OVS-DPDK, will fail due to a second call to rte_dev_remove().

I don't understand the proposal.
Could you please explain the code path again?
It may deserve a separate mail thread.