From: Thomas Monjalon <thomas@monjalon.net>
To: Huisong Li <lihuisong@huawei.com>
Cc: dev@dpdk.org, ferruh.yigit@intel.com, david.marchand@redhat.com
Date: Sat, 18 Sep 2021 10:46:21 +0200
Message-ID: <2004569.RrOHqjGOaX@thomas>
In-Reply-To: <4f4e5d5f-3e52-6974-fa07-c1c587d92e80@huawei.com>
References: <20210907034108.58763-1-lihuisong@huawei.com>
 <2969231.2FXvsJDIr2@thomas> <4f4e5d5f-3e52-6974-fa07-c1c587d92e80@huawei.com>
Subject: Re: [dpdk-dev] [RFC V1] examples/l3fwd-power: fix memory leak for
 rte_pci_device

18/09/2021 05:24, Huisong Li:
> On 2021/9/17 20:50, Thomas Monjalon wrote:
> > 17/09/2021 04:13, Huisong Li:
> >> On 2021/9/16 18:36, Thomas Monjalon wrote:
> >>> 16/09/2021 10:01, Huisong Li:
> >>>> On 2021/9/8 15:20, Thomas Monjalon wrote:
> >>>>> 08/09/2021 04:01, Huisong Li:
> >>>>>> On 2021/9/7 16:53, Thomas Monjalon wrote:
> >>>>>>> 07/09/2021 05:41, Huisong Li:
> >>>>>>>> Calling rte_eth_dev_close() will release the resources of an eth
> >>>>>>>> device and close it. But the rte_pci_device struct isn't released
> >>>>>>>> when the app exits, which will lead to a memory leak.
> >>>>>>> That's a PMD issue.
> >>>>>>> When the last port of a PCI device is closed, the device should
> >>>>>>> be freed.
> >>>>>> Why is this a PMD problem? I don't understand.
> >>>>> In the PMD close function, freeing of PCI device must be managed,
> >>>>> so the app doesn't have to bother.
> >>>> I know what you mean. Currently, there are two ways to close a PMD
> >>>> device (rte_eth_dev_close() and rte_dev_remove()).
> >>>>
> >>>> For rte_dev_remove(), the eth device can be closed and the
> >>>> rte_pci_device can also be freed, so the app doesn't have to care
> >>>> about that.
> >>>>
> >>>> But dev_close() only closes the eth device; nothing about
> >>>> rte_pci_device is involved in the framework-layer call stack of
> >>>> dev_close(). The rte_pci_device is allocated and initialized when
> >>>> the rte_pci_bus scans the "/sys/bus/pci/devices" directory.
> >>>>
> >>>> Generally, the PMD of eth devices operates on the basis of eth
> >>>> devices, and rarely on rte_pci_device.
> >>> No. The PMD handles the relation between the PCI device and the
> >>> ethdev port.
> >> It seems that the ethdev layer can create eth devices based on
> >> rte_pci_device, but does not release rte_pci_device.
> > No, the ethdev layer does not manage any bus.
> > Only the PMD does that.
>
> I don't mean that the ethdev layer manages the bus.
>
> I mean, it neither allocates rte_pci_device nor frees it.
>
> >>>> And the rte_pci_device corresponding to the eth devices is managed
> >>>> and processed by rte_pci_bus.
> >>>>
> >>>> So, the PMD is closed based only on the port ID of the eth device,
> >>>> which only shuts down the eth device; it does not free the
> >>>> rte_pci_device or remove it from rte_pci_bus.
> >>> Not really.
> >> I do not see any PMD driver releasing rte_pci_device in dev_close().
> > Maybe not but we should.
>
> I'm sure.
>
> As far as I know, the PMD does not free rte_pci_device for devices under
> the PCI bus, whether ethdev or dmadev.
>
> >
> >>> If there is no port using the PCI device, it should be released.
> >> Yes.
> >>>>>> As far as I know, most apps or examples in the DPDK project have
> >>>>>> only one port for a PCI device.
> >>>>> The number of ports per PCI device is driver-specific.
> >>>>>
> >>>>>> When the port is closed, the rte_pci_device should be freed. But
> >>>>>> none of the apps seem to do this.
> >>>>> That's because, from the app's point of view, only ports should be
> >>>>> managed. The hardware device is managed by the PMD.
> >>>>> Only drivers (PMDs) have to do the relation between class ports
> >>>>> and hardware devices.
> >>>> Yes. But the current app only closes the port to disable the PMD, and
> >>>> the rte_pci_device cannot be freed.
> >>> Why not?
> >> Because most apps in DPDK call dev_close() to close the eth device
> >> corresponding to a port.
> > You don't say why the underlying PCI device could not be freed.
> In the current implementation, neither rte_eth_dev_close() in the ethdev
> layer nor dev_close() in the PMD frees it.
> >
> >>>> Because rte_pci_device cannot be released in the PMD's dev_close(),
> >>>> and is managed by the framework layer.
> >>> No
> >>>
> >>>> Btw, excluding rte_dev_probe() and rte_dev_remove(), it seems that
> >>>> the DPDK framework only automatically scans PCI devices, but does
> >>>> not automatically release them when the process exits.
> >>> Indeed, because such freeing is the responsibility of the PMD.
> >> Do you mean to free rte_pci_device in the dev_close() API?
> > I mean free the PCI device in the PMD implementation of dev_close.
>
> I don't think it's reasonable.

What is not reasonable, is to not free a device which is closed.

> In the normal process, the rte_pci_device is allocated in rte_eal_init()
> when the PCI bus scans "/sys/bus/pci/devices" via rte_bus_scan(), and is
> inserted into rte_pci_bus.device_list.
>
> Then rte_bus_probe(), also called in rte_eal_init(), matches the
> rte_pci_device with an rte_pci_driver registered under rte_pci_bus to
> generate an eth device.
>
> From this point of view, the rte_pci_device should be managed and
> released by the rte_pci_bus.
>
> Generally, teardown should happen in reverse order: release the eth
> device first, and then release the rte_pci_device.

Same for mbuf in mempool: allocation is done by the app,
free is done by the PMD.
Not everything is symmetrical.
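
For example, a minimal sketch of the mbuf case (port_id and mbuf_pool
are placeholder names):

    /* the app allocates the mbuf from the pool */
    struct rte_mbuf *m = rte_pktmbuf_alloc(mbuf_pool);
    if (m != NULL) {
        /* ... fill in the packet data ... */
        if (rte_eth_tx_burst(port_id, 0, &m, 1) == 0)
            rte_pktmbuf_free(m); /* not sent: still owned by the app */
        /* if sent, the PMD frees the mbuf after transmission */
    }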

> Therefore the rte_pci_device should not be freed in the PMD
> implementation of dev_close.
>
> >
> >> How should PMD free it? What should we do? Any good suggestions?
> > Check that there is no other port sharing the same PCI device,
> > then call the PMD callback for rte_pci_remove_t.
>
> For primary and secondary processes, their rte_pci_device is independent.

Yes, it requires freeing it in both the primary and secondary processes.

> Is this for a scenario where there are multiple representor ports under
> the same PCI address in the same process?

A PCI device can have multiple physical or representor ports.
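
A rough sketch of that check in a PMD's dev_close (example_dev_close and
example_pci_remove are hypothetical names; a real driver would also have
to handle the primary/secondary split mentioned above):

    static int
    example_dev_close(struct rte_eth_dev *dev)
    {
        struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
        uint16_t port;

        /* ... release the per-port resources of this ethdev ... */

        /* is another still-valid port backed by the same PCI device? */
        RTE_ETH_FOREACH_DEV(port) {
            if (port != dev->data->port_id &&
                rte_eth_devices[port].device == dev->device)
                return 0; /* device still shared: keep it */
        }

        /* last port closed: run the PMD's rte_pci_remove_t path so the
         * PCI bus can free the rte_pci_device as well */
        return example_pci_remove(pci_dev);
    }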

> >> Would it be more appropriate to do this in rte_eal_cleanup() if it
> >> can't be done in the API above?
> > rte_eal_cleanup is a last cleanup for what was not done earlier.
> > We could do that but first we should properly free devices when closed.
> >
> Overall, it is appropriate that rte_eal_cleanup() is responsible for
> releasing devices under the PCI bus.

Yes, but if a device is closed while the rest of the app keeps running,
we should not wait to free it.
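
From the app side, the expected flow would then be (a sketch; port_id is
a placeholder):

    /* closing the last port of a device should also release the
     * underlying rte_pci_device, without waiting for process exit */
    rte_eth_dev_close(port_id);

    /* ... the rest of the app keeps running ... */

    rte_eal_cleanup(); /* last-resort cleanup for what remains */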