DPDK patches and discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: Huisong Li <lihuisong@huawei.com>
Cc: dev@dpdk.org, ferruh.yigit@intel.com, david.marchand@redhat.com
Subject: Re: [dpdk-dev] [RFC V1] examples/l3fwd-power: fix memory leak for rte_pci_device
Date: Wed, 08 Sep 2021 09:20:31 +0200	[thread overview]
Message-ID: <4929922.EBv6eS3NRu@thomas> (raw)
In-Reply-To: <76ee3238-5d1f-70c5-3ec1-92662dea2185@huawei.com>

08/09/2021 04:01, Huisong Li:
> On 2021/9/7 16:53, Thomas Monjalon wrote:
> > 07/09/2021 05:41, Huisong Li:
> >> Calling rte_eth_dev_close() will release the resources of an eth device
> >> and close it. But the rte_pci_device struct isn't released when the app
> >> exits, which leads to a memory leak.
> > That's a PMD issue.
> > When the last port of a PCI device is closed, the device should be freed.
> 
> Why is this a PMD problem? I don't understand.

In the PMD close function, the freeing of the PCI device must be managed,
so the app doesn't have to bother.
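
To illustrate, here is a rough sketch (hypothetical mypmd_* names, not
taken from any existing driver) of a dev_close callback that releases
the device-level state together with the last port, so the application
never needs to call rte_dev_remove():

    /* Hypothetical PMD close callback (sketch only). */
    static int
    mypmd_dev_close(struct rte_eth_dev *eth_dev)
    {
        struct mypmd_adapter *ad = eth_dev->data->dev_private;

        /* Per-port cleanup: queues, interrupts, MAC filters, ... */
        mypmd_release_port_resources(eth_dev);

        /* When the last port of this PCI device is closed, free the
         * device-wide state (BAR mappings, shared adapter struct). */
        if (--ad->nb_ports_in_use == 0)
            mypmd_free_adapter(ad);

        return 0;
    }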

> As far as I know, most apps or examples in the DPDK project have only
> one port per PCI device.

The number of ports per PCI device is driver-specific.

> When the port is closed, the rte_pci_device should be freed. But none of 
> the apps seem to do this.

That's because, from the app's point of view, only ports should be managed.
The hardware device is managed by the PMD.
Only drivers (PMDs) have to handle the relation between class ports
and hardware devices.
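
As a minimal sketch of what that means for the application's exit path
(standard ethdev/EAL calls only, roughly the pattern the examples use):

    uint16_t portid;

    RTE_ETH_FOREACH_DEV(portid) {
        rte_eth_dev_stop(portid);
        rte_eth_dev_close(portid);  /* the PMD frees what it owns */
    }
    rte_eal_cleanup();  /* EAL releases the remaining shared resources */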

> >> +		/* Retrieve device address in eth device before closing it. */
> >> +		eth_dev = &rte_eth_devices[portid];
> > You should not access this array; it is considered internal.
> 
> We have to save the address of the rte_device to free the rte_pci_device
> before closing the eth device.
> 
> Because the device address in the rte_eth_dev struct will be set to NULL
> after closing the eth device.
> 
> It's also handled this way in OVS.

No, you don't have to call rte_dev_remove() at all from an app.

> >> +		rte_dev = eth_dev->device;
> >>   		rte_eth_dev_close(portid);
> >> +		ret = rte_dev_remove(rte_dev);





Thread overview: 17+ messages
2021-09-07  3:41 Huisong Li
2021-09-07  8:53 ` Thomas Monjalon
2021-09-08  2:01   ` Huisong Li
2021-09-08  7:20     ` Thomas Monjalon [this message]
2021-09-16  8:01       ` Huisong Li
2021-09-16 10:36         ` Thomas Monjalon
2021-09-17  2:13           ` Huisong Li
2021-09-17 12:50             ` Thomas Monjalon
2021-09-18  3:24               ` Huisong Li
2021-09-18  8:46                 ` Thomas Monjalon
2021-09-26 12:20                   ` Huisong Li
2021-09-26 19:16                     ` Thomas Monjalon
2021-09-27  1:44                       ` Huisong Li
2021-09-30  6:28                         ` Huisong Li
2021-09-30  7:50                           ` Thomas Monjalon
2021-10-08  6:26                             ` lihuisong (C)
2021-10-08  6:29                               ` Thomas Monjalon
