From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <19202698-59ac-d920-2635-851e2e89694e@huawei.com>
Date: Fri, 8 Oct 2021 14:26:06 +0800
From: "lihuisong (C)"
To: Thomas Monjalon
References: <20210907034108.58763-1-lihuisong@huawei.com> <6934941a-23e0-5466-3013-69a79c42fec0@huawei.com> <2162308.5El5PrcoHi@thomas>
In-Reply-To: <2162308.5El5PrcoHi@thomas>
Subject: Re: [dpdk-dev] [RFC V1] examples/l3fwd-power: fix memory leak for rte_pci_device
On 2021/9/30 15:50, Thomas Monjalon wrote:
> 30/09/2021 08:28, Huisong Li:
>> Hi, Thomas,
>>
>> I have summed up our previous discussion.
>>
>> Could you look at the final proposal again?
>> Do you think we should handle the problem in a better way?
> I don't understand what the final proposal is.

Sorry. The last idea we discussed was the following.

As you mentioned, we do not want the user to have to free rte_pci_device, and we want rte_pci_device to be freed in a timely manner. So could we add logic to rte_eth_dev_close() that counts the ports under a PCI address and, when the last one is closed, calls rte_dev_remove() to free the rte_pci_device and delete it from rte_pci_bus?

If we do, we may need some extra work; otherwise some applications, such as OVS-DPDK, will fail due to a second call to rte_dev_remove(). OVS-DPDK releases rte_pci_device as follows: it calls dev_close() first, and then checks whether all ports under the PCI address are closed before freeing rte_pci_device by calling rte_dev_remove().

If this is not clear enough, please take a look at the discussion in our email thread. Thanks. 😁

>
>
>> On 2021/9/27 9:44, Huisong Li wrote:
>>> On 2021/9/27 3:16, Thomas Monjalon wrote:
>>>> 26/09/2021 14:20, Huisong Li:
>>>>> On 2021/9/18 16:46, Thomas Monjalon wrote:
>>>>>> 18/09/2021 05:24, Huisong Li:
>>>>>>> On 2021/9/17 20:50, Thomas Monjalon wrote:
>>>>>>>> 17/09/2021 04:13, Huisong Li:
>>>>>>>>> How should the PMD free it? What should we do? Any good suggestions?
>>>>>>>> Check that there is no other port sharing the same PCI device,
>>>>>>>> then call the PMD callback for rte_pci_remove_t.
>>>>>>> For primary and secondary processes, their rte_pci_device is
>>>>>>> independent.
>>>>>> Yes, it requires freeing on both primary and secondary.
>>>>>>
>>>>>>> Is this for a scenario where there are multiple representor ports
>>>>>>> under the same PCI address in the same process?
>>>>>> A PCI device can have multiple physical or representor ports.
>>>>> Got it.
>>>>>>>>> Would it be more appropriate to do this in rte_eal_cleanup() if it
>>>>>>>>> can't be done in the API above?
>>>>>>>> rte_eal_cleanup is a last cleanup for what was not done earlier.
>>>>>>>> We could do that, but first we should properly free devices when
>>>>>>>> they are closed.
>>>>>>> Totally, it is appropriate that rte_eal_cleanup is responsible for
>>>>>>> releasing devices under the PCI bus.
>>>>>> Yes, but if a device is closed while the rest of the app keeps
>>>>>> running, we should not wait to free it.
>>>>> From this point of view, it seems to make sense. However, according
>>>>> to the OVS-DPDK usage, it calls dev_close() first, and then checks
>>>>> whether all ports under the PCI address are closed before freeing
>>>>> rte_pci_device by calling rte_dev_remove().
>>>>>
>>>>> If we do not want the user to be aware of this, and we want
>>>>> rte_pci_device to be freed in a timely manner, can we add logic to
>>>>> rte_eth_dev_close() that counts the ports under a PCI address and
>>>>> calls rte_dev_remove() to free rte_pci_device and delete it from
>>>>> rte_pci_bus?
>>>>>
>>>>> If we do, we may need some extra work; otherwise some applications,
>>>>> such as OVS-DPDK, will fail due to a second call to rte_dev_remove().
>>>> I don't understand the proposal.
>>>> Could you please explain the code path again?
>>> 1. This RFC patch intended to free rte_pci_device in the DPDK app by
>>> calling rte_dev_remove() after calling dev_close().
>>>
>>> 2. For the above-mentioned usage in OVS-DPDK, please see the function
>>> netdev_dpdk_destruct() in lib/netdev-dpdk.c.
>>>
>>> 3. Later, you suggested that the release of rte_pci_device should be
>>> done in the dev_close() API, not in rte_eal_cleanup(), which is not
>>> timely enough.
>>>
>>> To sum up, that is how the above proposal came about.
>>>
>>>> It may deserve a separate mail thread.