From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paras Jha
Date: Tue, 30 Apr 2019 11:37:56 -0400
To: Ferruh Yigit
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] rte_eth_tx_burst improperly freeing mbufs from KNI mbuf pool
List-Id: DPDK patches and discussions

Hi,

I think this issue is due to how the PMD frees mbufs. If the PMD is
configured with pool X and KNI is configured with pool Y, and pool Y has far
fewer mbufs available than pool X, then when the application calls tx_burst
on the PMD the mbufs are not freed, because the free threshold appears to be
based on how many mbufs are in the pool each mbuf originated from. Setting
the mbuf counts of pools X and Y equal resolved the issue, even with
arbitrarily small or large counts. This is a "gotcha" that isn't obvious
without experimentation.

On Tue, Apr 30, 2019 at 10:59 AM Ferruh Yigit wrote:

> On 4/10/2019 2:10 PM, Paras Jha wrote:
> > Hi all,
> >
> > I've been chasing down a strange issue related to rte_kni_tx_burst.
> >
> > My application calls rte_kni_rx_burst, which allocates from a discrete
> > mbuf pool using kni_allocate_mbufs. That traffic is immediately sent to
> > rte_eth_tx_burst, which does not seem to be freeing mbufs even upon
> > successful completion.
> >
> > My application follows the standard model of freeing mbufs only if the
> > number of tx mbufs is less than the rx mbufs - however, after sending as
> > many mbufs as there are in the pool, I get "KNI: Out of memory" soon
> > after when calling rte_kni_rx_burst.
> >
> > My concern is that if I free all mbufs allocated by the KNI during
> > rte_kni_rx_burst, the application seems to work as intended without
> > memory leaks, even though this goes against how the actual PMDs work.
> > Is this a bug, or intended behavior? The documented examples on the
> > DPDK website seem to only free mbufs if they fail to send, even in the
> > KNI example.
>
> The behavior in the KNI sample application is the correct thing to do:
> free only the mbufs that failed to Tx. As far as I understand you are
> doing the same thing as the kni sample app, so it should be OK; the
> sample app works fine.
>
> 'rte_eth_tx_burst()' sends packets to the kernel, so it shouldn't free
> the mbufs; userspace can't know when the kernel side will be done with
> them. When the kernel side is done, it puts the mbufs into the 'free_q'
> fifo, and 'kni_free_mbufs()' pulls the mbufs from the 'free_q' fifo and
> frees them. So the mbufs sent via 'rte_eth_tx_burst()' are freed
> asynchronously.
>
> I hope this helps.