From: Lance Richardson
Date: Mon, 13 Sep 2021 22:04:58 -0400
To: Stephen Hemminger
Cc: Ferruh Yigit, Kamaraj P, dev, "Burakov, Anatoly"
Subject: Re: [dpdk-dev] DPDK Max Mbuf Allocation

On Mon, Sep 13, 2021 at 11:51 AM Stephen Hemminger wrote:
>
> On Mon, 13 Sep 2021 16:43:18 +0100
> Ferruh Yigit wrote:
>
> > On 9/13/2021 5:56 AM, Kamaraj P wrote:
> > > Hello All,
> > >
> > > I would like to understand whether there are any guidelines for allocating
> > > the maximum number of mbufs per NIC.
> > > For example, if I have defined the following:
> > > #define RX_RING_SIZE 1024
> > > #define TX_RING_SIZE 1024
> > >
> > > The maximum number of RX/TX queues can be defined as 8 per NIC. What would
> > > be the maximum number of mbufs that can be allocated per NIC?
> > > Please share if there are any guidelines or any limitations on increasing
> > > the number of mbufs.
> > >
> >
> > Hi Kamaraj,
> >
> > The maximum number of queues and the maximum number of descriptors per queue
> > depend on the HW and vary from HW to HW.
> > This information is reported by the PMDs, and the application needs to take it
> > into account. For example, the descriptor limitations are provided by
> > 'rx_desc_lim'/'tx_desc_lim' etc.
> >
> > Once the descriptor count is decided, testpmd computes the mbuf count as
> > follows, which can be taken as a sample:
> >
> > nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX + RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST +
> > (nb_lcores * mb_mempool_cache);
> >
>
> It is a little more complicated, since some devices (like bnxt) allocate
> multiple mbufs per packet. Something like:

+1, and it's worth noting that this makes it difficult to run many sample
applications on the bnxt PMD.

> nb_mbuf_per_pool = MAX_RX_QUEUES * (RTE_TEST_RX_DESC_MAX * MBUF_PER_RX + MBUF_PER_Q)
>	+ MAX_TX_QUEUE * RTE_TEST_TX_DESC_MAX * MBUF_PER_TX
>	+ nb_lcores * MAX_PKT_BURST
>	+ nb_lcores * mb_mempool_cache
>	+ nb_lcores * PKTMBUF_POOL_RESERVED;
>
> Ended up with
>	MBUF_PER_RX = 3

For releases up to around 20.11, 3 is the correct value (one mbuf per RX ring
entry plus two mbufs in the aggregation ring per RX ring entry). Currently the
value for MBUF_PER_RX would be 5 (four mbufs in the aggregation ring for each
RX ring entry). BTW, a future version will avoid populating aggregation rings
with mbufs when neither LRO nor scattered receive is enabled.

>	MBUF_PER_Q = 6

Hmm, it's not clear where these would be allocated in the bnxt PMD. It seems
to me that MBUF_PER_Q is zero for the bnxt PMD.

> and when using jumbo
>	MBUF_PER_TX = MAX_MTU / MBUF_DATA_SIZE = 2

I don't think this is correct: the bnxt PMD allocates TX descriptor rings with
the requested number of descriptors from tx_queue_setup(), so that is the
maximum number of mbufs that can be present in a TX ring.
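
To make the testpmd-style sizing above concrete, here is a minimal sketch (not
taken from testpmd itself): the descriptor, burst, and cache constants are
illustrative assumptions, and the real per-device limits should come from
rte_eth_dev_info_get() via dev_info.rx_desc_lim / dev_info.tx_desc_lim as
Ferruh notes.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Illustrative values only -- real limits come from rte_eth_dev_info_get(). */
#define RX_DESC_MAX      2048
#define TX_DESC_MAX      2048
#define MAX_PKT_BURST      32
#define MB_MEMPOOL_CACHE  256

static struct rte_mempool *
create_pktmbuf_pool(void)
{
	unsigned int nb_lcores = rte_lcore_count();

	/* testpmd-style estimate: enough mbufs to fill the RX and TX rings,
	 * plus one burst and one mempool cache per lcore. */
	unsigned int nb_mbuf_per_pool = RX_DESC_MAX + TX_DESC_MAX +
					MAX_PKT_BURST +
					nb_lcores * MB_MEMPOOL_CACHE;

	return rte_pktmbuf_pool_create("pktmbuf_pool", nb_mbuf_per_pool,
				       MB_MEMPOOL_CACHE, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
}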
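
And a sketch of the more conservative per-device estimate discussed in this
thread, using MBUF_PER_RX = 5 per Lance's note for current releases (3 up to
roughly 20.11) and treating the fixed per-queue overhead as zero. The queue
counts, burst size, cache size, and reserve are assumptions for illustration,
not values taken from the bnxt PMD.

#include <rte_lcore.h>

#define MAX_RX_QUEUES           8   /* assumed per-port queue counts */
#define MAX_TX_QUEUES           8
#define MBUF_PER_RX             5   /* 1 per RX ring entry + 4 per agg ring entry; 3 before ~20.11 */
#define MBUF_PER_Q              0   /* per the discussion above, no fixed per-queue mbufs in bnxt */
#define MAX_PKT_BURST          32
#define MB_MEMPOOL_CACHE      256
#define PKTMBUF_POOL_RESERVED  64   /* assumed per-lcore slack */

static unsigned int
estimate_nb_mbuf(unsigned int rx_desc, unsigned int tx_desc)
{
	unsigned int nb_lcores = rte_lcore_count();

	return MAX_RX_QUEUES * (rx_desc * MBUF_PER_RX + MBUF_PER_Q) +
	       MAX_TX_QUEUES * tx_desc +    /* at most one mbuf per TX descriptor */
	       nb_lcores * MAX_PKT_BURST +
	       nb_lcores * MB_MEMPOOL_CACHE +
	       nb_lcores * PKTMBUF_POOL_RESERVED;
}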