From: Stephen Hemminger
To: Ferruh Yigit
Cc: Kamaraj P, dev <dev@dpdk.org>, "Burakov, Anatoly"
Date: Mon, 13 Sep 2021 08:51:04 -0700
Message-ID: <20210913085104.064bcc39@hermes.local>
In-Reply-To: <2a936b73-9935-6cd9-6d05-780d2f28982f@intel.com>
Subject: Re: [dpdk-dev] DPDK Max Mbuf Allocation

On Mon, 13 Sep 2021 16:43:18 +0100
Ferruh Yigit wrote:

> On 9/13/2021 5:56 AM, Kamaraj P wrote:
> > Hello All,
> >
> > Would like to understand whether there are any guidelines for allocating the
> > maximum number of mbufs per NIC.
> > For example, if I have defined the following:
> >
> > #define RX_RING_SIZE 1024
> > #define TX_RING_SIZE 1024
> >
> > The maximum number of RX/TX queues can be defined as 8 per NIC. What would
> > be the maximum number of mbufs that can be allocated per NIC?
> > Please share if there are any guidelines or any limitation on increasing
> > the number of mbufs.
>
> Hi Kamaraj,
>
> The maximum number of queues and the maximum number of descriptors per queue
> depend on the HW and vary from HW to HW. This information is reported by the
> PMDs, and the application needs to take it into account; for example, the
> descriptor limitations are provided by 'rx_desc_lim'/'tx_desc_lim' in
> 'struct rte_eth_dev_info'.
>
> After the descriptor counts are defined, testpmd calculates the mbuf count as
> follows, which can be taken as a sample:
>
> nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX + RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST +
>     (nb_lcores * mb_mempool_cache);

It is a little more complicated, since some devices (like bnxt) allocate
multiple mbufs per packet. Something like:

nb_mbuf_per_pool = MAX_RX_QUEUES * (RTE_TEST_RX_DESC_MAX * MBUF_PER_RX + MBUF_PER_Q)
                 + MAX_TX_QUEUES * RTE_TEST_TX_DESC_MAX * MBUF_PER_TX
                 + nb_lcores * MAX_PKT_BURST
                 + nb_lcores * mb_mempool_cache
                 + nb_lcores * PKTMBUF_POOL_RESERVED;

We ended up with

    MBUF_PER_RX = 3
    MBUF_PER_Q  = 6

and, when using jumbo frames,

    MBUF_PER_TX = MAX_MTU / MBUF_DATA_SIZE = 2
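
For reference, a minimal sketch of how that kind of sizing could feed into a
per-port pool created with rte_pktmbuf_pool_create(). The per-packet factors
(MBUF_PER_RX, MBUF_PER_Q, MBUF_PER_TX) and the queue/burst/cache/reserve
defines are illustrative assumptions from the discussion above, not values any
particular PMD guarantees; the descriptor counts are the ring sizes the
application configures, which must stay within the PMD's rx/tx_desc_lim.

/*
 * Rough sizing sketch. MBUF_PER_RX, MBUF_PER_Q, MBUF_PER_TX and the
 * queue/burst/cache/reserve numbers below are illustrative assumptions;
 * check your PMD's behaviour before relying on them.
 */
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
#define MAX_RX_QUEUES 8
#define MAX_TX_QUEUES 8
#define MAX_PKT_BURST 32
#define MB_MEMPOOL_CACHE 256
#define PKTMBUF_POOL_RESERVED 128
#define MBUF_PER_RX 3   /* assumed worst case for PMDs that use several mbufs per RX packet */
#define MBUF_PER_Q  6
#define MBUF_PER_TX 2   /* assumed MAX_MTU / MBUF_DATA_SIZE when using jumbo frames */

static struct rte_mempool *
create_port_pool(void)
{
    unsigned int lcores = rte_lcore_count();
    unsigned int nb_mbuf;

    /* Worst-case mbuf count for one port, per the formula above. */
    nb_mbuf = MAX_RX_QUEUES * (RX_RING_SIZE * MBUF_PER_RX + MBUF_PER_Q)
            + MAX_TX_QUEUES * TX_RING_SIZE * MBUF_PER_TX
            + lcores * MAX_PKT_BURST
            + lcores * MB_MEMPOOL_CACHE
            + lcores * PKTMBUF_POOL_RESERVED;

    /* One pool per port; data room sized for standard (non-jumbo) frames. */
    return rte_pktmbuf_pool_create("port_pool", nb_mbuf,
                                   MB_MEMPOOL_CACHE, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                   rte_socket_id());
}

The main point is that the pool has to cover everything that can hold an mbuf
at once: descriptors in every RX and TX ring (times any per-packet multiplier
the PMD needs), in-flight bursts, and the per-lcore mempool caches.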