Date: Tue, 1 Mar 2022 14:46:02 -0800
From: Stephen Hemminger
To: Cliff Burdick
Cc: "Lombardo, Ed", users@dpdk.org
Subject: Re: How to increase mbuf size in dpdk version 17.11
Message-ID: <20220301144602.73c8ff95@hermes.local>

On Tue, 1 Mar 2022 13:37:07 -0800
Cliff Burdick wrote:

> Can you verify how many buffers you're allocating? I don't see how many
> you're allocating in this thread.
>
> On Tue, Mar 1, 2022 at 1:30 PM Lombardo, Ed wrote:
> >
> > Hi Stephen,
> > The VM is configured to have 32 GB of memory.
> > Will dpdk consume the 2GB of hugepage memory for the mbufs?
> > I don't mind having less mbufs with mbuf size of 16K vs original mbuf
> > size of 2K.
> >
> > Thanks,
> > Ed
> >
> > -----Original Message-----
> > From: Stephen Hemminger
> > Sent: Tuesday, March 1, 2022 2:57 PM
> > To: Lombardo, Ed
> > Cc: users@dpdk.org
> > Subject: Re: How to increase mbuf size in dpdk version 17.11
> >
> > On Tue, 1 Mar 2022 18:34:22 +0000
> > "Lombardo, Ed" wrote:
> >
> > > Hi,
> > > I have an application built with dpdk 17.11.
> > > During initialization I want to change the mbuf size from 2K to 16K.
> > > I want to receive packet sizes of 8K or more in one mbuf.
> > >
> > > The VM running the application is configured to have 2G hugepages.
> > >
> > > I tried many things and I get an error when a packet arrives.
> > >
> > > I read online that there is #define DEFAULT_MBUF_DATA_SIZE that I
> > > changed from 2176 to ((2048*8)+128), where 128 is for headroom.
> > > The call to rte_pktmbuf_pool_create() returns success with my changes.
> > > From the rte_mempool_dump() - "rx_nombuf" - Total number of Rx mbuf
> > > allocation failures. This value increments each time a packet arrives.
> > >
> > > Is there any reference document explaining what causes this error?
> > > Is there a user guide I should follow to make the mbuf size change,
> > > starting with the hugepage value?
> > >
> > > Thanks,
> > > Ed
> >
> > Did you check that you have enough memory in the system for the larger
> > footprint?
> >
> > Using 16K per mbuf is going to cause lots of memory to be consumed.

A little math; you can fill in your own values. Assuming you want 16K of
data per mbuf, you need at a minimum [1]:

	num_rxq := total number of receive queues
	num_rxd := number of receive descriptors per receive queue
	num_txq := total number of transmit queues (assume all can be full)
	num_txd := number of transmit descriptors per transmit queue

	num_mbufs = num_rxq * num_rxd + num_txq * num_txd + num_cores * burst_size

Assuming you are using code copy/pasted from some example like l3fwd,
with 4 Rx queues:

	num_mbufs = 4 * 1024 + 4 * 1024 + 4 * 32 = 8320

Each mbuf element requires [2]:

	elt_size = sizeof(struct rte_mbuf) + HEADROOM + mbuf_size
	         = 128 + 128 + 16K = 16640
	obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL) = 16832

So the total pool is:

	num_mbufs * obj_size = 8320 * 16832 = 140,042,240 bytes ~ 134 MiB

[1] Some devices like bnxt need multiple buffers per packet.
[2] Often applications want additional space per mbuf for meta-data.
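
In case it helps to rerun that arithmetic with your own numbers, here is a
small stand-alone sketch of the same calculation. It links against the DPDK
libraries so it can use rte_mempool_calc_obj_size(); the queue, descriptor,
core and burst counts are only the l3fwd-style example values from above,
not anything taken from Ed's application.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>

int main(void)
{
	/* Example values only; substitute your own queue, descriptor,
	 * core and burst counts. */
	uint32_t num_rxq = 4, num_rxd = 1024;
	uint32_t num_txq = 4, num_txd = 1024;
	uint32_t num_cores = 4, burst_size = 32;

	uint32_t num_mbufs = num_rxq * num_rxd + num_txq * num_txd +
			     num_cores * burst_size;

	/* Per-element size: mbuf header + headroom + 16 KiB of data. */
	uint32_t elt_size = sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM +
			    16 * 1024;
	/* Add the mempool's own per-object header/trailer overhead. */
	uint32_t obj_size = rte_mempool_calc_obj_size(elt_size, 0, NULL);

	uint64_t total = (uint64_t)num_mbufs * obj_size;

	printf("mbufs needed:   %u\n", num_mbufs);
	printf("bytes per mbuf: %u\n", obj_size);
	printf("pool footprint: %" PRIu64 " bytes (%.1f MiB)\n",
	       total, total / (1024.0 * 1024.0));
	return 0;
}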
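
And since the thread mentions rte_pktmbuf_pool_create(), a sketch of creating
the pool with a larger per-mbuf buffer through its data_room_size argument,
rather than by editing DEFAULT_MBUF_DATA_SIZE in the DPDK sources. The pool
name, mbuf count and cache size below are illustrative assumptions; size the
count with the formula above and call this after rte_eal_init().

#include <stdlib.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/* Illustrative values: 8320 mbufs from the example calculation above,
 * an assumed 256-entry per-lcore cache, and 16 KiB of data plus headroom. */
#define NUM_MBUFS	8320
#define MBUF_CACHE	256
#define BIG_DATAROOM	(16 * 1024 + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *
create_big_mbuf_pool(void)
{
	struct rte_mempool *mp;

	/* data_room_size (5th argument) sets the buffer size for this pool
	 * only, so the global DEFAULT_MBUF_DATA_SIZE stays untouched. */
	mp = rte_pktmbuf_pool_create("big_mbuf_pool", NUM_MBUFS, MBUF_CACHE,
				     0 /* priv_size */, BIG_DATAROOM,
				     rte_socket_id());
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "cannot create mbuf pool: %s\n",
			 rte_strerror(rte_errno));
	return mp;
}

The pool is then handed to rte_eth_rx_queue_setup() as usual; the footprint
worked out above still has to fit inside the 2G of hugepages.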