Date: Fri, 22 Nov 2024 08:45:57 -0800
From: Stephen Hemminger
To: amit sehas
Cc: users@dpdk.org
Subject: Re: rte_pktmbuf_alloc() out of rte_mbufs

On Fri, 22 Nov 2024 02:38:55 +0000 (UTC)
amit sehas wrote:

> I am frequently running out of mbufs when allocating packets. When this
> happens, is there a way to dump counts of where the buffers are, so we
> know what is going on?
>
> I know that each rte_mbuf pool also has a per-core cache to speed up
> alloc/free, and some of the buffers will end up there; if a particular
> core is never used with a particular mempool, perhaps those mbufs are
> lost ... that is my rough guess ...
>
> How do you debug an out-of-mbufs issue?
>
> regards

The function rte_mempool_dump() will tell you some information about the
status of a particular mempool. If you enable mempool statistics you can
get more info.

The best way to size a memory pool is to account for all the possible
places mbufs can be waiting. Something like:

      Num Ports * Num RxQs * Num RxDs
    + Num Ports * Num TxQs * Num TxDs
    + Num Lcores * Burst Size
    + Num Lcores * Cache Size

Often, running out of mbufs is caused by a failure to free a received
mbuf, or by a buggy driver.
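
As a minimal sketch (not from the original mail; the pool name "mbuf_pool"
is a hypothetical name chosen at rte_pktmbuf_pool_create() time), the dump
mentioned above can be wired up like this: rte_mempool_avail_count() and
rte_mempool_in_use_count() give a quick split between free and outstanding
mbufs, and rte_mempool_dump() also prints the per-lcore cache counts:

    #include <stdio.h>
    #include <rte_mempool.h>
    #include <rte_mbuf.h>

    static void
    dump_mbuf_pool_state(void)
    {
            /* "mbuf_pool" is whatever name your application passed to
             * rte_pktmbuf_pool_create(); adjust as needed. */
            struct rte_mempool *mp = rte_mempool_lookup("mbuf_pool");

            if (mp == NULL)
                    return;

            /* Quick split: mbufs free in the pool vs. outstanding
             * (held in NIC rings, the application, or lcore caches). */
            printf("avail=%u in_use=%u\n",
                   rte_mempool_avail_count(mp),
                   rte_mempool_in_use_count(mp));

            /* Full dump: pool size, per-lcore cache counts, and, if
             * mempool statistics were enabled at build time, the
             * alloc/free counters. */
            rte_mempool_dump(stdout, mp);
    }

And a sketch of the sizing formula above turned into code; the counts here
are made-up placeholders, the point is only that the pool must cover RX
descriptors, in-flight TX descriptors, per-lcore bursts and per-lcore
caches all at once:

    /* Fragment from an init path; requires rte_mbuf.h and rte_lcore.h.
     * Illustrative numbers only; use your real port/queue/descriptor
     * counts. */
    unsigned int nb_ports = 2, nb_rxq = 4, nb_txq = 4;
    unsigned int nb_rxd = 1024, nb_txd = 1024;
    unsigned int nb_lcores = 8, burst_size = 32, cache_size = 256;

    unsigned int nb_mbufs =
              nb_ports * nb_rxq * nb_rxd   /* sitting in RX rings        */
            + nb_ports * nb_txq * nb_txd   /* waiting for TX completion  */
            + nb_lcores * burst_size       /* held in per-lcore bursts   */
            + nb_lcores * cache_size;      /* parked in per-lcore caches */

    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool",
            nb_mbufs, cache_size, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());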