The API rte_pktmbuf_pool_create() https://doc.dpdk.org/api/rte__mbuf_8h.html#a593921f13307803b94bbb4e0932db962 at first glance seems minimal and complete. It's not. It's really not. The doc's link to `rte_mempool_create()` helps a little, but not much. It leaves at least the following important facets unanswered. I would appreciate answers from whoever knows how DPDK thinks here:

1. Does cache_size include or exclude data_room_size?
2. Does cache_size include or exclude sizeof(struct rte_mbuf)?
3. Does cache_size include or exclude RTE_PKTMBUF_HEADROOM?
4. What lcore is the allocated memory pinned to? The lcore of the caller when this method is run? The answer here is important. If it's the lcore of the caller at call time, this routine should be called in the lcore's entry point so the memory lands on the lcore where it is intended to be used. Calling it on the lcore that happens to be running main, for example, could have a bad side effect if that's different from where the memory will ultimately be used.
5. Which of the formal arguments represents the tail room indicated in https://doc.dpdk.org/guides/prog_guide/mbuf_lib.html#figure-mbuf1 ?

My answers, best I can tell, follow:

1. Excludes
2. Excludes
3. Excludes
4. The caller does not enter into this situation; see below
5. Unknown. Perhaps, if you want private data corresponding to the tail room in the diagram above, one has to call rte_mempool_create() instead and focus on private_data_size.

Discussion: Mempool creation is like malloc: you request the total number of absolute bytes required. The API will not add or remove bytes from the number you specify. Therefore the number you give must be inclusive of all needs: your payload, any DPDK overhead, headroom, tailroom, and so on. DPDK is not adding to the number you give for its own purposes. Clearer? Perhaps ... but what needs? Read on ...

Unlike malloc, rte_pktmbuf_pool_create() takes *n*, the number of objects that memory will hold. Therefore cache_size mod *n* should be 0 (if cache_size is indeed a byte count). Indeed, in some DPDK code, like that of https://github.com/erpc-io/eRPC, the author allocates one mempool per RXQ or TXQ, where the memory requested and the number of objects in that mempool are appropriate for exactly one RXQ or exactly one TXQ. Clearly, then, the total amount of memory in a specific mempool divided by the number of objects it's intended to hold should mod to zero. This raises the question: OK, if DPDK can do this, what lcore is the memory pinned or cached for? The caller's lcore? If so, it depends on which lcore one allocates the memory on, which can depend on where/when in the program's history the call is made. Note the API does not take an lcore argument.

*A careful reading, however, suggests the eRPC code is misguided.* DPDK does not support creating a mempool for usage on one lcore. The doc says the formal argument *cache_size* gives the *size of the per-core object cache; see rte_mempool_create() for details*. The doc also says *the optimum size (in terms of memory usage) for a mempool is when n is a power of two minus one: n = (2^q - 1)*, which also contradicts a mod-0 scenario.

Now, I'm a little, but not totally, surprised here. Yes, in some applications TXQs and RXQs are bouncing around uniform packets. So telling DPDK I need X bytes and letting DPDK do the spade work of breaking up the memory for efficient per-core access, ultimately per RXQ or per TXQ, is a real benefit. DPDK will give nice per-lcore caching and lockless memory access. Gotta like that. Right? No. I might not.
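Before getting to why, let me make my current reading concrete with a sketch. This assumes EAL is already initialized; the pool name, counts, and sizes are my own illustrative choices, and I'd welcome corrections if I'm reading the headroom/tailroom arithmetic wrong:

```c
#include <stdio.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Minimal sketch; call after rte_eal_init().  Name and sizes are illustrative. */
static struct rte_mempool *
make_pkt_pool(void)
{
    const unsigned int n          = 8191; /* objects, per the 2^q - 1 advice    */
    const unsigned int cache_size = 256;  /* objects per lcore cache, not bytes */

    /* data_room_size is per-mbuf buffer bytes and includes the headroom:
     * RTE_MBUF_DEFAULT_BUF_SIZE == RTE_MBUF_DEFAULT_DATAROOM (2048)
     *                              + RTE_PKTMBUF_HEADROOM (128 by default). */
    const uint16_t data_room = RTE_MBUF_DEFAULT_BUF_SIZE;

    /* The last argument is a NUMA socket, not an lcore; rte_socket_id() is
     * the socket of the calling thread, and SOCKET_ID_ANY is also accepted. */
    return rte_pktmbuf_pool_create("pkt_pool", n, cache_size,
                                   0 /* priv_size */, data_room,
                                   rte_socket_id());
}

/* Where the figure's "tail room" comes from, best I can tell: it is simply
 * the unused end of data_room_size on a freshly allocated mbuf, not a
 * separate formal argument. */
static void
show_rooms(struct rte_mempool *mp)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
    if (m == NULL)
        return;
    printf("headroom=%u tailroom=%u data_room=%u\n",
           (unsigned int)rte_pktmbuf_headroom(m),  /* == RTE_PKTMBUF_HEADROOM */
           (unsigned int)rte_pktmbuf_tailroom(m),  /* == data_room - headroom */
           (unsigned int)rte_pktmbuf_data_room_size(mp));
    rte_pktmbuf_free(m);
}
```

If that sketch is right, the answer to question 4 is that the memory is pinned to a NUMA socket rather than an lcore, and the per-core caching happens inside the pool regardless of which lcore made the create call.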
Why might I not like it? I might have half my TXQs and RXQs dealing with tiny mbufs/packets, and the other half dealing with completely different traffic of a completely different size and structure. So I might want memory pool allocation to be done on a smaller scale, e.g. per RXQ/TXQ/lcore. DPDK doesn't seem to permit this. DPDK seems to require me to allocate for the largest mbuf size the application could ever need and then, upon allocating an mbuf from the pool, *use as many or as few bytes of the allocation as needed for the particular purpose at hand*. If that's what DPDK wants, fine. But I think it ought to be a hell of a lot easier to see that that's so.
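For what it's worth, here is the sort of thing I would like to be able to write, sketched under my current understanding. The pool names, queue indices, descriptor counts, and sizes are my own illustrative assumptions, and I'm assuming the port has already been configured with rte_eth_dev_configure(). I note that rte_eth_rx_queue_setup() does take a mempool per queue, so maybe this is closer to supported than the mbuf docs make obvious:

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch only: two pools with different data_room_size, each feeding the RX
 * queues that carry that class of traffic.  Error handling and jumbo-frame
 * MTU/scatter settings are omitted for brevity. */
static int
setup_split_pools(uint16_t port_id, unsigned int socket)
{
    /* Pool for queues that only ever see small packets. */
    struct rte_mempool *small = rte_pktmbuf_pool_create("small_pool",
            8191, 256, 0, 512 + RTE_PKTMBUF_HEADROOM, socket);

    /* Pool for queues carrying large frames. */
    struct rte_mempool *big = rte_pktmbuf_pool_create("big_pool",
            4095, 256, 0, 9000 + RTE_PKTMBUF_HEADROOM, socket);

    if (small == NULL || big == NULL)
        return -1;

    /* Each RX queue gets its own mempool; the per-lcore object cache inside
     * each pool is still per-core, not per-queue. */
    if (rte_eth_rx_queue_setup(port_id, 0, 1024, socket, NULL, small) < 0)
        return -1;
    if (rte_eth_rx_queue_setup(port_id, 1, 1024, socket, NULL, big) < 0)
        return -1;
    return 0;
}
```

If something like this is in fact the intended pattern, a worked example along these lines in the mbuf or mempool guide would have saved me the guesswork.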