From: Jerin Jacob
Date: Wed, 30 Oct 2019 20:24:09 +0530
To: Olivier Matz
Cc: Andrew Rybchenko, Vamsi Krishna Attunuru, dev@dpdk.org, Anatoly Burakov, Ferruh Yigit, "Giridharan, Ganesan", Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger, Thomas Monjalon
In-Reply-To: <20191030143317.hean7zsyyde2g2bf@platinum>
Subject: Re: [dpdk-dev] [EXT] [PATCH 5/5] mempool: prevent objects from being across pages

On Wed, Oct 30, 2019 at 8:03 PM Olivier Matz wrote:
>
> Hi Jerin,
>
> On Wed, Oct 30, 2019 at 02:08:40PM +0530, Jerin Jacob wrote:
> > On Wed, Oct 30, 2019 at 1:16 PM Andrew Rybchenko wrote:
> > >
> > > >>  int
> > > >>  rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
> > > >>                  void *vaddr, rte_iova_t iova, size_t len,
> > > >>                  rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> > > >>  {
> > > >> -       size_t total_elt_sz;
> > > >> +       char *va = vaddr;
> > > >> +       size_t total_elt_sz, pg_sz;
> > > >>         size_t off;
> > > >>         unsigned int i;
> > > >>         void *obj;
> > > >>
> > > >> +       rte_mempool_get_page_size(mp, &pg_sz);
> > > >> +
> > > >>         total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > > >>
> > > >> -       for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> > > >> +       for (off = 0, i = 0; i < max_objs; i++) {
> > > >> +               /* align offset to next page start if required */
> > > >> +               if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> > > >> +                       off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> > > >
> > > > Moving the offset to the start of the next page and then freeing
> > > > (vaddr + off + header_size) to the pool does not fit the octeontx2
> > > > mempool's buffer alignment requirement (the buffer address needs to
> > > > be a multiple of the buffer size).
> > >
> > > It sounds like the octeontx2 mempool should have its own populate
> > > callback which takes care of that.
> >
> > A driver-specific populate function is not a bad idea. The only
> > concerns would be:
> >
> > # We would need to duplicate rte_mempool_op_populate_default() and
> > rte_mempool_op_calc_mem_size_default().
> > # We would need to make sure that if someone changes
> > rte_mempool_op_populate_default() or
> > rte_mempool_op_calc_mem_size_default(), he/she updates the drivers too.
>
> Agree, we need to be careful. Hopefully we shouldn't change this code
> very often.
>
> I'm sending a v2 with a patch to the octeontx2 driver which --I hope--
> should solve the issue.

Thanks, Olivier. We will test it.

> > # I would like to add one more point here:
> > - The calculation of the object padding required for MEMPOOL_F_NO_SPREAD,
> > i.e. optimize_object_size(), is NOT generic: the get_gcd()-based logic
> > does not hold everywhere. The DDR controller defines the
> > address-to-DDR-channel "spread", and that depends on the SoC or
> > micro-architecture, so we need to take that into account as well.
>
> Could you give some details about how it should be optimized on your
> platforms, and what is the behavior today (advertised nb_rank and
> nb_channels)?

I will start a new thread on this CCing all arch maintainers.

> Thanks,
> Olivier
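[Editorial note: the page-boundary check at the heart of the patch quoted above can be sketched in isolation. `check_obj_bounds`, `align_ceil`, and `next_obj_offset` below are local reimplementations written for illustration, not the actual DPDK source; the real patch uses `RTE_PTR_ALIGN_CEIL` and operates on real pool memory.]

```c
#include <stddef.h>
#include <stdint.h>

/* Round addr up to the next multiple of align (align is a power of two). */
static uintptr_t align_ceil(uintptr_t addr, uintptr_t align)
{
	return (addr + align - 1) & ~(align - 1);
}

/* Return -1 if an object of elt_sz bytes placed at addr would cross a
 * pg_sz page boundary, 0 otherwise. Mirrors the check_obj_bounds() logic
 * the quoted diff relies on. */
static int check_obj_bounds(uintptr_t addr, size_t pg_sz, size_t elt_sz)
{
	if (pg_sz == 0)
		return 0;	/* no page size constraint */
	if (elt_sz > pg_sz)
		return -1;	/* cannot fit in a single page anyway */
	if (addr / pg_sz != (addr + elt_sz - 1) / pg_sz)
		return -1;	/* object would cross a page boundary */
	return 0;
}

/* Advance off so the next object does not straddle a page, as the loop in
 * rte_mempool_op_populate_default() does in the patch. */
static size_t next_obj_offset(uintptr_t va, size_t off, size_t pg_sz,
			      size_t elt_sz)
{
	if (check_obj_bounds(va + off, pg_sz, elt_sz) < 0)
		off += align_ceil(va + off, pg_sz) - (va + off);
	return off;
}
```

For example, with 4 KiB pages and 1000-byte objects, an object at offset 3500 would span offsets 3500..4499 and is therefore bumped to offset 4096, while an object at offset 3000 fits and is left alone.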
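[Editorial note: the octeontx2 constraint raised in the thread -- that a buffer's address must be a multiple of the buffer size -- can be expressed as an extra offset adjustment. The helper below is purely illustrative of that constraint and is not the octeontx2 driver's actual populate code.]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: advance off so that (va + off) is a multiple of
 * total_elt_sz, the alignment the octeontx2 hardware allocator is said to
 * require. A driver-specific populate callback would apply this in
 * addition to (or instead of) the page-boundary adjustment. */
static size_t align_obj_to_elt_size(uintptr_t va, size_t off,
				    size_t total_elt_sz)
{
	uintptr_t rem = (va + off) % total_elt_sz;

	if (rem != 0)
		off += total_elt_sz - rem;
	return off;
}
```

This illustrates why the generic "jump to the next page start" scheme conflicts with the hardware requirement: the next page start is generally not a multiple of the element size, so the driver needs its own adjustment.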
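[Editorial note: the MEMPOOL_F_NO_SPREAD discussion refers to the gcd-based padding used by the mempool library's optimize_object_size(). A simplified sketch of that scheme follows; it is an illustration of the idea Jerin calls non-generic, not a copy of the DPDK function, and the `ranks_channels` parameter stands in for nb_rank * nb_channels.]

```c
#include <stddef.h>

static size_t gcd(size_t a, size_t b)
{
	while (b != 0) {
		size_t t = b;
		b = a % b;
		a = t;
	}
	return a;
}

/* Grow the object size (expressed in cache lines) until it shares no
 * common factor > 1 with ranks_channels, so consecutive objects start on
 * different DDR channels. This assumes a simple interleaved address-to-
 * channel mapping -- which is exactly the assumption Jerin points out does
 * not hold on every SoC. */
static size_t spread_obj_size(size_t obj_size_cl, size_t ranks_channels)
{
	if (ranks_channels == 0)
		return obj_size_cl;
	while (gcd(obj_size_cl, ranks_channels) != 1)
		obj_size_cl++;
	return obj_size_cl;
}
```

For instance, a 4-cache-line object on a system with ranks_channels = 4 is padded to 5 cache lines, so successive objects rotate across channels; on a controller with a different address-to-channel hash, this padding buys nothing, which is the thread's point.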