Date: Wed, 30 Oct 2019 15:33:17 +0100
From: Olivier Matz
To: Jerin Jacob
Cc: Andrew Rybchenko, Vamsi Krishna Attunuru, dev@dpdk.org, Anatoly Burakov,
    Ferruh Yigit, "Giridharan, Ganesan", Jerin Jacob Kollanukkaran,
    Kiran Kumar Kokkilagadda, Stephen Hemminger, Thomas Monjalon
Message-ID: <20191030143317.hean7zsyyde2g2bf@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
    <20191028140122.9592-1-olivier.matz@6wind.com>
    <20191028140122.9592-6-olivier.matz@6wind.com>
    <2cbd66e8-7551-4eaf-6097-8ac60ea9b61e@solarflare.com>
Subject: Re: [dpdk-dev] [EXT] [PATCH 5/5] mempool: prevent objects from
    being across pages

Hi Jerin,

On Wed, Oct 30, 2019 at 02:08:40PM +0530, Jerin Jacob wrote:
> On Wed, Oct 30, 2019 at 1:16 PM Andrew Rybchenko
> wrote:
> >
> > >
> > >> int
> > >> rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int
> > >> max_objs,
> > >>         void *vaddr, rte_iova_t iova, size_t len,
> > >>         rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
> > >> {
> > >> -   size_t total_elt_sz;
> > >> +   char *va = vaddr;
> > >> +   size_t total_elt_sz, pg_sz;
> > >>     size_t off;
> > >>     unsigned int i;
> > >>     void *obj;
> > >>
> > >> +   rte_mempool_get_page_size(mp, &pg_sz);
> > >> +
> > >>     total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > >>
> > >> -   for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> > >> +   for (off = 0, i = 0; i < max_objs; i++) {
> > >> +       /* align offset to next page start if required */
> > >> +       if (check_obj_bounds(va + off, pg_sz, total_elt_sz) < 0)
> > >> +           off += RTE_PTR_ALIGN_CEIL(va + off, pg_sz) - (va + off);
> > >
> > > Moving the offset to the start of the next page and then freeing
> > > (vaddr + off + header_size) to the pool does not match the octeontx2
> > > mempool's buffer alignment requirement (the buffer address needs to
> > > be a multiple of the buffer size).
> >
> > It sounds like the octeontx2 mempool should have its own populate
> > callback which cares about it.
>
> A driver-specific populate function is not a bad idea. The only
> concerns would be:
>
> # We need to duplicate rte_mempool_op_populate_default() and
> rte_mempool_op_calc_mem_size_default()
> # We need to make sure that if someone changes
> rte_mempool_op_populate_default() or
> rte_mempool_op_calc_mem_size_default(), he/she also updates the
> drivers

Agreed, we need to be careful. Hopefully we won't have to change this
code very often. I'm sending a v2 with a patch to the octeontx2 driver
which --I hope-- should solve the issue.

> # I would like to add one more point here:
> - The calculation of object padding for MEMPOOL_F_NO_SPREAD, i.e.
> optimize_object_size(), is NOT generic: the get_gcd() based logic does
> not hold everywhere. The DDR controller defines how addresses "spread"
> across DDR channels, and that mapping depends on the SoC or
> micro-architecture.
> So we need to consider that as well.

Could you give some details about how it should be optimized on your
platforms, and what the behavior is today (advertised nb_rank and
nb_channels)?

Thanks,
Olivier