Message-ID: <84723e01-f3f1-203c-55e3-bc73da9ff75c@intel.com>
Date: Thu, 18 May 2023 16:54:27 +0100
Subject: Re: [PATCH V3] lib: set/get max memzone segments
From: "Burakov, Anatoly"
To: Ophir Munk, Bruce Richardson, Devendra Singh Rawat, Alok Prasad
CC: Ophir Munk, Matan Azrad, "Thomas Monjalon", Lior Margalit
References: <20230425164009.2391632-1-ophirmu@nvidia.com> <20230503072641.474600-1-ophirmu@nvidia.com>
In-Reply-To: <20230503072641.474600-1-ophirmu@nvidia.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
List-Id: DPDK patches and discussions

Hi,

On 5/3/2023 8:26 AM, Ophir Munk wrote:
> In current DPDK the RTE_MAX_MEMZONE definition is unconditionally hard
> coded as 2560. For applications requiring different values of this
> parameter – it is more convenient to set the max value via an rte API -
> rather than changing the dpdk source code per application. In many
> organizations, the possibility to compile a private DPDK library for a
> particular application does not exist at all. With this option there is
> no need to recompile DPDK and it allows using an in-box packaged DPDK.
> An example usage for updating the RTE_MAX_MEMZONE would be of an
> application that uses the DPDK mempool library which is based on DPDK
> memzone library. The application may need to create a number of
> steering tables, each of which will require its own mempool allocation.
> This commit is not about how to optimize the application usage of
> mempool nor about how to improve the mempool implementation based on
> memzone. It is about how to make the max memzone definition - run-time
> customized.
> This commit adds an API which must be called before rte_eal_init():
> rte_memzone_max_set(int max). If not called, the default memzone
> (RTE_MAX_MEMZONE) is used. There is also an API to query the effective
> max memzone: rte_memzone_max_get().

Commit message could use a little rewording and shortening. Suggested:

---
Currently, the RTE_MAX_MEMZONE constant is used to decide how many
memzones a DPDK application can have. This value could technically be
changed by manually editing `rte_config.h` before compilation, but if
DPDK is already compiled, that option is not useful. There are certain
use cases that would benefit from making this value configurable.

This commit addresses the issue by adding a new API to set the maximum
number of memzones before EAL initialization (while using the old
constant as the default value), as well as an API to get the current
maximum number of memzones.
---

What do you think?

> /* Array of memzone pointers */
> -static const struct rte_memzone *ecore_mz_mapping[RTE_MAX_MEMZONE];
> +static const struct rte_memzone **ecore_mz_mapping;
> /* Counter to track current memzone allocated */
> static uint16_t ecore_mz_count;
>
> +int ecore_mz_mapping_alloc(void)
> +{
> +	ecore_mz_mapping = rte_zmalloc("ecore_mz_map",
> +		rte_memzone_max_get() * sizeof(struct rte_memzone *), 0);

Doesn't this need to check if it's already allocated? Does it need any
special secondary process handling?

> +
> +	if (!ecore_mz_mapping)
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +
> +void ecore_mz_mapping_free(void)
> +{
> +	rte_free(ecore_mz_mapping);

Shouldn't this at least set the pointer to NULL to avoid a double-free?
> +#define RTE_DEFAULT_MAX_MEMZONE 2560
> +
> +static size_t memzone_max = RTE_DEFAULT_MAX_MEMZONE;
> +
>  static inline const struct rte_memzone *
>  memzone_lookup_thread_unsafe(const char *name)
>  {
> @@ -81,8 +85,9 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
>  	/* no more room in config */
>  	if (arr->count >= arr->len) {
>  		RTE_LOG(ERR, EAL,
> -			"%s(): Number of requested memzone segments exceeds RTE_MAX_MEMZONE\n",
> -			__func__);
> +			"%s(): Number of requested memzone segments exceeds max "
> +			"memzone segments (%d >= %d)\n",

I think the "segments" terminology can be dropped, as it is a holdover
from the times when memzones were not allocated by malloc. The message
can just say "Number of requested memzones exceeds maximum number of
memzones".

> +			__func__, arr->count, arr->len);
>  		rte_errno = ENOSPC;
>  		return NULL;
>  	}
> @@ -396,7 +401,7 @@ rte_eal_memzone_init(void)
>
>  	if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
>  			rte_fbarray_init(&mcfg->memzones, "memzone",
> -			RTE_MAX_MEMZONE, sizeof(struct rte_memzone))) {
> +			rte_memzone_max_get(), sizeof(struct rte_memzone))) {
>  		RTE_LOG(ERR, EAL, "Cannot allocate memzone list\n");
>  		ret = -1;
>  	} else if (rte_eal_process_type() == RTE_PROC_SECONDARY &&
> @@ -430,3 +435,20 @@ void rte_memzone_walk(void (*func)(const struct rte_memzone *, void *),
>  	}
>  	rte_rwlock_read_unlock(&mcfg->mlock);
>  }
> +
> +int
> +rte_memzone_max_set(size_t max)
> +{
> +	/* Setting max memzone must occur before calling rte_eal_init() */
> +	if (eal_get_internal_configuration()->init_complete > 0)
> +		return -1;
> +
> +	memzone_max = max;
> +	return 0;
> +}
> +
> +size_t
> +rte_memzone_max_get(void)
> +{
> +	return memzone_max;
> +}

It seems that this is a local (static) value, which means it is not
shared between processes, and thus could potentially mismatch between
two different processes.
While this _technically_ would not be a problem, because secondary
process init will not actually use this value, the API will still
return incorrect information.

I suggest updating/syncing this value somewhere in
`eal_mcfg_update_internal()`/`eal_mcfg_update_from_internal()`, and
adding this value to the mem config.

An alternative would be to just return `mem_config->memzones.count`
(instead of the static value), and return -1 (or zero?) if init hasn't
yet completed.

-- 
Thanks,
Anatoly