From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Date: Fri, 13 May 2022 14:41:26 -0700
From: Stephen Hemminger <stephen@networkplumber.org>
To: Don Wallwork <donw@xsightlabs.com>
Cc: dev@dpdk.org, mb@smartsharesystems.com, anatoly.burakov@intel.com,
 dmitry.kozliuk@gmail.com, bruce.richardson@intel.com,
 Honnappa.Nagarahalli@arm.com, nd@arm.com, haiyue.wang@intel.com
Subject: Re: [PATCH v2] eal: allow worker lcore stacks to be allocated from
 hugepage memory
Message-ID: <20220513144126.19e37480@hermes.local>
In-Reply-To: <20220513175822.69905-1-donw@xsightlabs.com>
References: <20220502141058.12707-1-donw@xsightlabs.com>
 <20220513175822.69905-1-donw@xsightlabs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Fri, 13 May 2022 13:58:22 -0400
Don Wallwork <donw@xsightlabs.com> wrote:

> +		if (internal_conf->huge_worker_stack_size == 0) {
> +			ret = pthread_create(&lcore_config[i].thread_id, NULL,
> +					     eal_thread_loop,
> +					     (void *)(uintptr_t)i);
> +		} else {
> +			/* Allocate NUMA aware stack memory and set
> +			 * pthread attributes
> +			 */
> +			pthread_attr_t attr;
> +			size_t stack_size;
> +			void *stack_ptr;
> +
> +			if (pthread_attr_init(&attr) != 0) {
> +				rte_eal_init_alert("Cannot init pthread "
> +						   "attributes");
> +				rte_errno = EFAULT;
> +				return -1;
> +			}
> +			if (internal_conf->huge_worker_stack_size ==
> +			    USE_OS_STACK_SIZE) {
> +				if (pthread_attr_getstacksize(&attr,
> +							      &stack_size) != 0) {
> +					rte_errno = EFAULT;
> +					return -1;
> +				}
> +			} else {
> +				stack_size =
> +					internal_conf->huge_worker_stack_size;
> +			}
> +			stack_ptr =
> +				rte_zmalloc_socket("lcore_stack",
> +						   stack_size,
> +						   stack_size,
> +						   rte_lcore_to_socket_id(i));
> +
> +			if (stack_ptr == NULL) {
> +				rte_eal_init_alert("Cannot allocate stack "
> +						   "memory for worker lcore");
> +				rte_errno = ENOMEM;
> +				return -1;
> +			}
> +
> +			if (pthread_attr_setstack(&attr,
> +						  stack_ptr,
> +						  stack_size) != 0) {
> +				rte_eal_init_alert("Cannot set pthread "
> +						   "stack attributes");
> +				rte_errno = EFAULT;
> +				return -1;
> +			}
> +
> +			/* create a thread for each lcore */
> +			ret = pthread_create(&lcore_config[i].thread_id, &attr,
> +					     eal_thread_loop,
> +					     (void *)(uintptr_t)i);
> +
> +			if (pthread_attr_destroy(&attr) != 0) {
> +				rte_eal_init_alert("Cannot destroy pthread "
> +						   "attributes");
> +				rte_errno = EFAULT;
> +				return -1;
> +			}

The indentation is getting pretty deep here, which to me indicates this is
a good place to split the hugepage stack setup out into a helper function.
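
Something roughly like the untested sketch below, maybe. The helper name is
just made up for illustration, and it leans on the same internals the quoted
hunk already uses (internal_conf via eal_get_internal_configuration(),
lcore_config, eal_thread_loop, USE_OS_STACK_SIZE):

static int
eal_worker_thread_create(unsigned int lcore_id)
{
	const struct internal_config *internal_conf =
		eal_get_internal_configuration();
	pthread_attr_t attr;
	size_t stack_size;
	void *stack_ptr;
	int ret = -1;

	if (pthread_attr_init(&attr) != 0) {
		rte_eal_init_alert("Cannot init pthread attributes");
		rte_errno = EFAULT;
		return -1;
	}

	if (internal_conf->huge_worker_stack_size == USE_OS_STACK_SIZE) {
		/* keep the OS default size, only move the stack to hugepages */
		if (pthread_attr_getstacksize(&attr, &stack_size) != 0) {
			rte_eal_init_alert("Cannot get pthread stack size");
			rte_errno = EFAULT;
			goto out;
		}
	} else {
		stack_size = internal_conf->huge_worker_stack_size;
	}

	/* NUMA aware stack memory on the lcore's own socket */
	stack_ptr = rte_zmalloc_socket("lcore_stack", stack_size, stack_size,
				       rte_lcore_to_socket_id(lcore_id));
	if (stack_ptr == NULL) {
		rte_eal_init_alert("Cannot allocate stack memory for worker lcore");
		rte_errno = ENOMEM;
		goto out;
	}

	if (pthread_attr_setstack(&attr, stack_ptr, stack_size) != 0) {
		rte_eal_init_alert("Cannot set pthread stack attributes");
		rte_errno = EFAULT;
		goto out;
	}

	if (pthread_create(&lcore_config[lcore_id].thread_id, &attr,
			   eal_thread_loop, (void *)(uintptr_t)lcore_id) == 0)
		ret = 0;
out:
	pthread_attr_destroy(&attr);
	return ret;
}

Then the launch loop collapses to roughly this, with the caller checking
ret the same way it already does for the non-hugepage path:

		if (internal_conf->huge_worker_stack_size == 0)
			ret = pthread_create(&lcore_config[i].thread_id, NULL,
					     eal_thread_loop,
					     (void *)(uintptr_t)i);
		else
			ret = eal_worker_thread_create(i);

As a side effect, funneling the error paths through the single
pthread_attr_destroy() at the end also cleans up the attr object in the
early-exit cases.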