From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Burakov, Anatoly"
To: Alejandro Lucero, dev@dpdk.org
Date: Thu, 1 Nov 2018 10:12:56 +0000
Message-ID: <6d53242c-7fd3-20ef-27a5-5a26cd1e05a7@intel.com>
In-Reply-To: <20181031172931.11894-5-alejandro.lucero@netronome.com>
References: <20181031172931.11894-1-alejandro.lucero@netronome.com>
 <20181031172931.11894-5-alejandro.lucero@netronome.com>
Subject: Re: [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check

On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> Calling rte_mem_check_dma_mask when memory has not been initialized
> yet is wrong. This patch uses rte_mem_set_dma_mask instead.
>
> Once memory initialization is done, the DMA mask that was set will be
> used for checking that mapped memory is within the specified mask.
>
> Fixes: fe822eb8c565 ("bus/pci: use IOVA DMA mask check when setting IOVA mode")
>
> Signed-off-by: Alejandro Lucero
> ---
>  drivers/bus/pci/linux/pci.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
> index 0a81e063b..d87384c72 100644
> --- a/drivers/bus/pci/linux/pci.c
> +++ b/drivers/bus/pci/linux/pci.c
> @@ -590,7 +590,16 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev)
>
>  	mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
>
> -	return rte_mem_check_dma_mask(mgaw) == 0 ? true : false;
> +	/*
> +	 * Assume there is no limitation for now. We cannot know at this
> +	 * point, because memory has not been initialized yet. Setting the
> +	 * DMA mask will force a check once memory initialization is done.
> +	 * We cannot fall back to IOVA PA now, but if the DMA check fails,
> +	 * the error message should advise using '--iova-mode pa' if IOVA
> +	 * VA is the current mode.
> +	 */
> +	rte_mem_set_dma_mask(mgaw);
> +	return true;

This is IMO a good solution to the circular dependency between setting the
DMA mask and memory initialization. At least I can't think of a better one :)

Acked-by: Anatoly Burakov

>  }
>  #elif defined(RTE_ARCH_PPC_64)
>  static bool

-- 
Thanks,
Anatoly