From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from mga14.intel.com (mga14.intel.com [192.55.52.115])
 by dpdk.org (Postfix) with ESMTP id 2FADA8E5A
 for <dev@dpdk.org>; Sun, 6 Sep 2015 09:13:21 +0200 (CEST)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by fmsmga103.fm.intel.com with ESMTP; 06 Sep 2015 00:13:20 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.17,478,1437462000"; d="scan'208";a="799340518"
Received: from shvmail01.sh.intel.com ([10.239.29.42])
 by fmsmga002.fm.intel.com with ESMTP; 06 Sep 2015 00:13:20 -0700
Received: from shecgisg004.sh.intel.com (shecgisg004.sh.intel.com [10.239.29.89])
 by shvmail01.sh.intel.com with ESMTP id t867DIv5019634;
 Sun, 6 Sep 2015 15:13:18 +0800
Received: from shecgisg004.sh.intel.com (localhost [127.0.0.1])
 by shecgisg004.sh.intel.com (8.13.6/8.13.6/SuSE Linux 0.8) with ESMTP
 id t867DGoV026449; Sun, 6 Sep 2015 15:13:18 +0800
Received: (from wujingji@localhost)
 by shecgisg004.sh.intel.com (8.13.6/8.13.6/Submit) id t867DFCk026445;
 Sun, 6 Sep 2015 15:13:15 +0800
From: Jingjing Wu <jingjing.wu@intel.com>
To: dev@dpdk.org
Date: Sun, 6 Sep 2015 15:11:44 +0800
Message-Id: <1441523526-26202-31-git-send-email-jingjing.wu@intel.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1441523526-26202-1-git-send-email-jingjing.wu@intel.com>
References: <1441523526-26202-1-git-send-email-jingjing.wu@intel.com>
Subject: [dpdk-dev] [PATCH 30/52] i40e/base: Add support for pre-allocated
 pages for pd
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
X-List-Received-Date: Sun, 06 Sep 2015 07:13:22 -0000

The i40e_add_pd_table_entry() routine is modified to handle both the
case where a backing page is passed by the caller and the case where
the backing page is allocated inside i40e_add_pd_table_entry() itself.
For PBLE resource management it is more efficient for the PBLE manager
to own its backing pages; for a VF, the PBLE backing page addresses
will be sent to the PF driver for the PBLE resource.

The i40e_remove_pd_bp() routine is also modified so that it does not
free pre-allocated pages and frees only the pages that were allocated
in i40e_add_pd_table_entry().

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 drivers/net/i40e/base/i40e_hmc.c     | 30 ++++++++++++++++++++----------
 drivers/net/i40e/base/i40e_hmc.h     |  4 +++-
 drivers/net/i40e/base/i40e_lan_hmc.c |  2 +-
 3 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/net/i40e/base/i40e_hmc.c b/drivers/net/i40e/base/i40e_hmc.c
index 6ddf8b3..7e2362d 100644
--- a/drivers/net/i40e/base/i40e_hmc.c
+++ b/drivers/net/i40e/base/i40e_hmc.c
@@ -127,6 +127,7 @@ exit:
  * @hw: pointer to our HW structure
  * @hmc_info: pointer to the HMC configuration information structure
  * @pd_index: which page descriptor index to manipulate
+ * @rsrc_pg: if not NULL, use preallocated page instead of allocating new one.
  *
  * This function:
 *	1. Initializes the pd entry
@@ -140,12 +141,14 @@ exit:
  **/
 enum i40e_status_code i40e_add_pd_table_entry(struct i40e_hw *hw,
 					      struct i40e_hmc_info *hmc_info,
-					      u32 pd_index)
+					      u32 pd_index,
+					      struct i40e_dma_mem *rsrc_pg)
 {
 	enum i40e_status_code ret_code = I40E_SUCCESS;
 	struct i40e_hmc_pd_table *pd_table;
 	struct i40e_hmc_pd_entry *pd_entry;
 	struct i40e_dma_mem mem;
+	struct i40e_dma_mem *page = &mem;
 	u32 sd_idx, rel_pd_idx;
 	u64 *pd_addr;
 	u64 page_desc;
@@ -166,19 +169,25 @@ enum i40e_status_code i40e_add_pd_table_entry(struct i40e_hw *hw,
 	pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table;
 	pd_entry = &pd_table->pd_entry[rel_pd_idx];
 	if (!pd_entry->valid) {
-		/* allocate a 4K backing page */
-		ret_code = i40e_allocate_dma_mem(hw, &mem, i40e_mem_bp,
-						 I40E_HMC_PAGED_BP_SIZE,
-						 I40E_HMC_PD_BP_BUF_ALIGNMENT);
-		if (ret_code)
-			goto exit;
+		if (rsrc_pg) {
+			pd_entry->rsrc_pg = true;
+			page = rsrc_pg;
+		} else {
+			/* allocate a 4K backing page */
+			ret_code = i40e_allocate_dma_mem(hw, page, i40e_mem_bp,
+						I40E_HMC_PAGED_BP_SIZE,
+						I40E_HMC_PD_BP_BUF_ALIGNMENT);
+			if (ret_code)
+				goto exit;
+			pd_entry->rsrc_pg = false;
+		}
 
-		i40e_memcpy(&pd_entry->bp.addr, &mem,
+		i40e_memcpy(&pd_entry->bp.addr, page,
 			    sizeof(struct i40e_dma_mem),
 			    I40E_NONDMA_TO_NONDMA);
 		pd_entry->bp.sd_pd_index = pd_index;
 		pd_entry->bp.entry_type = I40E_SD_TYPE_PAGED;
 		/* Set page address and valid bit */
-		page_desc = mem.pa | 0x1;
+		page_desc = page->pa | 0x1;
 		pd_addr = (u64 *)pd_table->pd_page_addr.va;
 		pd_addr += rel_pd_idx;
@@ -253,7 +262,8 @@ enum i40e_status_code i40e_remove_pd_bp(struct i40e_hw *hw,
 	I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, idx);
 
 	/* free memory here */
-	ret_code = i40e_free_dma_mem(hw, &(pd_entry->bp.addr));
+	if (!pd_entry->rsrc_pg)
+		ret_code = i40e_free_dma_mem(hw, &(pd_entry->bp.addr));
 	if (I40E_SUCCESS != ret_code)
 		goto exit;
 	if (!pd_table->ref_cnt)
diff --git a/drivers/net/i40e/base/i40e_hmc.h b/drivers/net/i40e/base/i40e_hmc.h
index c2cdc92..343b251 100644
--- a/drivers/net/i40e/base/i40e_hmc.h
+++ b/drivers/net/i40e/base/i40e_hmc.h
@@ -69,6 +69,7 @@ struct i40e_hmc_bp {
 
 struct i40e_hmc_pd_entry {
 	struct i40e_hmc_bp bp;
 	u32 sd_index;
+	bool rsrc_pg;
 	bool valid;
 };
@@ -225,7 +226,8 @@ enum i40e_status_code i40e_add_sd_table_entry(struct i40e_hw *hw,
 
 enum i40e_status_code i40e_add_pd_table_entry(struct i40e_hw *hw,
 					      struct i40e_hmc_info *hmc_info,
-					      u32 pd_index);
+					      u32 pd_index,
+					      struct i40e_dma_mem *rsrc_pg);
 enum i40e_status_code i40e_remove_pd_bp(struct i40e_hw *hw,
 					struct i40e_hmc_info *hmc_info,
 					u32 idx);
diff --git a/drivers/net/i40e/base/i40e_lan_hmc.c b/drivers/net/i40e/base/i40e_lan_hmc.c
index 1533a62..4eab959 100644
--- a/drivers/net/i40e/base/i40e_lan_hmc.c
+++ b/drivers/net/i40e/base/i40e_lan_hmc.c
@@ -394,7 +394,7 @@ enum i40e_status_code i40e_create_lan_hmc_object(struct i40e_hw *hw,
 			/* update the pd table entry */
 			ret_code = i40e_add_pd_table_entry(hw,
							   info->hmc_info,
-							   i);
+							   i, NULL);
 			if (I40E_SUCCESS != ret_code) {
 				pd_error = true;
 				break;
-- 
2.4.0
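
For context, a minimal caller-side sketch of how the new rsrc_pg
parameter is meant to be used (hypothetical code, not part of the
patch; hw, hmc_info and pd_idx are assumed to be set up by the
surrounding HMC code):

	struct i40e_dma_mem pble_pg;
	enum i40e_status_code ret;

	/* The caller pre-allocates the backing page and keeps
	 * ownership of it; the same allocator the routine would use
	 * internally is assumed here.
	 */
	ret = i40e_allocate_dma_mem(hw, &pble_pg, i40e_mem_bp,
				    I40E_HMC_PAGED_BP_SIZE,
				    I40E_HMC_PD_BP_BUF_ALIGNMENT);
	if (ret != I40E_SUCCESS)
		return ret;

	/* Non-NULL rsrc_pg: the entry is marked rsrc_pg = true, so a
	 * later i40e_remove_pd_bp() will not free this page; the
	 * caller (e.g. the PBLE manager) frees it itself.
	 */
	ret = i40e_add_pd_table_entry(hw, hmc_info, pd_idx, &pble_pg);

	/* NULL rsrc_pg keeps the old behaviour: the routine allocates
	 * the 4K backing page and i40e_remove_pd_bp() frees it.
	 */
	ret = i40e_add_pd_table_entry(hw, hmc_info, pd_idx + 1, NULL);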