From: Wenjing Qiao <wenjing.qiao@intel.com>
To: jingjing.wu@intel.com,
	beilei.xing@intel.com,
	qi.z.zhang@intel.com
Cc: dev@dpdk.org, Wenjing Qiao <wenjing.qiao@intel.com>,
 Christopher Pau <christopher.pau@intel.com>
Subject: [PATCH 11/18] common/idpf: allocate static buffer at initialization
Date: Thu, 13 Apr 2023 05:44:55 -0400
Message-Id: <20230413094502.1714755-12-wenjing.qiao@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230413094502.1714755-1-wenjing.qiao@intel.com>
References: <20230413094502.1714755-1-wenjing.qiao@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Some OSs do not allow allocating DMA memory at runtime. Allocate a
static send buffer once in idpf_init_hw(), reuse it in
idpf_send_msg_to_cp() to hold the outgoing mailbox message instead of
allocating and freeing DMA memory for every send, and release it in
idpf_deinit_hw().

Signed-off-by: Christopher Pau <christopher.pau@intel.com>
Signed-off-by: Wenjing Qiao <wenjing.qiao@intel.com>
---
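Note for reviewers: the sketch below illustrates the allocate-once
pattern this patch switches to. It is a stand-alone example with
hypothetical names (struct dma_buf, example_init/send/deinit) that uses
plain malloc()/free() in place of the idpf DMA helpers; it is not the
driver code itself.

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a DMA buffer descriptor. */
struct dma_buf {
	void *va;     /* virtual address of the buffer */
	size_t size;  /* buffer size in bytes */
};

static struct dma_buf send_buf;	/* allocated once at init time */

/* Init path: allocate the shared send buffer a single time. */
static int example_init(size_t size)
{
	send_buf.va = malloc(size);	/* stands in for idpf_alloc_dma_mem() */
	if (!send_buf.va)
		return -ENOMEM;
	send_buf.size = size;
	return 0;
}

/* Send path: reuse the pre-allocated buffer, no per-call allocation. */
static int example_send(const uint8_t *msg, size_t len)
{
	if (len > send_buf.size)
		return -EINVAL;
	memcpy(send_buf.va, msg, len);
	/* ... hand send_buf to the control queue here ... */
	return 0;
}

/* Deinit path: free the buffer exactly once. */
static void example_deinit(void)
{
	free(send_buf.va);
	send_buf.va = NULL;
}

int main(void)
{
	const uint8_t msg[] = "hello";

	if (example_init(4096))
		return 1;
	example_send(msg, sizeof(msg));
	example_deinit();
	return 0;
}

The trade-off is a fixed-size buffer held for the lifetime of the
device in exchange for working on OSs that cannot allocate DMA memory
after initialization.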
 drivers/common/idpf/base/idpf_common.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index de82c3458f..f4a5707272 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -6,6 +6,7 @@
 #include "idpf_prototype.h"
 #include "virtchnl.h"
 
+struct idpf_dma_mem send_dma_mem = { 0 };
 
 /**
  * idpf_set_mac_type - Sets MAC type
@@ -132,6 +133,15 @@ int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
 
 	idpf_free(hw, q_info);
 
+	/*
+	 * Need an initial static buffer to copy DMA memory to send
+	 * for drivers that do not allow this allocation at runtime
+	 */
+	send_dma_mem.va = (struct idpf_dma_mem *)
+		idpf_alloc_dma_mem(hw, &send_dma_mem, 4096);
+	if (!send_dma_mem.va)
+		return -ENOMEM;
+
 	return 0;
 }
 
@@ -152,7 +162,6 @@ int idpf_send_msg_to_cp(struct idpf_hw *hw, int v_opcode,
 			int v_retval, u8 *msg, u16 msglen)
 {
 	struct idpf_ctlq_msg ctlq_msg = { 0 };
-	struct idpf_dma_mem dma_mem = { 0 };
 	int status;
 
 	ctlq_msg.opcode = idpf_mbq_opc_send_msg_to_pf;
@@ -162,19 +171,11 @@ int idpf_send_msg_to_cp(struct idpf_hw *hw, int v_opcode,
 	ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
 
 	if (msglen > 0) {
-		dma_mem.va = (struct idpf_dma_mem *)
-			  idpf_alloc_dma_mem(hw, &dma_mem, msglen);
-		if (!dma_mem.va)
-			return -ENOMEM;
-
-		idpf_memcpy(dma_mem.va, msg, msglen, IDPF_NONDMA_TO_DMA);
-		ctlq_msg.ctx.indirect.payload = &dma_mem;
+		idpf_memcpy(send_dma_mem.va, msg, msglen, IDPF_NONDMA_TO_DMA);
+		ctlq_msg.ctx.indirect.payload = &send_dma_mem;
 	}
 	status = idpf_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
 
-	if (dma_mem.va)
-		idpf_free_dma_mem(hw, &dma_mem);
-
 	return status;
 }
 
@@ -262,6 +263,9 @@ int idpf_clean_arq_element(struct idpf_hw *hw,
  */
 int idpf_deinit_hw(struct idpf_hw *hw)
 {
+	if (send_dma_mem.va)
+		idpf_free_dma_mem(hw, &send_dma_mem);
+
 	hw->asq = NULL;
 	hw->arq = NULL;
 
-- 
2.25.1