Date: Wed, 11 Dec 2024 08:10:38 -0800
From: Stephen Hemminger
To: Junlong Wang
Cc: ferruh.yigit@amd.com, dev@dpdk.org
Subject: Re: [PATCH v2 01/15] net/zxdh: zxdh np init implementation
Message-ID: <20241211081038.1008e37a@hermes.local>
In-Reply-To: <20241210055333.782901-2-wang.junlong1@zte.com.cn>
References: <20241206055715.506961-2-wang.junlong1@zte.com.cn> <20241210055333.782901-1-wang.junlong1@zte.com.cn> <20241210055333.782901-2-wang.junlong1@zte.com.cn>

On Tue, 10 Dec 2024 13:53:19 +0800
Junlong Wang wrote:

> (np)network Processor initialize resources in host,
> and initialize a channel for some tables insert/get/del.
>
> Signed-off-by: Junlong Wang

This mostly looks good, just some small stuff.

> ---
>  drivers/net/zxdh/meson.build   |   1 +
>  drivers/net/zxdh/zxdh_ethdev.c | 238 ++++++++++++++++++++--
>  drivers/net/zxdh/zxdh_ethdev.h |  27 +++
>  drivers/net/zxdh/zxdh_msg.c    |  45 +++++
>  drivers/net/zxdh/zxdh_msg.h    |  37 ++++
>  drivers/net/zxdh/zxdh_np.c     | 347 +++++++++++++++++++++++++++++++++
>  drivers/net/zxdh/zxdh_np.h     | 198 +++++++++++++++++++
>  drivers/net/zxdh/zxdh_pci.c    |   2 +-
>  drivers/net/zxdh/zxdh_pci.h    |   6 +-
>  drivers/net/zxdh/zxdh_queue.c  |   2 +-
>  drivers/net/zxdh/zxdh_queue.h  |  14 +-
>  11 files changed, 884 insertions(+), 33 deletions(-)
>  create mode 100644 drivers/net/zxdh/zxdh_np.c
>  create mode 100644 drivers/net/zxdh/zxdh_np.h
>
> diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
> index c9960f4c73..ab24a3145c 100644
> --- a/drivers/net/zxdh/meson.build
> +++ b/drivers/net/zxdh/meson.build
> @@ -19,4 +19,5 @@ sources = files(
>         'zxdh_msg.c',
>         'zxdh_pci.c',
>         'zxdh_queue.c',
> +       'zxdh_np.c',
> )
> diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
> index c786198535..c54d1f6669 100644
> --- a/drivers/net/zxdh/zxdh_ethdev.c
> +++ b/drivers/net/zxdh/zxdh_ethdev.c
> @@ -5,6 +5,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include "zxdh_ethdev.h"
>  #include "zxdh_logs.h"
> @@ -12,8 +13,15 @@
>  #include "zxdh_msg.h"
>  #include "zxdh_common.h"
>  #include "zxdh_queue.h"
> +#include "zxdh_np.h"
>
>  struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];

If you want to support primary/secondary in future, variables in BSS
are not shared between primary and secondary process

> +struct zxdh_shared_data *zxdh_shared_data;
> +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data";
> +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
> +struct zxdh_dtb_shared_data g_dtb_data;

The shared data will be a problem if you support multiple devices.
Or is this really a singleton device with only one bus and slot.
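If it is not really a singleton, one way out is to keep the DTB state in the
per-port private data rather than in a file-scope global. Rough, untested
sketch only -- the dtb_data member of struct zxdh_hw is invented here,
everything else is taken from the patch:

    /* zxdh_ethdev.h: hypothetical per-port member instead of g_dtb_data */
    struct zxdh_dtb_shared_data dtb_data;   /* add inside struct zxdh_hw */

    /* zxdh_ethdev.c */
    static int
    zxdh_np_dtb_res_init(struct rte_eth_dev *dev)
    {
            struct zxdh_hw *hw = dev->data->dev_private;
            struct zxdh_dtb_shared_data *dtb = &hw->dtb_data;

            if (dtb->init_done) {
                    PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init",
                                dev->device->name);
                    return 0;
            }
            dtb->queueid = ZXDH_INVALID_DTBQUE;
            dtb->bind_device = dev;
            dtb->init_done = 1;
            /* ... rest of the dpp_ctrl setup as in the patch, operating on
             * dtb instead of g_dtb_data ... */
            return 0;
    }

Then a second device in another slot gets its own bookkeeping and the
dev_refcnt juggling goes away.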
> + > +#define ZXDH_INVALID_DTBQUE 0xFFFF > > uint16_t > zxdh_vport_to_vfid(union zxdh_virport_num v) > > +static int > +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + struct zxdh_bar_offset_params param = {0}; > + struct zxdh_bar_offset_res res = {0}; > + int ret = 0; > + > + if (g_dtb_data.init_done) { > + PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init", > + dev->device->name); > + return 0; > + } > + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; > + g_dtb_data.bind_device = dev; > + g_dtb_data.dev_refcnt++; > + g_dtb_data.init_done = 1; > + > + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_malloc(NULL, sizeof(*dpp_ctrl) + > + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); > + > + if (dpp_ctrl == NULL) { > + PMD_DRV_LOG(ERR, "dev %s annot allocate memory for dpp_ctrl", dev->device->name); > + ret = -ENOMEM; > + goto free_res; > + } > + memset(dpp_ctrl, 0, sizeof(*dpp_ctrl) + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256); You could use rte_zmalloc() and avoid having to do memset. > + > + dpp_ctrl->queue_id = 0xff; > + dpp_ctrl->vport = hw->vport.vport; > + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; > + strcpy((char *)dpp_ctrl->port_name, dev->device->name); Why the cast, port_name is already character. Should use strlcpy() incase device name is bigger than port_name. > + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; > + > + param.pcie_id = hw->pcie_id; > + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; > + param.type = ZXDH_URI_NP; > + > + ret = zxdh_get_bar_offset(¶m, &res); > + if (ret) { > + PMD_DRV_LOG(ERR, "dev %s get npbar offset failed", dev->device->name); > + goto free_res; > + } > + dpp_ctrl->np_bar_len = res.bar_length; > + dpp_ctrl->np_bar_offset = res.bar_offset; > + > + if (!g_dtb_data.dtb_table_conf_mz) { > + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", > + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); > + > + if (conf_mz == NULL) { > + PMD_DRV_LOG(ERR, > + "dev %s annot allocate memory for dtb table conf", > + dev->device->name); > + ret = -ENOMEM; > + goto free_res; > + } > + dpp_ctrl->down_vir_addr = conf_mz->addr_64; > + dpp_ctrl->down_phy_addr = conf_mz->iova; > + g_dtb_data.dtb_table_conf_mz = conf_mz; > + } > + > + if (!g_dtb_data.dtb_table_dump_mz) { > + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", > + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); > + > + if (dump_mz == NULL) { > + PMD_DRV_LOG(ERR, > + "dev %s Cannot allocate memory for dtb table dump", > + dev->device->name); > + ret = -ENOMEM; > + goto free_res; > + } > + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; > + dpp_ctrl->dump_phy_addr = dump_mz->iova; > + g_dtb_data.dtb_table_dump_mz = dump_mz; > + } > + > + ret = zxdh_np_host_init(0, dpp_ctrl); > + if (ret) { > + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed .ret %d", dev->device->name, ret); > + goto free_res; > + } > + > + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok.dtb queue %d", > + dev->device->name, dpp_ctrl->queue_id); > + g_dtb_data.queueid = dpp_ctrl->queue_id; > + rte_free(dpp_ctrl); > + return 0; > + > +free_res: > + rte_free(dpp_ctrl); > + return ret; > +} > + > +static int > +zxdh_init_shared_data(void) > +{ > + const struct rte_memzone *mz; > + int ret = 0; > + > + rte_spinlock_lock(&zxdh_shared_data_lock); > + if (zxdh_shared_data == NULL) { > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { > + /* Allocate shared memory. 
*/ > + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, > + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); > + if (mz == NULL) { > + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); > + ret = -rte_errno; > + goto error; > + } > + zxdh_shared_data = mz->addr; > + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); > + rte_spinlock_init(&zxdh_shared_data->lock); > + } else { /* Lookup allocated shared memory. */ > + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); > + if (mz == NULL) { > + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); > + ret = -rte_errno; > + goto error; > + } > + zxdh_shared_data = mz->addr; > + } > + } > + > +error: > + rte_spinlock_unlock(&zxdh_shared_data_lock); > + return ret; > +} > + > +static int > +zxdh_init_once(void) > +{ > + int ret = 0; > + > + if (zxdh_init_shared_data()) > + return -1; > + > + struct zxdh_shared_data *sd = zxdh_shared_data; > + rte_spinlock_lock(&sd->lock); > + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { > + if (!sd->init_done) { > + ++sd->secondary_cnt; > + sd->init_done = true; > + } > + goto out; > + } > + /* RTE_PROC_PRIMARY */ > + if (!sd->init_done) > + sd->init_done = true; > + sd->dev_refcnt++; > + > +out: > + rte_spinlock_unlock(&sd->lock); > + return ret; > +} > + > +static int > +zxdh_np_init(struct rte_eth_dev *eth_dev) > +{ > + struct zxdh_hw *hw = eth_dev->data->dev_private; > + int ret = 0; > + > + if (hw->is_pf) { > + ret = zxdh_np_dtb_res_init(eth_dev); > + if (ret) { > + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); > + return ret; > + } > + } > + if (zxdh_shared_data != NULL) > + zxdh_shared_data->np_init_done = 1; > + > + PMD_DRV_LOG(DEBUG, "np init ok "); > + return 0; > +} > + > + > static int > zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > { > @@ -950,6 +1138,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > hw->is_pf = 1; > } > > + ret = zxdh_init_once(); > + if (ret != 0) > + goto err_zxdh_init; > + > ret = zxdh_init_device(eth_dev); > if (ret < 0) > goto err_zxdh_init; > @@ -977,6 +1169,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > if (ret != 0) > goto err_zxdh_init; > > + ret = zxdh_np_init(eth_dev); > + if (ret) > + goto err_zxdh_init; > + > ret = zxdh_configure_intr(eth_dev); > if (ret != 0) > goto err_zxdh_init; > diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h > index 7658cbb461..78b1edd5a4 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.h > +++ b/drivers/net/zxdh/zxdh_ethdev.h > @@ -34,6 +34,9 @@ > #define ZXDH_QUERES_SHARE_BASE (0x5000) > > #define ZXDH_MBUF_BURST_SZ 64 > +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 > +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) > +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) > > union zxdh_virport_num { > uint16_t vport; > @@ -89,6 +92,30 @@ struct zxdh_hw { > uint8_t has_rx_offload; > }; > > +struct zxdh_dtb_shared_data { > + int init_done; You mix int and int32 when these are really booleans. Maybe use bool type > + char name[32]; Better to not hardcode 32 directly. Maybe ZXDH_MAX_NAMELEN as a #define > + uint16_t queueid; > + uint16_t vport; > + uint32_t vector; > + const struct rte_memzone *dtb_table_conf_mz; > + const struct rte_memzone *dtb_table_dump_mz; > + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; > + struct rte_eth_dev *bind_device; > + uint32_t dev_refcnt; > +}; > + > +/* Shared data between primary and secondary processes. 
*/ > +struct zxdh_shared_data { > + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ > + int32_t init_done; /* Whether primary has done initialization. */ > + unsigned int secondary_cnt; /* Number of secondary processes init'd. */ > + > + int32_t np_init_done; > + uint32_t dev_refcnt; > + struct zxdh_dtb_shared_data *dtb_data; > +}; > + > uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); > > #endif /* ZXDH_ETHDEV_H */ > diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c > index 53cf972f86..a0a005b178 100644 > --- a/drivers/net/zxdh/zxdh_msg.c > +++ b/drivers/net/zxdh/zxdh_msg.c > @@ -1035,3 +1035,48 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) > rte_free(recved_msg); > return ZXDH_BAR_MSG_OK; > } > + > +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, > + struct zxdh_bar_offset_res *res) > +{ > + uint16_t check_token = 0; > + uint16_t sum_res = 0; > + int ret = 0; unnecessary initialization, first usage will set. > + > + if (!paras) > + return ZXDH_BAR_MSG_ERR_NULL; > + > + struct zxdh_offset_get_msg send_msg = { > + .pcie_id = paras->pcie_id, > + .type = paras->type, > + }; > + struct zxdh_pci_bar_msg in = {0}; > + > + in.payload_addr = &send_msg; > + in.payload_len = sizeof(send_msg); > + in.virt_addr = paras->virt_addr; > + in.src = ZXDH_MSG_CHAN_END_PF; > + in.dst = ZXDH_MSG_CHAN_END_RISC; > + in.module_id = ZXDH_BAR_MODULE_OFFSET_GET; > + in.src_pcieid = paras->pcie_id; Could use struct initializer here > + struct zxdh_bar_recv_msg recv_msg = {0}; > + struct zxdh_msg_recviver_mem result = { > + .recv_buffer = &recv_msg, > + .buffer_len = sizeof(recv_msg), > + }; > + ret = zxdh_bar_chan_sync_msg_send(&in, &result); > + if (ret != ZXDH_BAR_MSG_OK) > + return -ret; > + > + check_token = recv_msg.offset_reps.check; > + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); > + > + if (check_token != sum_res) { > + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); > + return ZXDH_BAR_MSG_ERR_REPLY; > + } > + res->bar_offset = recv_msg.offset_reps.offset; > + res->bar_length = recv_msg.offset_reps.length; > + return ZXDH_BAR_MSG_OK; > +} > diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h > index 530ee406b1..fbc79e8f9d 100644 > --- a/drivers/net/zxdh/zxdh_msg.h > +++ b/drivers/net/zxdh/zxdh_msg.h > @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { > ZXDH_TBL_TYPE_NON, > }; > > +enum pciebar_layout_type { > + ZXDH_URI_VQM = 0, > + ZXDH_URI_SPINLOCK = 1, > + ZXDH_URI_FWCAP = 2, > + ZXDH_URI_FWSHR = 3, > + ZXDH_URI_DRS_SEC = 4, > + ZXDH_URI_RSV = 5, > + ZXDH_URI_CTRLCH = 6, > + ZXDH_URI_1588 = 7, > + ZXDH_URI_QBV = 8, > + ZXDH_URI_MACPCS = 9, > + ZXDH_URI_RDMA = 10, > + ZXDH_URI_MNP = 11, > + ZXDH_URI_MSPM = 12, > + ZXDH_URI_MVQM = 13, > + ZXDH_URI_MDPI = 14, > + ZXDH_URI_NP = 15, > + ZXDH_URI_MAX, > +}; > + > struct zxdh_msix_para { > uint16_t pcie_id; > uint16_t vector_risc; > @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { > uint32_t length; > } __rte_packed; > > +struct zxdh_bar_offset_params { > + uint64_t virt_addr; /* Bar space control space virtual address */ > + uint16_t pcie_id; > + uint16_t type; /* Module types corresponding to PCIBAR planning */ > +}; > + > +struct zxdh_bar_offset_res { > + uint32_t bar_offset; > + uint32_t bar_length; > +}; > + > struct zxdh_bar_recv_msg { > uint8_t reps_ok; > uint16_t reps_len; > @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { > uint16_t dst_pcieid; /* used in PF-->VF */ > }; > > 
+struct zxdh_offset_get_msg { > + uint16_t pcie_id; > + uint16_t type; > +}; > + > typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, > void *reps_buffer, uint16_t *reps_len, void *dev); > > +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); > int zxdh_msg_chan_init(void); > int zxdh_bar_msg_chan_exit(void); > int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); > diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c > new file mode 100644 > index 0000000000..9c50039fb1 > --- /dev/null > +++ b/drivers/net/zxdh/zxdh_np.c > @@ -0,0 +1,347 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2024 ZTE Corporation > + */ > + > +#include > +#include > + > +#include > +#include > + > +#include "zxdh_np.h" > +#include "zxdh_logs.h" > + > +static uint64_t g_np_bar_offset; > +static ZXDH_DEV_MGR_T g_dev_mgr = {0}; > +static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; > +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; > +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; > + > +#define ZXDH_COMM_ASSERT(x) assert(x) > +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) > +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) > + > +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ > +do {\ > + if (NULL == (point)) {\ > + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ > + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ > + ZXDH_COMM_ASSERT(0);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ > +do {\ > + if ((rc) != 0) {\ > + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ > + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ > + ZXDH_COMM_ASSERT(0);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ > +do {\ > + if ((point) == NULL) {\ > + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ > + __FILE__, __LINE__, __func__);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ > +do {\ > + if ((rc) != 0) {\ > + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ > + " Fail!", __FILE__, __LINE__, __func__, becall);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_RC(rc, becall)\ > +do {\ > + if ((rc) != 0) {\ > + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ > + "Fail!", __FILE__, __LINE__, __func__, becall);\ > + ZXDH_COMM_ASSERT(0);\ > + } \ > +} while (0) Better to use RTE_ASSERT() or RTE_VERIFY() here rather than custom macros > + > +static uint32_t > +zxdh_np_dev_init(void) > +{ > + if (g_dev_mgr.is_init) { > + PMD_DRV_LOG(ERR, "Dev is already initialized."); > + return 0; > + } > + > + g_dev_mgr.device_num = 0; > + g_dev_mgr.is_init = 1; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, > + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, > + uint64_t riscv_addr, uint64_t dma_vir_addr, > + uint64_t dma_phy_addr) > +{ > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + if (!p_dev_mgr->is_init) { > + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", > + ZXDH_RC_DEV_MGR_NOT_INIT); > + return ZXDH_RC_DEV_MGR_NOT_INIT; > + } > + > + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { > + /* device is already exist. */ > + PMD_DRV_LOG(ERR, "Device is added again!!!"); > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + } else { > + /* device is new. 
*/ > + p_dev_info = (ZXDH_DEV_CFG_T *)malloc(sizeof(ZXDH_DEV_CFG_T)); > + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); > + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; > + p_dev_mgr->device_num++; > + } > + > + p_dev_info->device_id = dev_id; > + p_dev_info->dev_type = dev_type; > + p_dev_info->access_type = access_type; > + p_dev_info->pcie_addr = pcie_addr; > + p_dev_info->riscv_addr = riscv_addr; > + p_dev_info->dma_vir_addr = dma_vir_addr; > + p_dev_info->dma_phy_addr = dma_phy_addr; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) > +{ > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + > + if (p_dev_info == NULL) > + return ZXDH_DEV_TYPE_INVALID; > + p_dev_info->agent_flag = agent_flag; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_sdt_mgr_init(void) > +{ > + if (!g_sdt_mgr.is_init) { > + g_sdt_mgr.channel_num = 0; > + g_sdt_mgr.is_init = 1; > + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * > + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); > + } > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_sdt_mgr_create(uint32_t dev_id) > +{ > + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; > + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; > + > + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); > + > + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { > + p_sdt_tbl_temp = malloc(sizeof(ZXDH_SDT_SOFT_TABLE_T)); > + > + p_sdt_tbl_temp->device_id = dev_id; > + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); > + > + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; > + > + p_sdt_mgr->channel_num++; > + } else { > + PMD_DRV_LOG(ERR, "Error: %s for dev[%d]" > + "is called repeatedly!", __func__, dev_id); > + return -1; > + } > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) > +{ > + uint32_t rc = 0; > + uint32_t i = 0; > + > + zxdh_np_sdt_mgr_init(); > + > + for (i = 0; i < dev_num; i++) { > + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); > + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); > + } > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, > + uint32_t bitmap) > +{ > + uint32_t cls_id = 0; > + uint32_t mem_id = 0; > + uint32_t cls_use = 0; > + uint32_t instr_mem = 0; > + > + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { > + cls_use = (bitmap >> cls_id) & 0x1; > + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; > + } > + > + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { > + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; > + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ? 1 : 0); > + } > + > + return 0; > +} > + > +static ZXDH_DTB_MGR_T * > +zxdh_np_dtb_mgr_get(uint32_t dev_id) > +{ > + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) > + return NULL; > + else > + return p_dpp_dtb_mgr[dev_id]; > +} > + > +static uint32_t > +zxdh_np_dtb_soft_init(uint32_t dev_id) > +{ > + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; > + > + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); > + if (p_dtb_mgr == NULL) { > + p_dpp_dtb_mgr[dev_id] = (ZXDH_DTB_MGR_T *)malloc(sizeof(ZXDH_DTB_MGR_T)); malloc() returns void *, cast here is not needed. Why does DTB_MGR_T come from malloc when most of other data is using rte_malloc()? 
It will matter if you support multiprocess > + memset(p_dpp_dtb_mgr[dev_id], 0, sizeof(ZXDH_DTB_MGR_T)); > + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); > + if (p_dtb_mgr == NULL) > + return 1; > + } > + > + return 0; > +} > + > +static unsigned int > +zxdh_np_base_soft_init(unsigned int dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) > +{ > + unsigned int rt = 0; > + unsigned int access_type = 0; > + unsigned int dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; > + unsigned int agent_flag = 0; Why init variable here, and set it one line later? > + > + rt = zxdh_np_dev_init(); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); > + > + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) > + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; > + else > + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; > + > + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) > + agent_flag = ZXDH_DEV_AGENT_ENABLE; > + else > + agent_flag = ZXDH_DEV_AGENT_DISABLE; > + > + rt = zxdh_np_dev_add(dev_id, > + p_init_ctrl->device_type, > + access_type, > + p_init_ctrl->pcie_vir_baddr, > + p_init_ctrl->riscv_vir_baddr, > + p_init_ctrl->dma_vir_baddr, > + p_init_ctrl->dma_phy_baddr); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); > + > + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); > + > + dev_id_array[0] = dev_id; > + rt = zxdh_np_sdt_init(1, dev_id_array); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); > + > + rt = zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_ppu_parse_cls_bitmap"); > + > + rt = zxdh_np_dtb_soft_init(dev_id); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) > +{ > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + p_dev_info->vport = vport; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) > +{ > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + p_dev_info->agent_addr = agent_addr; > + > + return 0; > +} Always returns 0, could just be void and skip the ASSERTION later. > +static uint64_t > +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) > +{ > + uint64_t np_addr = 0; > + > + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) > + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; > + g_np_bar_offset = bar_offset; > + > + return np_addr; > +} > + > +int > +zxdh_np_host_init(uint32_t dev_id, > + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) > +{ > + unsigned int rc = 0; > + uint64_t agent_addr = 0; > + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; > + > + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); > + > + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); > + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, > + p_dev_init_ctrl->np_bar_offset); > + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; > + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); > + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); > + > + rc = zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); > + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_vport_set"); > + > + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; > + rc = zxdh_np_dev_agent_addr_set(dev_id, agent_addr); > + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_agent_addr_set"); > + return 0; > +} > diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h > new file mode 100644 > index 0000000000..573eafe796 > --- /dev/null > +++ b/drivers/net/zxdh/zxdh_np.h > @@ -0,0 +1,198 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2023 ZTE Corporation > + */ > + > +#ifndef ZXDH_NP_H > +#define ZXDH_NP_H > + > +#include > + > +#define ZXDH_PORT_NAME_MAX (32) > +#define ZXDH_DEV_CHANNEL_MAX (2) > +#define ZXDH_DEV_SDT_ID_MAX (256U) > +/*DTB*/ > +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) > +#define ZXDH_DTB_QUEUE_NUM_MAX (128) > + > +#define ZXDH_PPU_CLS_ALL_START (0x3F) > +#define ZXDH_PPU_CLUSTER_NUM (6) > +#define ZXDH_PPU_INSTR_MEM_NUM (3) > +#define ZXDH_SDT_CFG_LEN (2) > + > +#define ZXDH_RC_DEV_BASE (0x600) > +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) > +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) > +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) > +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) > +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) > +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) > +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) > +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) > +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) > + > +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 > +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) > +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) > +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) > + > +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) > +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) > +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) > +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) > +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) > +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) > +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) > +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) > +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) > +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) > +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) > + > +typedef enum zxdh_module_init_e { > + ZXDH_MODULE_INIT_NPPU = 0, > + ZXDH_MODULE_INIT_PPU, > + ZXDH_MODULE_INIT_SE, > + ZXDH_MODULE_INIT_ETM, > + ZXDH_MODULE_INIT_DLB, > + ZXDH_MODULE_INIT_TRPG, > + ZXDH_MODULE_INIT_TSN, > + ZXDH_MODULE_INIT_MAX > +} ZXDH_MODULE_INIT_E; > + > +typedef enum zxdh_dev_type_e { > + ZXDH_DEV_TYPE_SIM = 0, > + ZXDH_DEV_TYPE_VCS = 1, > + ZXDH_DEV_TYPE_CHIP = 2, > + ZXDH_DEV_TYPE_FPGA = 3, > + 
ZXDH_DEV_TYPE_PCIE_ACC = 4, > + ZXDH_DEV_TYPE_INVALID, > +} ZXDH_DEV_TYPE_E; > + > +typedef enum zxdh_dev_access_type_e { > + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, > + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, > + ZXDH_DEV_ACCESS_TYPE_INVALID, > +} ZXDH_DEV_ACCESS_TYPE_E; > + > +typedef enum zxdh_dev_agent_flag_e { > + ZXDH_DEV_AGENT_DISABLE = 0, > + ZXDH_DEV_AGENT_ENABLE = 1, > + ZXDH_DEV_AGENT_INVALID, > +} ZXDH_DEV_AGENT_FLAG_E; > + > +typedef struct zxdh_dtb_tab_up_user_addr_t { > + uint32_t user_flag; > + uint64_t phy_addr; > + uint64_t vir_addr; > +} ZXDH_DTB_TAB_UP_USER_ADDR_T; > + > +typedef struct zxdh_dtb_tab_up_info_t { > + uint64_t start_phy_addr; > + uint64_t start_vir_addr; > + uint32_t item_size; > + uint32_t wr_index; > + uint32_t rd_index; > + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; > + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; > +} ZXDH_DTB_TAB_UP_INFO_T; > + > +typedef struct zxdh_dtb_tab_down_info_t { > + uint64_t start_phy_addr; > + uint64_t start_vir_addr; > + uint32_t item_size; > + uint32_t wr_index; > + uint32_t rd_index; > +} ZXDH_DTB_TAB_DOWN_INFO_T; > + > +typedef struct zxdh_dtb_queue_info_t { > + uint32_t init_flag; > + uint32_t vport; > + uint32_t vector; > + ZXDH_DTB_TAB_UP_INFO_T tab_up; > + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; > +} ZXDH_DTB_QUEUE_INFO_T; > + > +typedef struct zxdh_dtb_mgr_t { > + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; > +} ZXDH_DTB_MGR_T; > + > +typedef struct zxdh_ppu_cls_bitmap_t { > + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; > + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; > +} ZXDH_PPU_CLS_BITMAP_T; > + > +typedef struct dpp_sdt_item_t { > + uint32_t valid; > + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; > +} ZXDH_SDT_ITEM_T; > + > +typedef struct dpp_sdt_soft_table_t { > + uint32_t device_id; > + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; > +} ZXDH_SDT_SOFT_TABLE_T; > + > +typedef struct zxdh_sys_init_ctrl_t { > + ZXDH_DEV_TYPE_E device_type; > + uint32_t flags; > + uint32_t sa_id; > + uint32_t case_num; > + uint32_t lif0_port_type; > + uint32_t lif1_port_type; > + uint64_t pcie_vir_baddr; > + uint64_t riscv_vir_baddr; > + uint64_t dma_vir_baddr; > + uint64_t dma_phy_baddr; > +} ZXDH_SYS_INIT_CTRL_T; > + > +typedef struct dpp_dev_cfg_t { > + uint32_t device_id; > + ZXDH_DEV_TYPE_E dev_type; > + uint32_t chip_ver; > + uint32_t access_type; > + uint32_t agent_flag; > + uint32_t vport; > + uint64_t pcie_addr; > + uint64_t riscv_addr; > + uint64_t dma_vir_addr; > + uint64_t dma_phy_addr; > + uint64_t agent_addr; > + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; > +} ZXDH_DEV_CFG_T; > + > +typedef struct zxdh_dev_mngr_t { > + uint32_t device_num; > + uint32_t is_init; > + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; > +} ZXDH_DEV_MGR_T; > + > +typedef struct zxdh_dtb_addr_info_t { > + uint32_t sdt_no; > + uint32_t size; > + uint32_t phy_addr; > + uint32_t vir_addr; > +} ZXDH_DTB_ADDR_INFO_T; > + > +typedef struct zxdh_dev_init_ctrl_t { > + uint32_t vport; > + char port_name[ZXDH_PORT_NAME_MAX]; > + uint32_t vector; > + uint32_t queue_id; > + uint32_t np_bar_offset; > + uint32_t np_bar_len; > + uint32_t pcie_vir_addr; > + uint32_t down_phy_addr; > + uint32_t down_vir_addr; > + uint32_t dump_phy_addr; > + uint32_t dump_vir_addr; > + uint32_t dump_sdt_num; > + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; > +} ZXDH_DEV_INIT_CTRL_T; > + > +typedef struct zxdh_sdt_mgr_t { > + uint32_t channel_num; > + uint32_t is_init; > + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; > +} ZXDH_SDT_MGR_T; > + > +int 
zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); > + > +#endif /* ZXDH_NP_H */ > diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c > index 06d3f92b20..250e67d560 100644 > --- a/drivers/net/zxdh/zxdh_pci.c > +++ b/drivers/net/zxdh/zxdh_pci.c > @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) > > desc_addr = vq->vq_ring_mem; > avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); > - if (vtpci_packed_queue(vq->hw)) { > + if (zxdh_pci_packed_queue(vq->hw)) { > used_addr = RTE_ALIGN_CEIL((avail_addr + > sizeof(struct zxdh_vring_packed_desc_event)), > ZXDH_PCI_VRING_ALIGN); > diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h > index ed6fd89742..d6487a574f 100644 > --- a/drivers/net/zxdh/zxdh_pci.h > +++ b/drivers/net/zxdh/zxdh_pci.h > @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { > }; > > static inline int32_t > -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) > +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) > { > return (hw->guest_features & (1ULL << bit)) != 0; > } > > static inline int32_t > -vtpci_packed_queue(struct zxdh_hw *hw) > +zxdh_pci_packed_queue(struct zxdh_hw *hw) > { > - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); > + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); > } > > struct zxdh_pci_ops { > diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c > index 462a88b23c..b4ef90ea36 100644 > --- a/drivers/net/zxdh/zxdh_queue.c > +++ b/drivers/net/zxdh/zxdh_queue.c > @@ -13,7 +13,7 @@ > #include "zxdh_msg.h" > > struct rte_mbuf * > -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) > +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) > { > struct rte_mbuf *cookie = NULL; > int32_t idx = 0; > diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h > index 1088bf08fc..1304d5e4ea 100644 > --- a/drivers/net/zxdh/zxdh_queue.h > +++ b/drivers/net/zxdh/zxdh_queue.h > @@ -206,11 +206,11 @@ struct zxdh_tx_region { > }; > > static inline size_t > -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) > +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) > { > size_t size; > > - if (vtpci_packed_queue(hw)) { > + if (zxdh_pci_packed_queue(hw)) { > size = num * sizeof(struct zxdh_vring_packed_desc); > size += sizeof(struct zxdh_vring_packed_desc_event); > size = RTE_ALIGN_CEIL(size, align); > @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) > } > > static inline void > -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, > +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, > unsigned long align, uint32_t num) > { > vr->num = num; > @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, > } > > static inline void > -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) > +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) > { > int32_t i = 0; > > @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) > } > > static inline void > -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) > +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) > { > int32_t i = 0; > > @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) > } > > static inline void > -virtqueue_disable_intr(struct zxdh_virtqueue *vq) > +zxdh_queue_disable_intr(struct 
zxdh_virtqueue *vq) > { > if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { > vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; > @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) > } > } > > -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); > +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); > int32_t zxdh_free_queues(struct rte_eth_dev *dev); > int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); >
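One more thing, to make the earlier rte_zmalloc()/strlcpy() comments concrete:
the dpp_ctrl setup in zxdh_np_dtb_res_init() could be written roughly like
this (untested sketch, names taken from the patch):

    ZXDH_DEV_INIT_CTRL_T *dpp_ctrl;
    size_t len = sizeof(*dpp_ctrl) + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256;

    /* rte_zmalloc() hands back zeroed memory, so the memset() goes away */
    dpp_ctrl = rte_zmalloc(NULL, len, 0);
    if (dpp_ctrl == NULL) {
            PMD_DRV_LOG(ERR, "dev %s cannot allocate memory for dpp_ctrl",
                        dev->device->name);
            return -ENOMEM;
    }

    dpp_ctrl->queue_id = 0xff;
    dpp_ctrl->vport = hw->vport.vport;
    dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC;
    /* strlcpy() bounds the copy and keeps the NUL even if the device
     * name is longer than port_name */
    strlcpy(dpp_ctrl->port_name, dev->device->name, sizeof(dpp_ctrl->port_name));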
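Likewise in zxdh_get_bar_offset(), the request could be built with a
designated initializer instead of memset-then-assign, e.g.:

    struct zxdh_pci_bar_msg in = {
            .payload_addr = &send_msg,
            .payload_len = sizeof(send_msg),
            .virt_addr = paras->virt_addr,
            .src = ZXDH_MSG_CHAN_END_PF,
            .dst = ZXDH_MSG_CHAN_END_RISC,
            .module_id = ZXDH_BAR_MODULE_OFFSET_GET,
            .src_pcieid = paras->pcie_id,
    };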
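And for the ZXDH_COMM_CHECK_* macros in zxdh_np.c, RTE_VERIFY() from
rte_debug.h already gives you an abort with file/line information, so most of
the custom macros could shrink to something like:

    RTE_VERIFY(p_dev_info != NULL);
    RTE_VERIFY(rc == 0);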