From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nalla Pradeep
To: Nalla Pradeep, Radha Mohan Chintakuntla, Veerasenareddy Burru
Date: Mon, 18 Jan 2021 09:35:57 +0000
Message-ID: <20210118093602.5449-6-pnalla@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210118093602.5449-1-pnalla@marvell.com>
References: <20210118093602.5449-1-pnalla@marvell.com>
Subject: [dpdk-dev] [PATCH v2 06/11] net/octeontx_ep: add Rx queue setup and release

Receive queue setup allocates memory for the queue, initializes the data
structure that represents it, and fills it with as many receive buffers as
there are Rx descriptors. Receive queues are referred to as DROQs. Hardware
fills the receive buffers in the queue with incoming packets; it can place
the packet metadata either in the same receive buffer (BUFPTR_ONLY_MODE) or
in a separate buffer (INFOPTR_MODE). Only BUFPTR_ONLY_MODE is supported for
now. Receive queue release frees the receive buffers along with the queue
itself.
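For context, a minimal sketch (not part of this patch) of how an application
could exercise the new Rx queue setup through the standard ethdev API. The
port id, pool sizing and descriptor count below are illustrative assumptions,
chosen to satisfy the driver's checks (the descriptor count must be a power
of two and at least 8 * SDP_GBL_WMARK):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative only: assumes port_id refers to an octeontx_ep VF. */
static int
example_rx_queue_setup(uint16_t port_id, unsigned int socket_id)
{
	struct rte_eth_conf port_conf = { 0 };
	struct rte_mempool *mp;
	int ret;

	/* Receive buffers come from this pool; in BUFPTR_ONLY_MODE the same
	 * buffer carries both the packet data and its metadata.
	 */
	mp = rte_pktmbuf_pool_create("rxq0_pool", 4096, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
	if (mp == NULL)
		return -1;

	/* One Rx and one Tx queue. */
	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	/* 2048 descriptors: a power of two and >= 8 * SDP_GBL_WMARK (0x100),
	 * which otx_ep_rx_queue_setup() enforces.
	 */
	return rte_eth_rx_queue_setup(port_id, 0, 2048, socket_id, NULL, mp);
}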
Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/meson.build | 1 + drivers/net/octeontx_ep/otx_ep_common.h | 160 ++++++++++++++++- drivers/net/octeontx_ep/otx_ep_ethdev.c | 132 ++++++++++++++ drivers/net/octeontx_ep/otx_ep_rxtx.c | 222 ++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.h | 6 + 5 files changed, 516 insertions(+), 5 deletions(-) create mode 100644 drivers/net/octeontx_ep/otx_ep_rxtx.c diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build index 7c43c077cf..0f23c0bfe2 100644 --- a/drivers/net/octeontx_ep/meson.build +++ b/drivers/net/octeontx_ep/meson.build @@ -6,6 +6,7 @@ sources = files( 'otx_ep_ethdev.c', 'otx_ep_vf.c', 'otx2_ep_vf.c', + 'otx_ep_rxtx.c', ) extra_flags = [] diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index f3a08cdfa4..291bd3a7c6 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -16,6 +16,10 @@ #define OTX_EP_OQ_INFOPTR_MODE (0) #define OTX_EP_OQ_REFIL_THRESHOLD (16) +#define OTX_EP_PCI_RING_ALIGN 65536 +#define SDP_PKIND 40 +#define SDP_OTX2_PKIND 57 +#define OTX_EP_MAX_IOQS_PER_VF 8 #define otx_ep_printf(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ fmt, ##args) @@ -52,6 +56,65 @@ struct otx_ep_iq_config { uint32_t pending_list_size; }; +/** Descriptor format. + * The descriptor ring is made of descriptors which have 2 64-bit values: + * -# Physical (bus) address of the data buffer. + * -# Physical (bus) address of a otx_ep_droq_info structure. + * The device DMA's incoming packets and its information at the address + * given by these descriptor fields. + */ +struct otx_ep_droq_desc { + /* The buffer pointer */ + uint64_t buffer_ptr; + + /* The Info pointer */ + uint64_t info_ptr; +}; +#define OTX_EP_DROQ_DESC_SIZE (sizeof(struct otx_ep_droq_desc)) + +/* Receive Header */ +union otx_ep_rh { + uint64_t rh64; +}; +#define OTX_EP_RH_SIZE (sizeof(union otx_ep_rh)) + +/** Information about packet DMA'ed by OCTEON TX2. + * The format of the information available at Info Pointer after OCTEON TX2 + * has posted a packet. Not all descriptors have valid information. Only + * the Info field of the first descriptor for a packet has information + * about the packet. + */ +struct otx_ep_droq_info { + /* The Length of the packet. */ + uint64_t length; + + /* The Output Receive Header. */ + union otx_ep_rh rh; +}; +#define OTX_EP_DROQ_INFO_SIZE (sizeof(struct otx_ep_droq_info)) + + +/* DROQ statistics. Each output queue has four stats fields. */ +struct otx_ep_droq_stats { + /* Number of packets received in this queue. */ + uint64_t pkts_received; + + /* Bytes received by this queue. */ + uint64_t bytes_received; + + /* Num of failures of rte_pktmbuf_alloc() */ + uint64_t rx_alloc_failure; + + /* Rx error */ + uint64_t rx_err; + + /* packets with data got ready after interrupt arrived */ + uint64_t pkts_delayed_data; + + /* packets dropped due to zero length */ + uint64_t dropped_zlp; +}; + /* Structure to define the configuration attributes for each Output queue. */ struct otx_ep_oq_config { /* Max number of OQs available */ @@ -67,6 +130,74 @@ struct otx_ep_oq_config { uint32_t refill_threshold; }; +/* The Descriptor Ring Output Queue(DROQ) structure. */ +struct otx_ep_droq { + struct otx_ep_device *otx_ep_dev; + /* The 8B aligned descriptor ring starts at this address. 
*/ + struct otx_ep_droq_desc *desc_ring; + + uint32_t q_no; + uint64_t last_pkt_count; + + struct rte_mempool *mpool; + + /* Driver should read the next packet at this index */ + uint32_t read_idx; + + /* OCTEON TX2 will write the next packet at this index */ + uint32_t write_idx; + + /* At this index, the driver will refill the descriptor's buffer */ + uint32_t refill_idx; + + /* Packets pending to be processed */ + uint64_t pkts_pending; + + /* Number of descriptors in this ring. */ + uint32_t nb_desc; + + /* The number of descriptors pending to refill. */ + uint32_t refill_count; + + uint32_t refill_threshold; + + /* The 8B aligned info ptrs begin from this address. */ + struct otx_ep_droq_info *info_list; + + /* receive buffer list contains mbuf ptr list */ + struct rte_mbuf **recv_buf_list; + + /* The size of each buffer pointed by the buffer pointer. */ + uint32_t buffer_size; + + /* Statistics for this DROQ. */ + struct otx_ep_droq_stats stats; + + /* DMA mapped address of the DROQ descriptor ring. */ + size_t desc_ring_dma; + + /* Info_ptr list is allocated at this virtual address. */ + size_t info_base_addr; + + /* DMA mapped address of the info list */ + size_t info_list_dma; + + /* Allocated size of info list. */ + uint32_t info_alloc_size; + + /* Memory zone **/ + const struct rte_memzone *desc_ring_mz; + const struct rte_memzone *info_mz; +}; +#define OTX_EP_DROQ_SIZE (sizeof(struct otx_ep_droq)) + +/* IQ/OQ mask */ +struct otx_ep_io_enable { + uint64_t iq; + uint64_t oq; + uint64_t iq64B; +}; + /* Structure to define the configuration. */ struct otx_ep_config { /* Input Queue attributes. */ @@ -85,6 +216,15 @@ struct otx_ep_config { uint32_t oqdef_buf_size; }; +/* Required functions for each VF device */ +struct otx_ep_fn_list { + void (*setup_oq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no); + + int (*setup_device_regs)(struct otx_ep_device *otx_ep); + + void (*disable_io_queues)(struct otx_ep_device *otx_ep); +}; + /* SRIOV information */ struct otx_ep_sriov_info { /* Number of rings assigned to VF */ @@ -94,11 +234,6 @@ struct otx_ep_sriov_info { uint32_t num_vfs; }; -/* Required functions for each VF device */ -struct otx_ep_fn_list { - int (*setup_device_regs)(struct otx_ep_device *otx_ep); -}; - /* OTX_EP EP VF device data structure */ struct otx_ep_device { /* PCI device pointer */ @@ -106,6 +241,8 @@ struct otx_ep_device { uint16_t chip_id; + uint32_t pkind; + struct rte_eth_dev *eth_dev; int port_id; @@ -119,6 +256,15 @@ struct otx_ep_device { uint32_t max_rx_queues; + /* Num OQs */ + uint32_t nb_rx_queues; + + /* The DROQ output queues */ + struct otx_ep_droq *droq[OTX_EP_MAX_IOQS_PER_VF]; + + /* IOQ mask */ + struct otx_ep_io_enable io_qmask; + /* SR-IOV info */ struct otx_ep_sriov_info sriov_info; @@ -132,6 +278,10 @@ struct otx_ep_device { uint64_t tx_offloads; }; +int otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, + int desc_size, struct rte_mempool *mpool, + unsigned int socket_id); +int otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no); #define OTX_EP_MAX_PKT_SZ 64000U #define OTX_EP_MAX_MAC_ADDRS 1 diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 3ba391dc02..bc85e15134 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -72,11 +72,13 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) case PCI_DEVID_OCTEONTX_EP_VF: otx_epvf->chip_id = dev_id; ret = otx_ep_vf_setup_device(otx_epvf); + 
otx_epvf->fn_list.disable_io_queues(otx_epvf); break; case PCI_DEVID_OCTEONTX2_EP_NET_VF: case PCI_DEVID_CN98XX_EP_NET_VF: otx_epvf->chip_id = dev_id; ret = otx2_ep_vf_setup_device(otx_epvf); + otx_epvf->fn_list.disable_io_queues(otx_epvf); break; default: otx_ep_err("Unsupported device\n"); @@ -93,6 +95,8 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) static int otx_epdev_init(struct otx_ep_device *otx_epvf) { + uint32_t ethdev_queues; + if (otx_ep_chip_specific_setup(otx_epvf)) { otx_ep_err("Chip specific setup failed\n"); goto setup_fail; @@ -103,6 +107,10 @@ otx_epdev_init(struct otx_ep_device *otx_epvf) goto setup_fail; } + ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf); + otx_epvf->max_rx_queues = ethdev_queues; + otx_epvf->max_tx_queues = ethdev_queues; + otx_ep_info("OTX_EP Device is Ready\n"); return 0; @@ -140,12 +148,125 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev) return 0; } +/** + * Setup our receive queue/ringbuffer. This is the + * queue the Octeon uses to send us packets and + * responses. We are given a memory pool for our + * packet buffers that are used to populate the receive + * queue. + * + * @param eth_dev + * Pointer to the structure rte_eth_dev + * @param q_no + * Queue number + * @param num_rx_descs + * Number of entries in the queue + * @param socket_id + * Where to allocate memory + * @param rx_conf + * Pointer to the struction rte_eth_rxconf + * @param mp + * Pointer to the packet pool + * + * @return + * - On success, return 0 + * - On failure, return -1 + */ +static int +otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no, + uint16_t num_rx_descs, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf __rte_unused, + struct rte_mempool *mp) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + struct rte_pktmbuf_pool_private *mbp_priv; + uint16_t buf_size; + + if (q_no >= otx_epvf->max_rx_queues) { + otx_ep_err("Invalid rx queue number %u\n", q_no); + return -EINVAL; + } + + if (num_rx_descs & (num_rx_descs - 1)) { + otx_ep_err("Invalid rx desc number should be pow 2 %u\n", + num_rx_descs); + return -EINVAL; + } + if (num_rx_descs < (SDP_GBL_WMARK * 8)) { + otx_ep_err("Invalid rx desc number should at least be greater than 8xwmark %u\n", + num_rx_descs); + return -EINVAL; + } + + otx_ep_dbg("setting up rx queue %u\n", q_no); + + mbp_priv = rte_mempool_get_priv(mp); + buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM; + + if (otx_ep_setup_oqs(otx_epvf, q_no, num_rx_descs, buf_size, mp, + socket_id)) { + otx_ep_err("droq allocation failed\n"); + return -1; + } + + eth_dev->data->rx_queues[q_no] = otx_epvf->droq[q_no]; + + return 0; +} + +/** + * Release the receive queue/ringbuffer. Called by + * the upper layers. 
+ * + * @param rxq + * Opaque pointer to the receive queue to release + * + * @return + * - nothing + */ +static void +otx_ep_rx_queue_release(void *rxq) +{ + struct otx_ep_droq *rq = (struct otx_ep_droq *)rxq; + struct otx_ep_device *otx_epvf = rq->otx_ep_dev; + int q_id = rq->q_no; + + if (otx_ep_delete_oqs(otx_epvf, q_id)) + otx_ep_err("Failed to delete OQ:%d\n", q_id); +} + /* Define our ethernet definitions */ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .dev_configure = otx_ep_dev_configure, + .rx_queue_setup = otx_ep_rx_queue_setup, + .rx_queue_release = otx_ep_rx_queue_release, .dev_infos_get = otx_ep_dev_info_get, }; + + +static int +otx_epdev_exit(struct rte_eth_dev *eth_dev) +{ + struct otx_ep_device *otx_epvf; + uint32_t num_queues, q; + + otx_ep_info("%s:\n", __func__); + + otx_epvf = OTX_EP_DEV(eth_dev); + + num_queues = otx_epvf->nb_rx_queues; + for (q = 0; q < num_queues; q++) { + if (otx_ep_delete_oqs(otx_epvf, q)) { + otx_ep_err("Failed to delete OQ:%d\n", q); + return -ENOMEM; + } + } + otx_ep_info("Num OQs:%d freed\n", otx_epvf->nb_rx_queues); + + return 0; +} + static int otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) { @@ -153,11 +274,15 @@ otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + otx_epdev_exit(eth_dev); + otx_epvf->port_configured = 0; if (eth_dev->data->mac_addrs != NULL) rte_free(eth_dev->data->mac_addrs); + eth_dev->dev_ops = NULL; + return 0; } @@ -187,6 +312,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev) eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0); if (eth_dev->data->mac_addrs == NULL) { otx_ep_err("MAC addresses memory allocation failed\n"); + eth_dev->dev_ops = NULL; return -ENOMEM; } rte_eth_random_addr(vf_mac_addr); @@ -195,6 +321,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev) otx_epvf->pdev = pdev; otx_epdev_init(otx_epvf); + if (pdev->id.device_id == PCI_DEVID_OCTEONTX2_EP_NET_VF) + otx_epvf->pkind = SDP_OTX2_PKIND; + else + otx_epvf->pkind = SDP_PKIND; + otx_ep_info("using pkind %d\n", otx_epvf->pkind); + otx_epvf->port_configured = 0; return 0; diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c new file mode 100644 index 0000000000..bc3637350e --- /dev/null +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell. 
+ */ + +#include + +#include +#include +#include +#include + +#include "otx_ep_common.h" +#include "otx_ep_vf.h" +#include "otx2_ep_vf.h" +#include "otx_ep_rxtx.h" + +static void +otx_ep_dmazone_free(const struct rte_memzone *mz) +{ + const struct rte_memzone *mz_tmp; + int ret = 0; + + if (mz == NULL) { + otx_ep_err("Memzone %s : NULL\n", mz->name); + return; + } + + mz_tmp = rte_memzone_lookup(mz->name); + if (mz_tmp == NULL) { + otx_ep_err("Memzone %s Not Found\n", mz->name); + return; + } + + ret = rte_memzone_free(mz); + if (ret) + otx_ep_err("Memzone free failed : ret = %d\n", ret); +} + +static void +otx_ep_droq_reset_indices(struct otx_ep_droq *droq) +{ + droq->read_idx = 0; + droq->write_idx = 0; + droq->refill_idx = 0; + droq->refill_count = 0; + droq->last_pkt_count = 0; + droq->pkts_pending = 0; +} + +static void +otx_ep_droq_destroy_ring_buffers(struct otx_ep_droq *droq) +{ + uint32_t idx; + + for (idx = 0; idx < droq->nb_desc; idx++) { + if (droq->recv_buf_list[idx]) { + rte_pktmbuf_free(droq->recv_buf_list[idx]); + droq->recv_buf_list[idx] = NULL; + } + } + + otx_ep_droq_reset_indices(droq); +} + +/* Free OQs resources */ +int +otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no) +{ + struct otx_ep_droq *droq; + + droq = otx_ep->droq[oq_no]; + if (droq == NULL) { + otx_ep_err("Invalid droq[%d]\n", oq_no); + return -ENOMEM; + } + + otx_ep_droq_destroy_ring_buffers(droq); + rte_free(droq->recv_buf_list); + droq->recv_buf_list = NULL; + + if (droq->desc_ring_mz) { + otx_ep_dmazone_free(droq->desc_ring_mz); + droq->desc_ring_mz = NULL; + } + + memset(droq, 0, OTX_EP_DROQ_SIZE); + + rte_free(otx_ep->droq[oq_no]); + otx_ep->droq[oq_no] = NULL; + + otx_ep->nb_rx_queues--; + + otx_ep_info("OQ[%d] is deleted\n", oq_no); + return 0; +} + +static int +otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq) +{ + struct otx_ep_droq_desc *desc_ring = droq->desc_ring; + struct otx_ep_droq_info *info; + struct rte_mbuf *buf; + uint32_t idx; + + for (idx = 0; idx < droq->nb_desc; idx++) { + buf = rte_pktmbuf_alloc(droq->mpool); + if (buf == NULL) { + otx_ep_err("OQ buffer alloc failed\n"); + droq->stats.rx_alloc_failure++; + /* otx_ep_droq_destroy_ring_buffers(droq);*/ + return -ENOMEM; + } + + droq->recv_buf_list[idx] = buf; + info = rte_pktmbuf_mtod(buf, struct otx_ep_droq_info *); + memset(info, 0, sizeof(*info)); + desc_ring[idx].buffer_ptr = rte_mbuf_data_iova_default(buf); + } + + otx_ep_droq_reset_indices(droq); + + return 0; +} + +/* OQ initialization */ +static int +otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no, + uint32_t num_descs, uint32_t desc_size, + struct rte_mempool *mpool, unsigned int socket_id) +{ + const struct otx_ep_config *conf = otx_ep->conf; + uint32_t c_refill_threshold; + struct otx_ep_droq *droq; + uint32_t desc_ring_size; + + otx_ep_info("OQ[%d] Init start\n", q_no); + + droq = otx_ep->droq[q_no]; + droq->otx_ep_dev = otx_ep; + droq->q_no = q_no; + droq->mpool = mpool; + + droq->nb_desc = num_descs; + droq->buffer_size = desc_size; + c_refill_threshold = RTE_MAX(conf->oq.refill_threshold, + droq->nb_desc / 2); + + /* OQ desc_ring set up */ + desc_ring_size = droq->nb_desc * OTX_EP_DROQ_DESC_SIZE; + droq->desc_ring_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev, "droq", + q_no, desc_ring_size, + OTX_EP_PCI_RING_ALIGN, + socket_id); + + if (droq->desc_ring_mz == NULL) { + otx_ep_err("OQ:%d desc_ring allocation failed\n", q_no); + goto init_droq_fail; + } + + droq->desc_ring_dma = droq->desc_ring_mz->iova; + droq->desc_ring = (struct 
otx_ep_droq_desc *)droq->desc_ring_mz->addr; + + otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx\n", + q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma); + otx_ep_dbg("OQ[%d]: num_desc: %d\n", q_no, droq->nb_desc); + + /* OQ buf_list set up */ + droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list", + (droq->nb_desc * sizeof(struct rte_mbuf *)), + RTE_CACHE_LINE_SIZE, socket_id); + if (droq->recv_buf_list == NULL) { + otx_ep_err("OQ recv_buf_list alloc failed\n"); + goto init_droq_fail; + } + + if (otx_ep_droq_setup_ring_buffers(droq)) + goto init_droq_fail; + + droq->refill_threshold = c_refill_threshold; + + /* Set up OQ registers */ + otx_ep->fn_list.setup_oq_regs(otx_ep, q_no); + + otx_ep->io_qmask.oq |= (1ull << q_no); + + return 0; + +init_droq_fail: + return -ENOMEM; +} + +/* OQ configuration and setup */ +int +otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, + int desc_size, struct rte_mempool *mpool, unsigned int socket_id) +{ + struct otx_ep_droq *droq; + + /* Allocate new droq. */ + droq = (struct otx_ep_droq *)rte_zmalloc("otx_ep_OQ", + sizeof(*droq), RTE_CACHE_LINE_SIZE); + if (droq == NULL) { + otx_ep_err("Droq[%d] Creation Failed\n", oq_no); + return -ENOMEM; + } + otx_ep->droq[oq_no] = droq; + + if (otx_ep_init_droq(otx_ep, oq_no, num_descs, desc_size, mpool, + socket_id)) { + otx_ep_err("Droq[%d] Initialization failed\n", oq_no); + goto delete_OQ; + } + otx_ep_info("OQ[%d] is created.\n", oq_no); + + otx_ep->nb_rx_queues++; + + return 0; + +delete_OQ: + otx_ep_delete_oqs(otx_ep, oq_no); + return -ENOMEM; +} diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h index 5366297ee8..42bb04f14f 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.h +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -37,6 +37,12 @@ #define PCI_DEVID_OCTEONTX_EP_VF 0xa303 +/* this is a static value set by SLI PF driver in octeon + * No handshake is available + * Change this if changing the value in SLI PF driver + */ +#define SDP_GBL_WMARK 0x100 + int otx_ep_vf_setup_device(struct otx_ep_device *otx_ep); #endif /*_OTX_EP_VF_H_ */ -- 2.17.1
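A note on the refill bookkeeping added to struct otx_ep_droq (refill_idx,
refill_count, refill_threshold): the code that consumes these fields arrives
with the Rx burst support later in this series. Purely to illustrate how such
fields are typically used, here is a hypothetical sketch (not the driver's
implementation) of a refill loop that re-arms descriptors whose mbufs have
been handed to the application:

#include <rte_mbuf.h>

#include "otx_ep_common.h"

/* Hypothetical helper, shown only to illustrate the refill fields. */
static uint32_t
droq_refill_sketch(struct otx_ep_droq *droq)
{
	struct otx_ep_droq_desc *desc_ring = droq->desc_ring;
	uint32_t refilled = 0;

	while (droq->refill_count) {
		struct rte_mbuf *buf = rte_pktmbuf_alloc(droq->mpool);

		if (buf == NULL) {
			droq->stats.rx_alloc_failure++;
			break;
		}
		droq->recv_buf_list[droq->refill_idx] = buf;
		desc_ring[droq->refill_idx].buffer_ptr =
			rte_mbuf_data_iova_default(buf);

		droq->refill_idx = (droq->refill_idx + 1) % droq->nb_desc;
		droq->refill_count--;
		refilled++;
	}
	/* The caller would then credit 'refilled' descriptors back to the
	 * hardware.
	 */
	return refilled;
}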