From: nipun.gupta@nxp.com
To: dev@dpdk.org
Cc: thomas@monjalon.net, ferruh.yigit@intel.com, hemant.agrawal@nxp.com,
	stephen@networkplumber.org, Jun Yang
Subject: [PATCH v3 06/15] net/dpaa2: support multiple txqs en-queue for ordered
Date: Mon, 3 Jan 2022 15:31:20 +0530
Message-Id: <20220103100129.23965-7-nipun.gupta@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20220103100129.23965-1-nipun.gupta@nxp.com>
References: <20211206121824.3493-1-nipun.gupta@nxp.com>
	<20220103100129.23965-1-nipun.gupta@nxp.com>

From: Jun Yang

Support Tx enqueue in ordered queue mode, where the queue ID
for each event may be different.
Signed-off-by: Jun Yang
---
 drivers/event/dpaa2/dpaa2_eventdev.c |  12 ++-
 drivers/net/dpaa2/dpaa2_ethdev.h     |   4 +
 drivers/net/dpaa2/dpaa2_rxtx.c       | 142 +++++++++++++++++++++++++++
 drivers/net/dpaa2/version.map        |   1 +
 4 files changed, 155 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 4d94c315d2..ffc7b8b073 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019-2021 NXP
  */
 
 #include
@@ -1003,16 +1003,20 @@ dpaa2_eventdev_txa_enqueue(void *port,
 			   struct rte_event ev[],
 			   uint16_t nb_events)
 {
-	struct rte_mbuf *m = (struct rte_mbuf *)ev[0].mbuf;
+	void *txq[DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH];
+	struct rte_mbuf *m[DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH];
 	uint8_t qid, i;
 
 	RTE_SET_USED(port);
 
 	for (i = 0; i < nb_events; i++) {
-		qid = rte_event_eth_tx_adapter_txq_get(m);
-		rte_eth_tx_burst(m->port, qid, &m, 1);
+		m[i] = (struct rte_mbuf *)ev[i].mbuf;
+		qid = rte_event_eth_tx_adapter_txq_get(m[i]);
+		txq[i] = rte_eth_devices[m[i]->port].data->tx_queues[qid];
 	}
 
+	dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
+
 	return nb_events;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index c21571e63d..e001a7e49d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -241,6 +241,10 @@ void dpaa2_dev_process_ordered_event(struct qbman_swp *swp,
 uint16_t dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
 uint16_t dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs,
 			      uint16_t nb_pkts);
+__rte_internal
+uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
+		struct rte_mbuf **bufs, uint16_t nb_pkts);
+
 uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
 void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
 void dpaa2_flow_clean(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index ee3ed1b152..1096b1cf1d 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1468,6 +1468,148 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q,
 	*dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN;
 }
 
+uint16_t
+dpaa2_dev_tx_multi_txq_ordered(void **queue,
+		struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+	/* Function to transmit the frames to multiple queues respectively.*/
+	uint32_t loop, retry_count;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct rte_mempool *mp;
+	struct qbman_eq_desc eqdesc[MAX_TX_RING_SLOTS];
+	struct dpaa2_queue *dpaa2_q[MAX_TX_RING_SLOTS];
+	struct qbman_swp *swp;
+	uint16_t bpid;
+	struct rte_mbuf *mi;
+	struct rte_eth_dev_data *eth_data;
+	struct dpaa2_dev_priv *priv;
+	struct dpaa2_queue *order_sendq;
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	for (loop = 0; loop < nb_pkts; loop++) {
+		dpaa2_q[loop] = (struct dpaa2_queue *)queue[loop];
+		eth_data = dpaa2_q[loop]->eth_data;
+		priv = eth_data->dev_private;
+		qbman_eq_desc_clear(&eqdesc[loop]);
+		if (*dpaa2_seqn(*bufs) && priv->en_ordered) {
+			order_sendq = (struct dpaa2_queue *)priv->tx_vq[0];
+			dpaa2_set_enqueue_descriptor(order_sendq,
+						     (*bufs),
+						     &eqdesc[loop]);
+		} else {
+			qbman_eq_desc_set_no_orp(&eqdesc[loop],
+						 DPAA2_EQ_RESP_ERR_FQ);
+			qbman_eq_desc_set_fq(&eqdesc[loop],
+					     dpaa2_q[loop]->fqid);
+		}
+
+		retry_count = 0;
+		while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
+			retry_count++;
+			/* Retry for some time before giving up */
+			if (retry_count > CONG_RETRY_COUNT)
+				goto send_frames;
+		}
+
+		if (likely(RTE_MBUF_DIRECT(*bufs))) {
+			mp = (*bufs)->pool;
+			/* Check the basic scenario and set
+			 * the FD appropriately here itself.
+			 */
+			if (likely(mp && mp->ops_index ==
+				priv->bp_list->dpaa2_ops_index &&
+				(*bufs)->nb_segs == 1 &&
+				rte_mbuf_refcnt_read((*bufs)) == 1)) {
+				if (unlikely((*bufs)->ol_flags
+					& RTE_MBUF_F_TX_VLAN)) {
+					ret = rte_vlan_insert(bufs);
+					if (ret)
+						goto send_frames;
+				}
+				DPAA2_MBUF_TO_CONTIG_FD((*bufs),
+					&fd_arr[loop],
+					mempool_to_bpid(mp));
+				bufs++;
+				dpaa2_q[loop]++;
+				continue;
+			}
+		} else {
+			mi = rte_mbuf_from_indirect(*bufs);
+			mp = mi->pool;
+		}
+		/* Not a hw_pkt pool allocated frame */
+		if (unlikely(!mp || !priv->bp_list)) {
+			DPAA2_PMD_ERR("Err: No buffer pool attached");
+			goto send_frames;
+		}
+
+		if (mp->ops_index != priv->bp_list->dpaa2_ops_index) {
+			DPAA2_PMD_WARN("Non DPAA2 buffer pool");
+			/* alloc should be from the default buffer pool
+			 * attached to this interface
+			 */
+			bpid = priv->bp_list->buf_pool.bpid;
+
+			if (unlikely((*bufs)->nb_segs > 1)) {
+				DPAA2_PMD_ERR(
+					"S/G not supp for non hw offload buffer");
+				goto send_frames;
+			}
+			if (eth_copy_mbuf_to_fd(*bufs,
+						&fd_arr[loop], bpid)) {
+				goto send_frames;
+			}
+			/* free the original packet */
+			rte_pktmbuf_free(*bufs);
+		} else {
+			bpid = mempool_to_bpid(mp);
+			if (unlikely((*bufs)->nb_segs > 1)) {
+				if (eth_mbuf_to_sg_fd(*bufs,
+						      &fd_arr[loop],
+						      mp,
+						      bpid))
+					goto send_frames;
+			} else {
+				eth_mbuf_to_fd(*bufs,
+					       &fd_arr[loop], bpid);
+			}
+		}
+
+		bufs++;
+		dpaa2_q[loop]++;
+	}
+
+send_frames:
+	frames_to_send = loop;
+	loop = 0;
+	while (loop < frames_to_send) {
+		ret = qbman_swp_enqueue_multiple_desc(swp, &eqdesc[loop],
+						      &fd_arr[loop],
+						      frames_to_send - loop);
+		if (likely(ret > 0)) {
+			loop += ret;
+		} else {
+			retry_count++;
+			if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
+				break;
+		}
+	}
+
+	return loop;
+}
+
 /* Callback to handle sending ordered packets through WRIOP based interface */
 uint16_t
 dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 2fe61f3442..cc82b8579d 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -21,6 +21,7 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	dpaa2_dev_tx_multi_txq_ordered;
 	dpaa2_eth_eventq_attach;
 	dpaa2_eth_eventq_detach;
-- 
2.17.1
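
For readers outside the dpaa2 event/net drivers, the sketch below (illustration only, not part of the applied diff) shows the adapter-side contract of the new burst API: each event's mbuf is mapped to the Tx queue selected earlier via rte_event_eth_tx_adapter_txq_get(), and the per-packet queue handles plus mbufs are passed to dpaa2_dev_tx_multi_txq_ordered() in a single call, mirroring dpaa2_eventdev_txa_enqueue() above. MAX_BURST and txa_enqueue_sketch() are illustrative names, and access to rte_eth_devices assumes the driver-internal ethdev header, as used by the eventdev driver.

/*
 * Minimal sketch (assumptions noted above), not a definitive implementation.
 */
#include <rte_mbuf.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>
#include <ethdev_driver.h>	/* driver-internal: exposes rte_eth_devices */

#define MAX_BURST 32	/* stands in for DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH */

/* Internal symbol exported by the dpaa2 net PMD in this patch. */
uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
		struct rte_mbuf **bufs, uint16_t nb_pkts);

uint16_t
txa_enqueue_sketch(struct rte_event ev[], uint16_t nb_events)
{
	void *txq[MAX_BURST];
	struct rte_mbuf *m[MAX_BURST];
	uint16_t qid, i;

	for (i = 0; i < nb_events; i++) {
		m[i] = ev[i].mbuf;
		/* Tx queue id stored per packet by the application ... */
		qid = rte_event_eth_tx_adapter_txq_get(m[i]);
		/* ... resolved to the PMD's queue handle for that port. */
		txq[i] = rte_eth_devices[m[i]->port].data->tx_queues[qid];
	}

	/* One call sends the whole burst; each frame goes out on its own
	 * Tx queue, and frames carrying a DPAA2 sequence number keep their
	 * order-restoration context from the ordered Rx queue.
	 */
	return dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
}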