From: Pavan Nikhilesh
To: Pavan Nikhilesh, Shijith Thotton
Subject: [dpdk-dev] [PATCH 36/36] event/cnxk: add Tx adapter fastpath ops
Date: Sat, 6 Mar 2021 21:59:41 +0530
Message-ID: <20210306162942.6845-37-pbhagavatula@marvell.com>
In-Reply-To: <20210306162942.6845-1-pbhagavatula@marvell.com>
References: <20210306162942.6845-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Add support for event eth Tx adapter fastpath operations.
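For reviewers unfamiliar with the adapter flow, here is a minimal
application-side sketch of how these ops are reached. It is illustrative
only and not part of this patch; app_tx_one() and its evdev_id/ev_port/
ethdev_txq parameters are hypothetical placeholders.

    /* Illustrative sketch (hypothetical helper): hand one mbuf to the
     * event eth Tx adapter; ids are placeholders, error handling omitted.
     */
    #include <rte_event_eth_tx_adapter.h>
    #include <rte_eventdev.h>
    #include <rte_mbuf.h>

    static inline uint16_t
    app_tx_one(uint8_t evdev_id, uint8_t ev_port, struct rte_mbuf *m,
               uint16_t ethdev_txq)
    {
            struct rte_event ev = {0};

            /* Pick the ethdev Tx queue; the driver reads it back with
             * rte_event_eth_tx_adapter_txq_get() in
             * cn9k/cn10k_sso_hws_xtract_meta().
             */
            rte_event_eth_tx_adapter_txq_set(m, ethdev_txq);

            ev.event_type = RTE_EVENT_TYPE_CPU;
            /* Atomic here; ordered events (sched_type == 0) additionally
             * wait for queue head before the LMT submit in
             * cn9k/cn10k_sso_hws_event_tx().
             */
            ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
            ev.mbuf = m;

            /* Ends up in the txa_enqueue fastpath op selected from
             * dev->tx_offloads in cn9k/cn10k_sso_fp_fns_set().
             */
            return rte_event_eth_tx_adapter_enqueue(evdev_id, ev_port,
                                                    &ev, 1, 0);
    }

The driver-side function is picked once at fp_fns_set() time, so the
per-offload checks cost nothing per packet.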
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/cnxk/cn10k_eventdev.c | 35 ++++++++++++
 drivers/event/cnxk/cn10k_worker.c   | 32 +++++++++++
 drivers/event/cnxk/cn10k_worker.h   | 67 ++++++++++++++++++++++
 drivers/event/cnxk/cn9k_eventdev.c  | 76 +++++++++++++++++++++++++
 drivers/event/cnxk/cn9k_worker.c    | 60 ++++++++++++++++++++
 drivers/event/cnxk/cn9k_worker.h    | 87 +++++++++++++++++++++++++++++
 6 files changed, 357 insertions(+)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 3662fd720..817dcc7cc 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -336,6 +336,22 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 #undef R
 	};
 
+	/* Tx modes */
+	const event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	[f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
+		NIX_TX_FASTPATH_MODES
+#undef T
+	};
+
+	const event_tx_adapter_enqueue sso_hws_tx_adptr_enq_seg[2][2][2][2][2] =
+		{
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	[f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
+			NIX_TX_FASTPATH_MODES
+#undef T
+		};
+
 	event_dev->enqueue = cn10k_sso_hws_enq;
 	event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
 	event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
@@ -395,6 +411,25 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 				[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
 		}
 	}
+
+	if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+		/* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
+		event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+	} else {
+		event_dev->txa_enqueue = sso_hws_tx_adptr_enq
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+	}
+
+	event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
 }
 
 static void
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 46f72cf20..ab149c5e3 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -175,3 +175,35 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
 
 NIX_RX_FASTPATH_MODES
 #undef R
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
+		void *port, struct rte_event ev[], uint16_t nb_events)         \
+	{                                                                      \
+		struct cn10k_sso_hws *ws = port;                               \
+		uint64_t cmd[sz];                                              \
+                                                                               \
+		RTE_SET_USED(nb_events);                                       \
+		return cn10k_sso_hws_event_tx(                                 \
+			ws, &ev[0], cmd,                                       \
+			(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \
+				ws->tx_adptr_data,                             \
+			flags);                                                \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
+		void *port, struct rte_event ev[], uint16_t nb_events)         \
+	{                                                                      \
+		uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];           \
+		struct cn10k_sso_hws *ws = port;                               \
+                                                                               \
+		RTE_SET_USED(nb_events);                                       \
+		return cn10k_sso_hws_event_tx(                                 \
+			ws, &ev[0], cmd,                                       \
+			(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \
+				ws->tx_adptr_data,                             \
+			(flags) | NIX_TX_MULTI_SEG_F);                         \
+	}
+
+NIX_TX_FASTPATH_MODES
+#undef T
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 9521a5c94..ebfd5dee9 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -11,6 +11,7 @@
 
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
+#include "cn10k_tx.h"
 
 /* SSO Operations */
 
@@ -239,4 +240,70 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
 NIX_RX_FASTPATH_MODES
 #undef R
 
+static __rte_always_inline const struct cn10k_eth_txq *
+cn10k_sso_hws_xtract_meta(struct rte_mbuf *m,
+			  const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
+{
+	return (const struct cn10k_eth_txq *)
+		txq_data[m->port][rte_event_eth_tx_adapter_txq_get(m)];
+}
+
+static __rte_always_inline uint16_t
+cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
+		       uint64_t *cmd,
+		       const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
+		       const uint32_t flags)
+{
+	const struct cn10k_eth_txq *txq;
+	struct rte_mbuf *m = ev->mbuf;
+	uint16_t ref_cnt = m->refcnt;
+	uintptr_t lmt_addr;
+	uint16_t lmt_id;
+	uintptr_t pa;
+
+	lmt_addr = ws->lmt_base;
+	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	txq = cn10k_sso_hws_xtract_meta(m, txq_data);
+	cn10k_nix_tx_skeleton(txq, cmd, flags);
+	/* Perform header writes before barrier for TSO */
+	if (flags & NIX_TX_OFFLOAD_TSO_F)
+		cn10k_nix_xmit_prepare_tso(m, flags);
+
+	cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags);
+	if (flags & NIX_TX_MULTI_SEG_F) {
+		const uint16_t segdw =
+			cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+		pa = txq->io_addr | ((segdw - 1) << 4);
+	} else {
+		pa = txq->io_addr | (cn10k_nix_tx_ext_subs(flags) + 1) << 4;
+	}
+	if (!ev->sched_type)
+		cnxk_sso_hws_head_wait(ws->base + SSOW_LF_GWS_TAG);
+
+	roc_lmt_submit_steorl(lmt_id, pa);
+
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+		if (ref_cnt > 1)
+			return 1;
+	}
+
+	cnxk_sso_hws_swtag_flush(ws->base + SSOW_LF_GWS_TAG,
+				 ws->base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
+
+	return 1;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
+		void *port, struct rte_event ev[], uint16_t nb_events);        \
+	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
+		void *port, struct rte_event ev[], uint16_t nb_events);        \
+	uint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_##name(             \
+		void *port, struct rte_event ev[], uint16_t nb_events);        \
+	uint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_seg_##name(         \
+		void *port, struct rte_event ev[], uint16_t nb_events);
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
 #endif
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 33b3b6237..39e5e516d 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -427,6 +427,38 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 #undef R
 	};
 
+	/* Tx modes */
+	const event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	[f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
+		NIX_TX_FASTPATH_MODES
+#undef T
+	};
+
+	const event_tx_adapter_enqueue sso_hws_tx_adptr_enq_seg[2][2][2][2][2] =
+		{
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	[f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
+			NIX_TX_FASTPATH_MODES
+#undef T
+		};
+
+	const event_tx_adapter_enqueue
+		sso_hws_dual_tx_adptr_enq[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	[f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
+			NIX_TX_FASTPATH_MODES
+#undef T
+		};
+
+	const event_tx_adapter_enqueue
+		sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	[f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
+			NIX_TX_FASTPATH_MODES
+#undef T
+		};
+
 	event_dev->enqueue = cn9k_sso_hws_enq;
 	event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
 	event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
@@ -487,6 +519,23 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 		}
 	}
 
+	if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+		/* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
+		event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+	} else {
+		event_dev->txa_enqueue = sso_hws_tx_adptr_enq
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+	}
+
 	if (dev->dual_ws) {
 		event_dev->enqueue = cn9k_sso_hws_dual_enq;
 		event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
@@ -567,8 +616,35 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 							NIX_RX_OFFLOAD_RSS_F)];
 		}
 	}
+
+		if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
+			/* [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM]
+			 */
+			event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq_seg
+				[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+		} else {
+			event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq
+				[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+				[!!(dev->tx_offloads &
+				    NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+		}
 	}
+
+	event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
 	rte_mb();
 }
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index fb572c7c9..b19078618 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -376,3 +376,63 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
 
 NIX_RX_FASTPATH_MODES
 #undef R
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
+		void *port, struct rte_event ev[], uint16_t nb_events)         \
+	{                                                                      \
+		struct cn9k_sso_hws *ws = port;                                \
+		uint64_t cmd[sz];                                              \
+                                                                               \
+		RTE_SET_USED(nb_events);                                       \
+		return cn9k_sso_hws_event_tx(                                  \
+			ws->base, &ev[0], cmd,                                 \
+			(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \
+				ws->tx_adptr_data,                             \
+			flags);                                                \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
+		void *port, struct rte_event ev[], uint16_t nb_events)         \
+	{                                                                      \
+		uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];           \
+		struct cn9k_sso_hws *ws = port;                                \
+                                                                               \
+		RTE_SET_USED(nb_events);                                       \
+		return cn9k_sso_hws_event_tx(                                  \
+			ws->base, &ev[0], cmd,                                 \
+			(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \
+				ws->tx_adptr_data,                             \
+			(flags) | NIX_TX_MULTI_SEG_F);                         \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name(              \
+		void *port, struct rte_event ev[], uint16_t nb_events)         \
+	{                                                                      \
+		struct cn9k_sso_hws_dual *ws = port;                           \
+		uint64_t cmd[sz];                                              \
+                                                                               \
+		RTE_SET_USED(nb_events);                                       \
+		return cn9k_sso_hws_event_tx(                                  \
+			ws->base[!ws->vws], &ev[0], cmd,                       \
+			(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \
+				ws->tx_adptr_data,                             \
+			flags);                                                \
+	}                                                                      \
+                                                                               \
+	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name(          \
+		void *port, struct rte_event ev[], uint16_t nb_events)         \
+	{                                                                      \
+		uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];           \
+		struct cn9k_sso_hws_dual *ws = port;                           \
+                                                                               \
+		RTE_SET_USED(nb_events);                                       \
+		return cn9k_sso_hws_event_tx(                                  \
+			ws->base[!ws->vws], &ev[0], cmd,                       \
+			(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \
+				ws->tx_adptr_data,                             \
+			(flags) | NIX_TX_MULTI_SEG_F);                         \
+	}
+
+NIX_TX_FASTPATH_MODES
+#undef T
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index bbdca3c95..382910a25 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -11,6 +11,7 @@
 
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
+#include "cn9k_tx.h"
 
 /* SSO Operations */
 
@@ -400,4 +401,90 @@ NIX_RX_FASTPATH_MODES
 NIX_RX_FASTPATH_MODES
 #undef R
 
+static __rte_always_inline const struct cn9k_eth_txq *
+cn9k_sso_hws_xtract_meta(struct rte_mbuf *m,
+			 const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
+{
+	return (const struct cn9k_eth_txq *)
+		txq_data[m->port][rte_event_eth_tx_adapter_txq_get(m)];
+}
+
+static __rte_always_inline void
+cn9k_sso_hws_prepare_pkt(const struct cn9k_eth_txq *txq, struct rte_mbuf *m,
+			 uint64_t *cmd, const uint32_t flags)
+{
+	roc_lmt_mov(cmd, txq->cmd, cn9k_nix_tx_ext_subs(flags));
+	cn9k_nix_xmit_prepare(m, cmd, flags);
+}
+
+static __rte_always_inline uint16_t
+cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
+		      const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
+		      const uint32_t flags)
+{
+	struct rte_mbuf *m = ev->mbuf;
+	const struct cn9k_eth_txq *txq;
+	uint16_t ref_cnt = m->refcnt;
+
+	/* Perform header writes before barrier for TSO */
+	cn9k_nix_xmit_prepare_tso(m, flags);
+	/* Commit any changes to the packet here, as no further changes
+	 * to the mbuf will be made when fast free is set. When fast free
+	 * is not set, both cn9k_nix_prepare_mseg() and
+	 * cn9k_nix_xmit_prepare() have a barrier after the refcnt update.
+	 */
+	if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+		rte_io_wmb();
+	txq = cn9k_sso_hws_xtract_meta(m, txq_data);
+	cn9k_sso_hws_prepare_pkt(txq, m, cmd, flags);
+
+	if (flags & NIX_TX_MULTI_SEG_F) {
+		const uint16_t segdw = cn9k_nix_prepare_mseg(m, cmd, flags);
+		if (!ev->sched_type) {
+			cn9k_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw);
+			cnxk_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);
+			if (cn9k_nix_xmit_submit_lmt(txq->io_addr) == 0)
+				cn9k_nix_xmit_mseg_one(cmd, txq->lmt_addr,
+						       txq->io_addr, segdw);
+		} else {
+			cn9k_nix_xmit_mseg_one(cmd, txq->lmt_addr, txq->io_addr,
+					       segdw);
+		}
+	} else {
+		if (!ev->sched_type) {
+			cn9k_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags);
+			cnxk_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);
+			if (cn9k_nix_xmit_submit_lmt(txq->io_addr) == 0)
+				cn9k_nix_xmit_one(cmd, txq->lmt_addr,
+						  txq->io_addr, flags);
+		} else {
+			cn9k_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr,
+					  flags);
+		}
+	}
+
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+		if (ref_cnt > 1)
+			return 1;
+	}
+
+	cnxk_sso_hws_swtag_flush(base + SSOW_LF_GWS_TAG,
+				 base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
+
+	return 1;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \
+	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
+		void *port, struct rte_event ev[], uint16_t nb_events);        \
+	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
+		void *port, struct rte_event ev[], uint16_t nb_events);        \
+	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name(              \
+		void *port, struct rte_event ev[], uint16_t nb_events);        \
+	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name(          \
+		void *port, struct rte_event ev[], uint16_t nb_events);
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
 #endif
-- 
2.17.1