From mboxrd@z Thu Jan 1 00:00:00 1970
From:
To:
CC: , Pavan Nikhilesh
Date: Fri, 28 Jun 2019 13:19:58 +0530
Message-ID: <20190628075024.404-20-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190628075024.404-1-pbhagavatula@marvell.com>
References: <20190628075024.404-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 19/44] event/octeontx2: add worker dual GWS enqueue functions
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Pavan Nikhilesh

Add dual workslot mode event enqueue functions.

Signed-off-by: Pavan Nikhilesh
Signed-off-by: Jerin Jacob
---
 drivers/event/octeontx2/otx2_evdev.h       |   9 ++
 drivers/event/octeontx2/otx2_worker_dual.c | 135 +++++++++++++++++++++
 2 files changed, 144 insertions(+)

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 72de9ace5..fd2a4c330 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -187,6 +187,7 @@ parse_kvargs_value(const char *key, const char *value, void *opaque)
 	return 0;
 }
 
+/* Single WS API's */
 uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev);
 uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
 			       uint16_t nb_events);
@@ -204,6 +205,14 @@ uint16_t otx2_ssogws_deq_timeout(void *port, struct rte_event *ev,
 uint16_t otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[],
 				       uint16_t nb_events,
 				       uint64_t timeout_ticks);
+/* Dual WS API's */
+uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev);
+uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
+				    uint16_t nb_events);
+uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+					uint16_t nb_events);
+uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+					uint16_t nb_events);
 
 /* Init and Fini API's */
 int otx2_sso_init(struct rte_eventdev *event_dev);
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
index f762436aa..661c78c23 100644
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ b/drivers/event/octeontx2/otx2_worker_dual.c
@@ -4,3 +4,138 @@
 
 #include "otx2_worker_dual.h"
 #include "otx2_worker.h"
+
+static __rte_noinline uint8_t
+otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws,
+			   const struct rte_event *ev)
+{
+	const uint32_t tag = (uint32_t)ev->event;
+	const uint8_t new_tt = ev->sched_type;
+	const uint64_t event_ptr = ev->u64;
+	const uint16_t grp = ev->queue_id;
+
+	if (ws->xaq_lmt <= *ws->fc_mem)
+		return 0;
+
+	otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp);
+
+	return 1;
+}
+
+static __rte_always_inline void
+otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws,
+			   const struct rte_event *ev)
+{
+	const uint32_t tag = (uint32_t)ev->event;
+	const uint8_t new_tt = ev->sched_type;
+	const uint8_t cur_tt = ws->cur_tt;
+
+	/* 96XX model
+	 * cur_tt/new_tt     SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
+	 *
+	 * SSO_SYNC_ORDERED        norm           norm             untag
+	 * SSO_SYNC_ATOMIC         norm           norm             untag
+	 * SSO_SYNC_UNTAGGED       norm           norm             NOOP
+	 */
+	if (new_tt == SSO_SYNC_UNTAGGED) {
+		if (cur_tt != SSO_SYNC_UNTAGGED)
+			otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws);
+	} else {
+		otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt);
+	}
+}
+
+static __rte_always_inline void
+otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws,
+			   const struct rte_event *ev, const uint16_t grp)
+{
+	const uint32_t tag = (uint32_t)ev->event;
+	const uint8_t new_tt = ev->sched_type;
+
+	otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
+		     SSOW_LF_GWS_OP_UPD_WQP_GRP1);
+	rte_smp_wmb();
+	otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp);
+}
+
+static __rte_always_inline void
+otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws,
+			       struct otx2_ssogws_state *vws,
+			       const struct rte_event *ev)
+{
+	const uint8_t grp = ev->queue_id;
+
+	/* Group hasn't changed, Use SWTAG to forward the event */
+	if (vws->cur_grp == grp) {
+		otx2_ssogws_dual_fwd_swtag(vws, ev);
+		ws->swtag_req = 1;
+	} else {
+		/*
+		 * Group has been changed for group based work pipelining,
+		 * Use deschedule/add_work operation to transfer the event to
+		 * new group/core
+		 */
+		otx2_ssogws_dual_fwd_group(vws, ev, grp);
+	}
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq(void *port, const struct rte_event *ev)
+{
+	struct otx2_ssogws_dual *ws = port;
+	struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
+
+	switch (ev->op) {
+	case RTE_EVENT_OP_NEW:
+		rte_smp_mb();
+		return otx2_ssogws_dual_new_event(ws, ev);
+	case RTE_EVENT_OP_FORWARD:
+		otx2_ssogws_dual_forward_event(ws, vws, ev);
+		break;
+	case RTE_EVENT_OP_RELEASE:
+		otx2_ssogws_swtag_flush((struct otx2_ssogws *)vws);
+		break;
+	default:
+		return 0;
+	}
+
+	return 1;
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
+			   uint16_t nb_events)
+{
+	RTE_SET_USED(nb_events);
+	return otx2_ssogws_dual_enq(port, ev);
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
+			       uint16_t nb_events)
+{
+	struct otx2_ssogws_dual *ws = port;
+	uint16_t i, rc = 1;
+
+	rte_smp_mb();
+	if (ws->xaq_lmt <= *ws->fc_mem)
+		return 0;
+
+	for (i = 0; i < nb_events && rc; i++)
+		rc = otx2_ssogws_dual_new_event(ws, &ev[i]);
+
+	return nb_events;
+}
+
+uint16_t __hot
+otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
+			       uint16_t nb_events)
+{
+	struct otx2_ssogws_dual *ws = port;
+	struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
+
+	RTE_SET_USED(nb_events);
+	otx2_ssogws_dual_forward_event(ws, vws, ev);
+
+	return 1;
+}
-- 
2.22.0