From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sunil Kumar Kori
Date: Fri, 15 Dec 2017 18:29:31 +0530
Message-ID: <20171215125933.14302-5-sunil.kori@nxp.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20171215125933.14302-1-sunil.kori@nxp.com>
References: <20171215125933.14302-1-sunil.kori@nxp.com>
Subject: [dpdk-dev] [PATCH 4/6] event/dpaa: add eventdev PMD
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
X-List-Received-Date: Fri, 15 Dec 2017 12:59:49 -0000

Signed-off-by: Sunil Kumar Kori
---
 drivers/event/Makefile                            |   1 +
 drivers/event/dpaa/Makefile                       |  37 ++
 drivers/event/dpaa/dpaa_eventdev.c                | 652 ++++++++++++++++++++++
 drivers/event/dpaa/dpaa_eventdev.h                |  86 +++
 drivers/event/dpaa/rte_pmd_dpaa_event_version.map |   4 +
 5 files changed, 780 insertions(+)
 create mode 100644 drivers/event/dpaa/Makefile
 create mode 100644 drivers/event/dpaa/dpaa_eventdev.c
 create mode 100644 drivers/event/dpaa/dpaa_eventdev.h
 create mode 100644 drivers/event/dpaa/rte_pmd_dpaa_event_version.map

diff --git a/drivers/event/Makefile b/drivers/event/Makefile
index 1f9c0ba..c726234 100644
--- a/drivers/event/Makefile
+++ b/drivers/event/Makefile
@@ -35,5 +35,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += sw
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += octeontx
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += dpaa2
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV) += dpaa

 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/dpaa/Makefile b/drivers/event/dpaa/Makefile
new file mode 100644
index 0000000..bd0b6c9
--- /dev/null
+++ b/drivers/event/dpaa/Makefile
@@ -0,0 +1,37 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2017 NXP
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa_event.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -Wno-pointer-arith
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_event_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV) += dpaa_eventdev.c
+
+LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
+LDLIBS += -lrte_eventdev -lrte_pmd_dpaa -lrte_bus_vdev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
new file mode 100644
index 0000000..371599f
--- /dev/null
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -0,0 +1,652 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include "dpaa_eventdev.h"
+#include
+
+/*
+ * Clarifications
+ * Eventdev = Virtual Instance for SoC
+ * Eventport = Portal Instance
+ * Eventqueue = Channel Instance
+ * 1 Eventdev can have N Eventqueues
+ */
+
+static int
+dpaa_event_dequeue_timeout_ticks(struct rte_eventdev *dev, uint64_t ns,
+				 uint64_t *timeout_ticks)
+{
+	uint64_t cycles_per_second;
+
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+
+	cycles_per_second = rte_get_timer_hz();
+	*timeout_ticks = ns * (cycles_per_second / NS_PER_S);
+
+	return 0;
+}
+
+static void
+dpaa_eventq_portal_add(u16 ch_id)
+{
+	uint32_t sdqcr;
+
+	sdqcr = QM_SDQCR_CHANNELS_POOL_CONV(ch_id);
+	qman_static_dequeue_add(sdqcr, NULL);
+}
+
+static uint16_t
+dpaa_event_enqueue_burst(void *port, const struct rte_event ev[],
+			 uint16_t nb_events)
+{
+	uint16_t i;
+	struct rte_mbuf *mbuf;
+
+	RTE_SET_USED(port);
+	/* Release all the contexts saved previously */
+	for (i = 0; i < nb_events; i++) {
+		switch (ev[i].op) {
+		case RTE_EVENT_OP_RELEASE:
+			qman_dca_index(ev[i].impl_opaque, 0);
+			mbuf = DPAA_PER_LCORE_DQRR_MBUF(i);
+			mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+			DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i);
+			DPAA_PER_LCORE_DQRR_SIZE--;
+			break;
+		default:
+			break;
+		}
+	}
+
+	return nb_events;
+}
+
+static uint16_t
+dpaa_event_enqueue(void *port, const struct rte_event *ev)
+{
+	return dpaa_event_enqueue_burst(port, ev, 1);
+}
+
+static uint16_t
+dpaa_event_dequeue_burst(void *port, struct rte_event ev[],
+			 uint16_t nb_events, uint64_t timeout_ticks)
+{
+	int ret;
+	u16 ch_id;
+	void *buffers[8];
+	u32 num_frames, i;
+	uint64_t wait_time, cur_ticks, start_ticks;
+	struct dpaa_port *portal = (struct dpaa_port *)port;
+	struct rte_mbuf *mbuf;
+
+	/* Affine current thread context to a qman portal */
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		EVENTDEV_DRV_ERR("Unable to initialize portal");
+		return ret;
+	}
+
+	if (unlikely(!portal->is_port_linked)) {
+		/*
+		 * Affine event queue for current thread context
+		 * to a qman portal.
+		 */
+		for (i = 0; i < portal->num_linked_evq; i++) {
+			ch_id = portal->evq_info[i].ch_id;
+			dpaa_eventq_portal_add(ch_id);
+		}
+		portal->is_port_linked = true;
+	}
+
+	/* Check if there are atomic contexts to be released */
+	i = 0;
+	while (DPAA_PER_LCORE_DQRR_SIZE) {
+		if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) {
+			qman_dca_index(i, 0);
+			mbuf = DPAA_PER_LCORE_DQRR_MBUF(i);
+			mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+			DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i);
+			DPAA_PER_LCORE_DQRR_SIZE--;
+		}
+		i++;
+	}
+	DPAA_PER_LCORE_DQRR_HELD = 0;
+
+	if (portal->timeout == DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_INVALID)
+		wait_time = timeout_ticks;
+	else
+		wait_time = portal->timeout;
+
+	/* Let's dequeue the frames */
+	start_ticks = rte_get_timer_cycles();
+	wait_time += start_ticks;
+	do {
+		num_frames = qman_portal_dequeue(ev, nb_events, buffers);
+		if (num_frames != 0)
+			break;
+		cur_ticks = rte_get_timer_cycles();
+	} while (cur_ticks < wait_time);
+
+	return num_frames;
+}
+
+static uint16_t
+dpaa_event_dequeue(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+	return dpaa_event_dequeue_burst(port, ev, 1, timeout_ticks);
+}
+
+static void
+dpaa_event_dev_info_get(struct rte_eventdev *dev,
+			struct rte_event_dev_info *dev_info)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	dev_info->driver_name = "event_dpaa";
+	dev_info->min_dequeue_timeout_ns =
+		DPAA_EVENT_MIN_DEQUEUE_TIMEOUT;
+	dev_info->max_dequeue_timeout_ns =
+		DPAA_EVENT_MAX_DEQUEUE_TIMEOUT;
+	dev_info->dequeue_timeout_ns =
+		DPAA_EVENT_MIN_DEQUEUE_TIMEOUT;
+	dev_info->max_event_queues =
+		DPAA_EVENT_MAX_QUEUES;
+	dev_info->max_event_queue_flows =
+		DPAA_EVENT_MAX_QUEUE_FLOWS;
+	dev_info->max_event_queue_priority_levels =
+		DPAA_EVENT_MAX_QUEUE_PRIORITY_LEVELS;
+	dev_info->max_event_priority_levels =
+		DPAA_EVENT_MAX_EVENT_PRIORITY_LEVELS;
+	dev_info->max_event_ports =
+		DPAA_EVENT_MAX_EVENT_PORT;
+	dev_info->max_event_port_dequeue_depth =
+		DPAA_EVENT_MAX_PORT_DEQUEUE_DEPTH;
+	dev_info->max_event_port_enqueue_depth =
+		DPAA_EVENT_MAX_PORT_ENQUEUE_DEPTH;
+	/*
+	 * TODO: Need to find out how to fetch this info
+	 * from kernel or somewhere else.
+	 */
+	dev_info->max_num_events =
+		DPAA_EVENT_MAX_NUM_EVENTS;
+	dev_info->event_dev_cap =
+		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+		RTE_EVENT_DEV_CAP_BURST_MODE;
+}
+
+static int
+dpaa_event_dev_configure(const struct rte_eventdev *dev)
+{
+	struct dpaa_eventdev *priv = dev->data->dev_private;
+	struct rte_event_dev_config *conf = &dev->data->dev_conf;
+	int ret, i;
+	uint32_t *ch_id;
+
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	priv->dequeue_timeout_ns = conf->dequeue_timeout_ns;
+	priv->nb_events_limit = conf->nb_events_limit;
+	priv->nb_event_queues = conf->nb_event_queues;
+	priv->nb_event_ports = conf->nb_event_ports;
+	priv->nb_event_queue_flows = conf->nb_event_queue_flows;
+	priv->nb_event_port_dequeue_depth = conf->nb_event_port_dequeue_depth;
+	priv->nb_event_port_enqueue_depth = conf->nb_event_port_enqueue_depth;
+	priv->event_dev_cfg = conf->event_dev_cfg;
+
+	/* Check whether dequeue timeout is per dequeue or global */
+	if (priv->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) {
+		/*
+		 * Use timeout value as given in dequeue operation.
+		 * So invalidating this timeout value.
+		 */
+		priv->dequeue_timeout_ns = 0;
+	}
+
+	ch_id = rte_malloc("dpaa-channels",
+			   sizeof(uint32_t) * priv->nb_event_queues,
+			   RTE_CACHE_LINE_SIZE);
+	if (ch_id == NULL) {
+		EVENTDEV_DRV_ERR("Fail to allocate memory for dpaa channels\n");
+		return -ENOMEM;
+	}
+	/* Create requested event queues within the given event device */
+	ret = qman_alloc_pool_range(ch_id, priv->nb_event_queues, 1, 0);
+	if (ret < 0) {
+		EVENTDEV_DRV_ERR("Failed to create internal channel\n");
+		rte_free(ch_id);
+		return ret;
+	}
+	for (i = 0; i < priv->nb_event_queues; i++)
+		priv->evq_info[i].ch_id = (u16)ch_id[i];
+
+	/* Let's prepare event ports */
+	memset(&priv->ports[0], 0,
+	       sizeof(struct dpaa_port) * priv->nb_event_ports);
+	if (priv->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) {
+		for (i = 0; i < priv->nb_event_ports; i++) {
+			priv->ports[i].timeout =
+				DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_INVALID;
+		}
+	} else if (priv->dequeue_timeout_ns == 0) {
+		for (i = 0; i < priv->nb_event_ports; i++) {
+			dpaa_event_dequeue_timeout_ticks(NULL,
+				DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_NS,
+				&priv->ports[i].timeout);
+		}
+	} else {
+		for (i = 0; i < priv->nb_event_ports; i++) {
+			dpaa_event_dequeue_timeout_ticks(NULL,
+				priv->dequeue_timeout_ns,
+				&priv->ports[i].timeout);
+		}
+	}
+	/*
+	 * TODO: Currently portals are affined with threads. The maximum
+	 * number of threads that can be created equals the number of lcores.
+	 */
+	rte_free(ch_id);
+	EVENTDEV_DRV_LOG(DEBUG, "Configured eventdev devid=%d",
+			 dev->data->dev_id);
+
+	return 0;
+}
+
+static int
+dpaa_event_dev_start(struct rte_eventdev *dev)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+	RTE_SET_USED(dev);
+
+	return 0;
+}
+
+static void
+dpaa_event_dev_stop(struct rte_eventdev *dev)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+	RTE_SET_USED(dev);
+}
+
+static int
+dpaa_event_dev_close(struct rte_eventdev *dev)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+	RTE_SET_USED(dev);
+
+	return 0;
+}
+
+static void
+dpaa_event_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
+			  struct rte_event_queue_conf *queue_conf)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(queue_id);
+
+	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
+	queue_conf->schedule_type = RTE_SCHED_TYPE_PARALLEL;
+	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+}
+
+static int
+dpaa_event_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
+		       const struct rte_event_queue_conf *queue_conf)
+{
+	struct dpaa_eventdev *priv = dev->data->dev_private;
+	struct dpaa_eventq *evq_info = &priv->evq_info[queue_id];
+
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	switch (queue_conf->schedule_type) {
+	case RTE_SCHED_TYPE_PARALLEL:
+	case RTE_SCHED_TYPE_ATOMIC:
+		break;
+	case RTE_SCHED_TYPE_ORDERED:
+		EVENTDEV_DRV_ERR("Schedule type is not supported.");
+		return -1;
+	}
+	evq_info->event_queue_cfg = queue_conf->event_queue_cfg;
+	evq_info->event_queue_id = queue_id;
+
+	return 0;
+}
+
+static void
+dpaa_event_queue_release(struct rte_eventdev *dev, uint8_t queue_id)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(queue_id);
+}
+
+static void
+dpaa_event_port_default_conf_get(struct rte_eventdev *dev, uint8_t port_id,
+				 struct rte_event_port_conf *port_conf)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(port_id);
+
+	port_conf->new_event_threshold = DPAA_EVENT_MAX_NUM_EVENTS;
+	port_conf->dequeue_depth = DPAA_EVENT_MAX_PORT_DEQUEUE_DEPTH;
+	port_conf->enqueue_depth = DPAA_EVENT_MAX_PORT_ENQUEUE_DEPTH;
+}
+
+static int
+dpaa_event_port_setup(struct rte_eventdev *dev, uint8_t port_id,
+		      const struct rte_event_port_conf *port_conf)
+{
+	struct dpaa_eventdev *eventdev = dev->data->dev_private;
+
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(port_conf);
+	dev->data->ports[port_id] = &eventdev->ports[port_id];
+
+	return 0;
+}
+
+static void
+dpaa_event_port_release(void *port)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(port);
+}
+
+static int
+dpaa_event_port_link(struct rte_eventdev *dev, void *port,
+		     const uint8_t queues[], const uint8_t priorities[],
+		     uint16_t nb_links)
+{
+	struct dpaa_eventdev *priv = dev->data->dev_private;
+	struct dpaa_port *event_port = (struct dpaa_port *)port;
+	struct dpaa_eventq *event_queue;
+	uint8_t eventq_id;
+	int i;
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(priorities);
+
+	/* First check that the input configuration is valid */
+	for (i = 0; i < nb_links; i++) {
+		eventq_id = queues[i];
+		event_queue = &priv->evq_info[eventq_id];
+		if ((event_queue->event_queue_cfg
+			& RTE_EVENT_QUEUE_CFG_SINGLE_LINK)
+			&& (event_queue->event_port)) {
+			return -EINVAL;
+		}
+	}
+
+	for (i = 0; i < nb_links; i++) {
+		eventq_id = queues[i];
+		event_queue = &priv->evq_info[eventq_id];
+		event_port->evq_info[i].event_queue_id = eventq_id;
+		event_port->evq_info[i].ch_id = event_queue->ch_id;
+		event_queue->event_port = port;
+	}
+
+	event_port->num_linked_evq = event_port->num_linked_evq + i;
+
+	return (int)i;
+}
+
+static int
+dpaa_event_port_unlink(struct rte_eventdev *dev, void *port,
+		       uint8_t queues[], uint16_t nb_links)
+{
+	int i;
+	uint8_t eventq_id;
+	struct dpaa_eventq *event_queue;
+	struct dpaa_eventdev *priv = dev->data->dev_private;
+	struct dpaa_port *event_port = (struct dpaa_port *)port;
+
+	if (!event_port->num_linked_evq)
+		return nb_links;
+
+	for (i = 0; i < nb_links; i++) {
+		eventq_id = queues[i];
+		event_port->evq_info[eventq_id].event_queue_id = -1;
+		event_port->evq_info[eventq_id].ch_id = 0;
+		event_queue = &priv->evq_info[eventq_id];
+		event_queue->event_port = NULL;
+	}
+
+	event_port->num_linked_evq = event_port->num_linked_evq - i;
+
+	return (int)i;
+}
+
+static int
+dpaa_event_eth_rx_adapter_caps_get(const struct rte_eventdev *dev,
+				   const struct rte_eth_dev *eth_dev,
+				   uint32_t *caps)
+{
+	const char *ethdev_driver = eth_dev->device->driver->name;
+
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+
+	if (!strcmp(ethdev_driver, "net_dpaa"))
+		*caps = RTE_EVENT_ETH_RX_ADAPTER_DPAA_CAP;
+	else
+		*caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
+
+	return 0;
+}
+
+static int
+dpaa_event_eth_rx_adapter_queue_add(
+		const struct rte_eventdev *dev,
+		const struct rte_eth_dev *eth_dev,
+		int32_t rx_queue_id,
+		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+	struct dpaa_eventdev *eventdev = dev->data->dev_private;
+	uint8_t ev_qid = queue_conf->ev.queue_id;
+	u16 ch_id = eventdev->evq_info[ev_qid].ch_id;
+	struct dpaa_if *dpaa_intf = eth_dev->data->dev_private;
+	int ret, i;
+
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	if (rx_queue_id == -1) {
+		for (i = 0; i < dpaa_intf->nb_rx_queues; i++) {
+			ret = dpaa_eth_eventq_attach(eth_dev, i, ch_id,
+						     queue_conf);
+			if (ret) {
+				EVENTDEV_DRV_ERR(
+					"Event Queue attach failed:%d\n", ret);
+				goto detach_configured_queues;
+			}
+		}
+		return 0;
+	}
+
+	ret = dpaa_eth_eventq_attach(eth_dev, rx_queue_id, ch_id, queue_conf);
+	if (ret)
+		EVENTDEV_DRV_ERR("dpaa_eth_eventq_attach failed:%d\n", ret);
+	return ret;
+
+detach_configured_queues:
+
+	for (i = (i - 1); i >= 0; i--)
+		dpaa_eth_eventq_detach(eth_dev, i);
+
+	return ret;
+}
+
+static int
+dpaa_event_eth_rx_adapter_queue_del(const struct rte_eventdev *dev,
+				    const struct rte_eth_dev *eth_dev,
+				    int32_t rx_queue_id)
+{
+	int ret, i;
+	struct dpaa_if *dpaa_intf = eth_dev->data->dev_private;
+
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	if (rx_queue_id == -1) {
+		for (i = 0; i < dpaa_intf->nb_rx_queues; i++) {
+			ret = dpaa_eth_eventq_detach(eth_dev, i);
+			if (ret)
+				EVENTDEV_DRV_ERR(
+					"Event Queue detach failed:%d\n", ret);
+		}
+
+		return 0;
+	}
+
+	ret = dpaa_eth_eventq_detach(eth_dev, rx_queue_id);
+	if (ret)
+		EVENTDEV_DRV_ERR("dpaa_eth_eventq_detach failed:%d\n", ret);
+	return ret;
+}
+
+static int
+dpaa_event_eth_rx_adapter_start(const struct rte_eventdev *dev,
+				const struct rte_eth_dev *eth_dev)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(eth_dev);
+
+	return 0;
+}
+
+static int
+dpaa_event_eth_rx_adapter_stop(const struct rte_eventdev *dev,
+			       const struct rte_eth_dev *eth_dev)
+{
+	EVENTDEV_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(eth_dev);
+
+	return 0;
+}
+
+static const struct rte_eventdev_ops dpaa_eventdev_ops = {
+	.dev_infos_get = dpaa_event_dev_info_get,
+	.dev_configure = dpaa_event_dev_configure,
+	.dev_start = dpaa_event_dev_start,
+	.dev_stop = dpaa_event_dev_stop,
+	.dev_close = dpaa_event_dev_close,
+	.queue_def_conf = dpaa_event_queue_def_conf,
+	.queue_setup = dpaa_event_queue_setup,
+	.queue_release = dpaa_event_queue_release,
+	.port_def_conf = dpaa_event_port_default_conf_get,
+	.port_setup = dpaa_event_port_setup,
+	.port_release = dpaa_event_port_release,
+	.port_link = dpaa_event_port_link,
+	.port_unlink = dpaa_event_port_unlink,
+	.timeout_ticks = dpaa_event_dequeue_timeout_ticks,
+	.eth_rx_adapter_caps_get = dpaa_event_eth_rx_adapter_caps_get,
+	.eth_rx_adapter_queue_add = dpaa_event_eth_rx_adapter_queue_add,
+	.eth_rx_adapter_queue_del = dpaa_event_eth_rx_adapter_queue_del,
+	.eth_rx_adapter_start = dpaa_event_eth_rx_adapter_start,
+	.eth_rx_adapter_stop = dpaa_event_eth_rx_adapter_stop,
+};
+
+static int
+dpaa_event_dev_create(const char *name)
+{
+	struct rte_eventdev *eventdev;
+	struct dpaa_eventdev *priv;
+
+	eventdev = rte_event_pmd_vdev_init(name,
+					   sizeof(struct dpaa_eventdev),
+					   rte_socket_id());
+	if (eventdev == NULL) {
+		EVENTDEV_DRV_ERR("Failed to create eventdev vdev %s", name);
+		goto fail;
+	}
+
+	eventdev->dev_ops = &dpaa_eventdev_ops;
+	eventdev->enqueue = dpaa_event_enqueue;
+	eventdev->enqueue_burst = dpaa_event_enqueue_burst;
+	eventdev->dequeue = dpaa_event_dequeue;
+	eventdev->dequeue_burst = dpaa_event_dequeue_burst;
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	priv = eventdev->data->dev_private;
+	priv->max_event_queues = DPAA_EVENT_MAX_QUEUES;
+
+	return 0;
+fail:
+	return -EFAULT;
+}
+
+static int
+dpaa_event_dev_probe(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	EVENTDEV_DRV_LOG(INFO, "Initializing %s", name);
+
+	return dpaa_event_dev_create(name);
+}
+
+static int
+dpaa_event_dev_remove(struct rte_vdev_device *vdev)
+{
+	const char *name;
+
+	name = rte_vdev_device_name(vdev);
+	EVENTDEV_DRV_LOG(INFO, "Closing %s", name);
+
+	return rte_event_pmd_vdev_uninit(name);
+}
+
+static struct rte_vdev_driver vdev_eventdev_dpaa_pmd = {
+	.probe = dpaa_event_dev_probe,
+	.remove = dpaa_event_dev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(EVENTDEV_NAME_DPAA_PMD, vdev_eventdev_dpaa_pmd);
diff --git a/drivers/event/dpaa/dpaa_eventdev.h b/drivers/event/dpaa/dpaa_eventdev.h
new file mode 100644
index 0000000..965c1fd
--- /dev/null
+++ b/drivers/event/dpaa/dpaa_eventdev.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP
+ */
+
+#ifndef __DPAA_EVENTDEV_H__
+#define __DPAA_EVENTDEV_H__
+
+#include
+#include
+#include
+#include
+
+#define EVENTDEV_NAME_DPAA_PMD event_dpaa
+
+#ifdef RTE_LIBRTE_PMD_DPAA_EVENTDEV_DEBUG
+#define EVENTDEV_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, EVENTDEV, "%s(): " fmt "\n", __func__, ## args)
+#define EVENTDEV_DRV_FUNC_TRACE() EVENTDEV_DRV_LOG(DEBUG, ">>")
+#else
+#define EVENTDEV_DRV_LOG(level, fmt, args...) do { } while (0)
+#define EVENTDEV_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#define EVENTDEV_DRV_ERR(fmt, args...) \
+	RTE_LOG(ERR, EVENTDEV, "%s(): " fmt "\n", __func__, ## args)
+
+#define DPAA_EVENT_MAX_PORTS			8
+#define DPAA_EVENT_MAX_QUEUES			16
+#define DPAA_EVENT_MIN_DEQUEUE_TIMEOUT		1
+#define DPAA_EVENT_MAX_DEQUEUE_TIMEOUT		(UINT32_MAX - 1)
+#define DPAA_EVENT_MAX_QUEUE_FLOWS		2048
+#define DPAA_EVENT_MAX_QUEUE_PRIORITY_LEVELS	8
+#define DPAA_EVENT_MAX_EVENT_PRIORITY_LEVELS	0
+#define DPAA_EVENT_MAX_EVENT_PORT		RTE_MAX_LCORE
+#define DPAA_EVENT_MAX_PORT_DEQUEUE_DEPTH	8
+#define DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_NS	100UL
+#define DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_INVALID	((uint64_t)-1)
+#define DPAA_EVENT_MAX_PORT_ENQUEUE_DEPTH	1
+#define DPAA_EVENT_MAX_NUM_EVENTS		(INT32_MAX - 1)
+
+#define DPAA_EVENT_DEV_CAP \
+	( \
+	RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED | \
+	RTE_EVENT_DEV_CAP_BURST_MODE \
+	)
+
+#define DPAA_EVENT_QUEUE_ATOMIC_FLOWS		0
+#define DPAA_EVENT_QUEUE_ORDER_SEQUENCES	2048
+
+#define RTE_EVENT_ETH_RX_ADAPTER_DPAA_CAP \
+		(RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT | \
+		RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ | \
+		RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID)
+
+struct dpaa_eventq {
+	/* Channel Id */
+	uint16_t ch_id;
+	/* Configuration provided by the user */
+	uint32_t event_queue_cfg;
+	uint32_t event_queue_id;
+	/* Event port */
+	void *event_port;
+};
+
+struct dpaa_port {
+	struct dpaa_eventq evq_info[DPAA_EVENT_MAX_QUEUES];
+	uint8_t num_linked_evq;
+	uint8_t is_port_linked;
+	uint64_t timeout;
+};
+
+struct dpaa_eventdev {
+	struct dpaa_eventq evq_info[DPAA_EVENT_MAX_QUEUES];
+	struct dpaa_port ports[DPAA_EVENT_MAX_PORTS];
+	uint32_t dequeue_timeout_ns;
+	uint32_t nb_events_limit;
+	uint8_t max_event_queues;
+	uint8_t nb_event_queues;
+	uint8_t nb_event_ports;
+	uint8_t resvd;
+	uint32_t nb_event_queue_flows;
+	uint32_t nb_event_port_dequeue_depth;
+	uint32_t nb_event_port_enqueue_depth;
+	uint32_t event_dev_cfg;
+};
+#endif /* __DPAA_EVENTDEV_H__ */
diff --git a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+	local: *;
+};
-- 
2.9.3