From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from NAM02-BL2-obe.outbound.protection.outlook.com
 (mail-bl2nam02on0058.outbound.protection.outlook.com [104.47.38.58])
 by dpdk.org (Postfix) with ESMTP id 674E12B89
 for ; Thu, 7 Dec 2017 21:39:04 +0100 (CET)
From: Pavan Nikhilesh
To: gage.eads@intel.com, jerin.jacobkollanukkaran@cavium.com,
 harry.van.haaren@intel.com, nikhil.rao@intel.com, hemant.agrawal@nxp.com,
 liang.j.ma@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Fri, 8 Dec 2017 02:06:59 +0530
Message-Id: <20171207203705.25020-8-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171207203705.25020-1-pbhagavatula@caviumnetworks.com>
References: <20171207203705.25020-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 07/13] examples/eventdev: add thread safe Tx worker
 pipeline
List-Id: DPDK patches and discussions
X-List-Received-Date: Thu, 07 Dec 2017 20:39:04 -0000

Add a worker pipeline for the case where Tx is multi-thread safe. Probe the
Ethernet device capabilities and select it if it is supported.

Signed-off-by: Pavan Nikhilesh
---
 examples/eventdev_pipeline_sw_pmd/Makefile         |   1 +
 examples/eventdev_pipeline_sw_pmd/main.c           |  18 +-
 .../eventdev_pipeline_sw_pmd/pipeline_common.h     |   2 +
 .../eventdev_pipeline_sw_pmd/pipeline_worker_tx.c  | 433 +++++++++++++++++++++
 4 files changed, 452 insertions(+), 2 deletions(-)
 create mode 100644 examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c

diff --git a/examples/eventdev_pipeline_sw_pmd/Makefile b/examples/eventdev_pipeline_sw_pmd/Makefile
index 5e30556fb..59ee9840a 100644
--- a/examples/eventdev_pipeline_sw_pmd/Makefile
+++ b/examples/eventdev_pipeline_sw_pmd/Makefile
@@ -43,6 +43,7 @@ APP = eventdev_pipeline_sw_pmd
 # all source are stored in SRCS-y
 SRCS-y := main.c
 SRCS-y += pipeline_worker_generic.c
+SRCS-y += pipeline_worker_tx.c

 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 153467893..3be981c15 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -382,9 +382,20 @@ init_ports(unsigned int num_ports)
 static void
 do_capability_setup(uint16_t nb_ethdev, uint8_t eventdev_id)
 {
-	RTE_SET_USED(nb_ethdev);
+	int i;
+	uint8_t mt_unsafe = 0;
 	uint8_t burst = 0;

+	for (i = 0; i < nb_ethdev; i++) {
+		struct rte_eth_dev_info dev_info;
+		memset(&dev_info, 0, sizeof(struct rte_eth_dev_info));
+
+		rte_eth_dev_info_get(i,
&dev_info);
+		/* Check if it is safe to ask the worker to do Tx. */
+		mt_unsafe |= !(dev_info.tx_offload_capa &
+				DEV_TX_OFFLOAD_MT_LOCKFREE);
+	}
+
 	struct rte_event_dev_info eventdev_info;
 	memset(&eventdev_info, 0, sizeof(struct rte_event_dev_info));

@@ -392,7 +403,10 @@ do_capability_setup(uint16_t nb_ethdev, uint8_t eventdev_id)
 	burst = eventdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE ? 1 :
 		0;

-	set_worker_generic_setup_data(&fdata->cap, burst);
+	if (mt_unsafe)
+		set_worker_generic_setup_data(&fdata->cap, burst);
+	else
+		set_worker_tx_setup_data(&fdata->cap, burst);
 }

 static void
diff --git a/examples/eventdev_pipeline_sw_pmd/pipeline_common.h b/examples/eventdev_pipeline_sw_pmd/pipeline_common.h
index a5837c99b..0b27d1eb0 100644
--- a/examples/eventdev_pipeline_sw_pmd/pipeline_common.h
+++ b/examples/eventdev_pipeline_sw_pmd/pipeline_common.h
@@ -108,6 +108,7 @@ struct config_data {
 	int dump_dev_signal;
 	unsigned int num_stages;
 	unsigned int worker_cq_depth;
+	unsigned int rx_stride;
 	int16_t next_qid[MAX_NUM_STAGES+2];
 	int16_t qid[MAX_NUM_STAGES];
 	uint8_t rx_adapter_id;
@@ -178,3 +179,4 @@ schedule_devices(unsigned int lcore_id)
 }

 void set_worker_generic_setup_data(struct setup_data *caps, bool burst);
+void set_worker_tx_setup_data(struct setup_data *caps, bool burst);
diff --git a/examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c b/examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c
new file mode 100644
index 000000000..31b7d8936
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c
@@ -0,0 +1,433 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Cavium, Inc.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "pipeline_common.h"
+
+static __rte_always_inline void
+worker_fwd_event(struct rte_event *ev, uint8_t sched)
+{
+	ev->event_type = RTE_EVENT_TYPE_CPU;
+	ev->op = RTE_EVENT_OP_FORWARD;
+	ev->sched_type = sched;
+}
+
+static __rte_always_inline void
+worker_event_enqueue(const uint8_t dev, const uint8_t port,
+		struct rte_event *ev)
+{
+	while (rte_event_enqueue_burst(dev, port, ev, 1) != 1)
+		rte_pause();
+}
+
+static __rte_always_inline void
+worker_tx_pkt(struct rte_mbuf *mbuf)
+{
+	while (rte_eth_tx_burst(mbuf->port, 0, &mbuf, 1) != 1)
+		rte_pause();
+}
+
+/* Multi stage Pipeline Workers */
+
+static int
+worker_do_tx(void *arg)
+{
+	struct rte_event ev;
+
+	struct worker_data *data = (struct worker_data *)arg;
+	const uint8_t dev = data->dev_id;
+	const uint8_t port = data->port_id;
+	const uint8_t lst_qid = cdata.num_stages - 1;
+	size_t fwd = 0, received = 0, tx = 0;
+
+	while (!fdata->done) {
+
+		if (!rte_event_dequeue_burst(dev, port, &ev, 1, 0)) {
+			rte_pause();
+			continue;
+		}
+
+		received++;
+		const uint8_t cq_id = ev.queue_id % cdata.num_stages;
+
+		if (cq_id >= lst_qid) {
+			if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+				worker_tx_pkt(ev.mbuf);
+				tx++;
+				continue;
+			}
+
+			worker_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
+			ev.queue_id = (cq_id == lst_qid) ?
+				cdata.next_qid[ev.queue_id] : ev.queue_id;
+		} else {
+			ev.queue_id = cdata.next_qid[ev.queue_id];
+			worker_fwd_event(&ev, cdata.queue_type);
+		}
+		work(ev.mbuf);
+
+		worker_event_enqueue(dev, port, &ev);
+		fwd++;
+	}
+
+	if (!cdata.quiet)
+		printf(" worker %u thread done. 
RX=%zu FWD=%zu TX=%zu\n",
+				rte_lcore_id(), received, fwd, tx);
+
+	return 0;
+}
+
+static int
+setup_eventdev_w(struct prod_data *prod_data,
+		struct cons_data *cons_data,
+		struct worker_data *worker_data)
+{
+	RTE_SET_USED(prod_data);
+	RTE_SET_USED(cons_data);
+	uint8_t i;
+	const uint8_t dev_id = 0;
+	const uint8_t nb_ports = cdata.num_workers;
+	uint8_t nb_slots = 0;
+	uint8_t nb_queues = rte_eth_dev_count() * cdata.num_stages;
+
+	struct rte_event_dev_config config = {
+			.nb_event_queues = nb_queues,
+			.nb_event_ports = nb_ports,
+			.nb_events_limit = 4096,
+			.nb_event_queue_flows = 1024,
+			.nb_event_port_dequeue_depth = 128,
+			.nb_event_port_enqueue_depth = 128,
+	};
+	struct rte_event_port_conf wkr_p_conf = {
+			.dequeue_depth = cdata.worker_cq_depth,
+			.enqueue_depth = 64,
+			.new_event_threshold = 4096,
+	};
+	struct rte_event_queue_conf wkr_q_conf = {
+			.schedule_type = cdata.queue_type,
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.nb_atomic_flows = 1024,
+			.nb_atomic_order_sequences = 1024,
+	};
+
+	int ret, ndev = rte_event_dev_count();
+
+	if (ndev < 1) {
+		printf("%d: No Eventdev Devices Found\n", __LINE__);
+		return -1;
+	}
+
+	struct rte_event_dev_info dev_info;
+	ret = rte_event_dev_info_get(dev_id, &dev_info);
+	printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+	if (dev_info.max_event_port_dequeue_depth <
+			config.nb_event_port_dequeue_depth)
+		config.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+	if (dev_info.max_event_port_enqueue_depth <
+			config.nb_event_port_enqueue_depth)
+		config.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	ret = rte_event_dev_configure(dev_id, &config);
+	if (ret < 0) {
+		printf("%d: Error configuring device\n", __LINE__);
+		return -1;
+	}
+
+	printf("  Stages:\n");
+	for (i = 0; i < nb_queues; i++) {
+
+		uint8_t slot;
+
+		nb_slots = cdata.num_stages + 1;
+		slot = i % nb_slots;
+		wkr_q_conf.schedule_type = slot == cdata.num_stages ?
+			RTE_SCHED_TYPE_ATOMIC : cdata.queue_type;
+
+		if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+			printf("%d: error creating qid %d\n", __LINE__, i);
+			return -1;
+		}
+		cdata.qid[i] = i;
+		cdata.next_qid[i] = i+1;
+		if (cdata.enable_queue_priorities) {
+			const uint32_t prio_delta =
+				(RTE_EVENT_DEV_PRIORITY_LOWEST) /
+				nb_slots;
+
+			/* higher priority for queues closer to tx */
+			wkr_q_conf.priority =
+				RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta *
+				(i % nb_slots);
+		}
+
+		const char *type_str = "Atomic";
+		switch (wkr_q_conf.schedule_type) {
+		case RTE_SCHED_TYPE_ORDERED:
+			type_str = "Ordered";
+			break;
+		case RTE_SCHED_TYPE_PARALLEL:
+			type_str = "Parallel";
+			break;
+		}
+		printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+				wkr_q_conf.priority);
+	}
+
+	printf("\n");
+	if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+		wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+	if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+		wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+	/* set up one port per worker, linking to all stage queues */
+	for (i = 0; i < cdata.num_workers; i++) {
+		struct worker_data *w = &worker_data[i];
+		w->dev_id = dev_id;
+		if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+			printf("Error setting up port %d\n", i);
+			return -1;
+		}
+
+		if (rte_event_port_link(dev_id, i, NULL, NULL, 0)
+				!= nb_queues) {
+			printf("%d: error creating link for port %d\n",
+					__LINE__, i);
+			return -1;
+		}
+		w->port_id = i;
+	}
+
+	cdata.rx_stride = nb_slots;
+	ret = rte_event_dev_service_id_get(dev_id,
+			&fdata->evdev_service_id);
+	if (ret != -ESRCH && ret != 0) {
+		printf("Error getting the service ID for sw eventdev\n");
+		return -1;
+	}
+	rte_service_runstate_set(fdata->evdev_service_id, 1);
+	rte_service_set_runstate_mapped_check(fdata->evdev_service_id, 0);
+	if (rte_event_dev_start(dev_id) < 0) {
+		printf("Error starting eventdev\n");
+		return -1;
+	}
+
+	return dev_id;
+}
+
+
+struct rx_adptr_services {
+	uint16_t nb_rx_adptrs;
+	uint32_t *rx_adpt_arr;
+};
+
+static int32_t
+service_rx_adapter(void *arg)
+{
+	int i;
+	struct rx_adptr_services *adptr_services = arg;
+
+	for (i = 0; i < adptr_services->nb_rx_adptrs; i++)
+		rte_service_run_iter_on_app_lcore(
+				adptr_services->rx_adpt_arr[i], 1);
+	return 0;
+}
+
+static void
+init_rx_adapter(uint16_t nb_ports)
+{
+	int i;
+	int ret;
+	uint8_t evdev_id = 0;
+	struct rx_adptr_services *adptr_services = NULL;
+	struct rte_event_dev_info dev_info;
+
+	ret = rte_event_dev_info_get(evdev_id, &dev_info);
+	adptr_services = rte_zmalloc(NULL, sizeof(struct rx_adptr_services), 0);
+
+	struct rte_event_port_conf rx_p_conf = {
+		.dequeue_depth = 8,
+		.enqueue_depth = 8,
+		.new_event_threshold = 1200,
+	};
+
+	if (rx_p_conf.dequeue_depth > dev_info.max_event_port_dequeue_depth)
+		rx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	if (rx_p_conf.enqueue_depth > dev_info.max_event_port_enqueue_depth)
+		rx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+	struct rte_event_eth_rx_adapter_queue_conf queue_conf = {
+		.ev.sched_type = cdata.queue_type,
+	};
+
+	for (i = 0; i < nb_ports; i++) {
+		uint32_t cap;
+		uint32_t service_id;
+
+		ret = rte_event_eth_rx_adapter_create(i, evdev_id, &rx_p_conf);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+					"failed to create rx adapter[%d]",
+					cdata.rx_adapter_id);
+
+		ret = rte_event_eth_rx_adapter_caps_get(evdev_id, i, &cap);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+					"failed to get event rx adapter "
+					"capabilities");
+
+		queue_conf.ev.queue_id = cdata.rx_stride ?
+			(i * cdata.rx_stride)
+			: (uint8_t)cdata.qid[0];
+
+		ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &queue_conf);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+					"Failed to add queues to Rx adapter");
+
+		/* Producer needs to be scheduled. 
*/
+		if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) {
+			ret = rte_event_eth_rx_adapter_service_id_get(i,
+					&service_id);
+			if (ret != -ESRCH && ret != 0) {
+				rte_exit(EXIT_FAILURE,
+				"Error getting the service ID for rx adptr\n");
+			}
+
+			rte_service_runstate_set(service_id, 1);
+			rte_service_set_runstate_mapped_check(service_id, 0);
+
+			adptr_services->nb_rx_adptrs++;
+			adptr_services->rx_adpt_arr = rte_realloc(
+					adptr_services->rx_adpt_arr,
+					adptr_services->nb_rx_adptrs *
+					sizeof(uint32_t), 0);
+			adptr_services->rx_adpt_arr[
+				adptr_services->nb_rx_adptrs - 1] =
+				service_id;
+		}
+
+		ret = rte_event_eth_rx_adapter_start(i);
+		if (ret)
+			rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
+					cdata.rx_adapter_id);
+	}
+
+	prod_data.dev_id = evdev_id;
+	prod_data.qid = 0;
+
+	if (adptr_services->nb_rx_adptrs) {
+		struct rte_service_spec service;
+
+		memset(&service, 0, sizeof(struct rte_service_spec));
+		snprintf(service.name, sizeof(service.name), "rx_service");
+		service.callback = service_rx_adapter;
+		service.callback_userdata = (void *)adptr_services;
+
+		int32_t ret = rte_service_component_register(&service,
+				&fdata->rxadptr_service_id);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+				"Rx adapter[%d] service register failed",
+				cdata.rx_adapter_id);
+
+		rte_service_runstate_set(fdata->rxadptr_service_id, 1);
+		rte_service_component_runstate_set(fdata->rxadptr_service_id,
+				1);
+		rte_service_set_runstate_mapped_check(fdata->rxadptr_service_id,
+				0);
+	} else {
+		/* Decide schedule_loop before freeing; adptr_services must
+		 * not be dereferenced after rte_free().
+		 */
+		if (fdata->cap.consumer_loop == NULL &&
+				(dev_info.event_dev_cap &
+				 RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))
+			fdata->cap.schedule_loop = NULL;
+		rte_free(adptr_services);
+	}
+}
+
+static void
+opt_check(void)
+{
+	int i;
+	int ret;
+	uint32_t cap = 0;
+	uint8_t rx_needed = 0;
+	struct rte_event_dev_info eventdev_info;
+
+	memset(&eventdev_info, 0, sizeof(struct rte_event_dev_info));
+	rte_event_dev_info_get(0, &eventdev_info);
+
+	for (i = 0; i < rte_eth_dev_count(); 
i++) {
+		ret = rte_event_eth_rx_adapter_caps_get(0, i, &cap);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+					"failed to get event rx adapter "
+					"capabilities");
+		rx_needed |=
+			!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT);
+	}
+
+	if (cdata.worker_lcore_mask == 0 ||
+			(rx_needed && cdata.rx_lcore_mask == 0) ||
+			(cdata.sched_lcore_mask == 0 &&
+			 !(eventdev_info.event_dev_cap &
+				 RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))) {
+		printf("Core part of pipeline was not assigned any cores. "
+			"This will stall the pipeline, please check core masks "
+			"(use -h for details on setting core masks):\n"
+			"\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+			"\n\tworkers: %"PRIu64"\n",
+			cdata.rx_lcore_mask, cdata.tx_lcore_mask,
+			cdata.sched_lcore_mask,
+			cdata.worker_lcore_mask);
+		rte_exit(-1, "Fix core masks\n");
+	}
+}
+
+void
+set_worker_tx_setup_data(struct setup_data *caps, bool burst)
+{
+	RTE_SET_USED(burst);
+	caps->worker_loop = worker_do_tx;
+
+	caps->opt_check = opt_check;
+	caps->consumer_loop = NULL;
+	caps->schedule_loop = schedule_devices;
+	caps->eventdev_setup = setup_eventdev_w;
+	caps->rx_adapter_setup = init_rx_adapter;
+}
-- 
2.14.1