From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh
To: gage.eads@intel.com, jerin.jacobkollanukkaran@cavium.com,
	harry.van.haaren@intel.com, hemant.agrawal@nxp.com,
	liang.j.ma@intel.com, santosh.shukla@caviumnetworks.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Wed, 10 Jan 2018 16:40:06 +0530
Message-Id: <20180110111013.14644-8-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180110111013.14644-1-pbhagavatula@caviumnetworks.com>
References: <20171207203705.25020-1-pbhagavatula@caviumnetworks.com>
 <20180110111013.14644-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 08/15] examples/eventdev: add thread safe Tx
 worker pipeline
List-Id: DPDK patches and discussions

Add a worker pipeline for the case where Tx is multi-thread safe. Probe the
Ethernet device capabilities and select the Tx worker pipeline if it is
supported.
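
For context, the capability probe described above boils down to a single pass
over all Ethernet devices. A minimal standalone sketch (the helper name
tx_is_mt_safe() is hypothetical; rte_eth_dev_info_get() and the
DEV_TX_OFFLOAD_MT_LOCKFREE flag are the DPDK APIs this series relies on):

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: return 1 when every ethdev advertises DEV_TX_OFFLOAD_MT_LOCKFREE,
 * i.e. multiple lcores may call rte_eth_tx_burst() on the same Tx queue
 * without locking, so workers can transmit directly and no dedicated Tx
 * core is needed.
 */
static int
tx_is_mt_safe(uint16_t nb_ethdev)
{
    uint16_t i;

    for (i = 0; i < nb_ethdev; i++) {
        struct rte_eth_dev_info dev_info;

        memset(&dev_info, 0, sizeof(dev_info));
        rte_eth_dev_info_get(i, &dev_info);
        if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MT_LOCKFREE))
            return 0; /* this port needs a locked/single-core Tx path */
    }
    return 1;
}

do_capability_setup() in main.c below performs the same test inline and falls
back to the generic pipeline (single dedicated Tx core) whenever any port
lacks the offload.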
Signed-off-by: Pavan Nikhilesh
---
 v2 Changes:
  - Redo function names (Harry)

 examples/eventdev_pipeline_sw_pmd/Makefile         |   1 +
 examples/eventdev_pipeline_sw_pmd/main.c           |  18 +-
 .../eventdev_pipeline_sw_pmd/pipeline_common.h     |   5 +
 .../eventdev_pipeline_sw_pmd/pipeline_worker_tx.c  | 425 +++++++++++++++++++++
 4 files changed, 447 insertions(+), 2 deletions(-)
 create mode 100644 examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c

diff --git a/examples/eventdev_pipeline_sw_pmd/Makefile b/examples/eventdev_pipeline_sw_pmd/Makefile
index 5e30556fb..59ee9840a 100644
--- a/examples/eventdev_pipeline_sw_pmd/Makefile
+++ b/examples/eventdev_pipeline_sw_pmd/Makefile
@@ -43,6 +43,7 @@ APP = eventdev_pipeline_sw_pmd
 # all source are stored in SRCS-y
 SRCS-y := main.c
 SRCS-y += pipeline_worker_generic.c
+SRCS-y += pipeline_worker_tx.c
 
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 947c5f786..f877e695b 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -381,9 +381,20 @@ init_ports(unsigned int num_ports)
 static void
 do_capability_setup(uint16_t nb_ethdev, uint8_t eventdev_id)
 {
-    RTE_SET_USED(nb_ethdev);
+    int i;
+    uint8_t mt_unsafe = 0;
     uint8_t burst = 0;
 
+    for (i = 0; i < nb_ethdev; i++) {
+        struct rte_eth_dev_info dev_info;
+        memset(&dev_info, 0, sizeof(struct rte_eth_dev_info));
+
+        rte_eth_dev_info_get(i, &dev_info);
+        /* Check if it is safe to ask the worker to tx. */
+        mt_unsafe |= !(dev_info.tx_offload_capa &
+                DEV_TX_OFFLOAD_MT_LOCKFREE);
+    }
+
     struct rte_event_dev_info eventdev_info;
     memset(&eventdev_info, 0, sizeof(struct rte_event_dev_info));
 
@@ -391,7 +402,10 @@ do_capability_setup(uint16_t nb_ethdev, uint8_t eventdev_id)
     burst = eventdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE ?
         1 : 0;
 
-    set_worker_generic_setup_data(&fdata->cap, burst);
+    if (mt_unsafe)
+        set_worker_generic_setup_data(&fdata->cap, burst);
+    else
+        set_worker_tx_setup_data(&fdata->cap, burst);
 }
 
 static void
diff --git a/examples/eventdev_pipeline_sw_pmd/pipeline_common.h b/examples/eventdev_pipeline_sw_pmd/pipeline_common.h
index d58059b78..e06320050 100644
--- a/examples/eventdev_pipeline_sw_pmd/pipeline_common.h
+++ b/examples/eventdev_pipeline_sw_pmd/pipeline_common.h
@@ -79,6 +79,10 @@ struct config_data {
     int dump_dev_signal;
     unsigned int num_stages;
     unsigned int worker_cq_depth;
+    unsigned int rx_stride;
+    /* Use rx stride value to reduce congestion in entry queue when using
+     * multiple eth ports by forming multiple event queue pipelines.
+     */
     int16_t next_qid[MAX_NUM_STAGES+2];
     int16_t qid[MAX_NUM_STAGES];
     uint8_t rx_adapter_id;
@@ -144,3 +148,4 @@ schedule_devices(unsigned int lcore_id)
 }
 
 void set_worker_generic_setup_data(struct setup_data *caps, bool burst);
+void set_worker_tx_setup_data(struct setup_data *caps, bool burst);
diff --git a/examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c b/examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c
new file mode 100644
index 000000000..397b1013f
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/pipeline_worker_tx.c
@@ -0,0 +1,425 @@
+/*
+ * SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ * Copyright 2017 Cavium, Inc.
+ */
+
+#include "pipeline_common.h"
+
+static __rte_always_inline void
+worker_fwd_event(struct rte_event *ev, uint8_t sched)
+{
+    ev->event_type = RTE_EVENT_TYPE_CPU;
+    ev->op = RTE_EVENT_OP_FORWARD;
+    ev->sched_type = sched;
+}
+
+static __rte_always_inline void
+worker_event_enqueue(const uint8_t dev, const uint8_t port,
+        struct rte_event *ev)
+{
+    while (rte_event_enqueue_burst(dev, port, ev, 1) != 1)
+        rte_pause();
+}
+
+static __rte_always_inline void
+worker_tx_pkt(struct rte_mbuf *mbuf)
+{
+    exchange_mac(mbuf);
+    while (rte_eth_tx_burst(mbuf->port, 0, &mbuf, 1) != 1)
+        rte_pause();
+}
+
+/* Multi stage Pipeline Workers */
+
+static int
+worker_do_tx(void *arg)
+{
+    struct rte_event ev;
+
+    struct worker_data *data = (struct worker_data *)arg;
+    const uint8_t dev = data->dev_id;
+    const uint8_t port = data->port_id;
+    const uint8_t lst_qid = cdata.num_stages - 1;
+    size_t fwd = 0, received = 0, tx = 0;
+
+    while (!fdata->done) {
+
+        if (!rte_event_dequeue_burst(dev, port, &ev, 1, 0)) {
+            rte_pause();
+            continue;
+        }
+
+        received++;
+        const uint8_t cq_id = ev.queue_id % cdata.num_stages;
+
+        if (cq_id >= lst_qid) {
+            if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+                worker_tx_pkt(ev.mbuf);
+                tx++;
+                continue;
+            }
+
+            worker_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
+            ev.queue_id = (cq_id == lst_qid) ?
+                cdata.next_qid[ev.queue_id] : ev.queue_id;
+        } else {
+            ev.queue_id = cdata.next_qid[ev.queue_id];
+            worker_fwd_event(&ev, cdata.queue_type);
+        }
+        work();
+
+        worker_event_enqueue(dev, port, &ev);
+        fwd++;
+    }
+
+    if (!cdata.quiet)
+        printf(" worker %u thread done. RX=%zu FWD=%zu TX=%zu\n",
+                rte_lcore_id(), received, fwd, tx);
+
+    return 0;
+}
+
+static int
+setup_eventdev_worker_tx(struct cons_data *cons_data,
+        struct worker_data *worker_data)
+{
+    RTE_SET_USED(cons_data);
+    uint8_t i;
+    const uint8_t dev_id = 0;
+    const uint8_t nb_ports = cdata.num_workers;
+    uint8_t nb_slots = 0;
+    uint8_t nb_queues = rte_eth_dev_count() * cdata.num_stages;
+    nb_queues += rte_eth_dev_count();
+
+    struct rte_event_dev_config config = {
+            .nb_event_queues = nb_queues,
+            .nb_event_ports = nb_ports,
+            .nb_events_limit = 4096,
+            .nb_event_queue_flows = 1024,
+            .nb_event_port_dequeue_depth = 128,
+            .nb_event_port_enqueue_depth = 128,
+    };
+    struct rte_event_port_conf wkr_p_conf = {
+            .dequeue_depth = cdata.worker_cq_depth,
+            .enqueue_depth = 64,
+            .new_event_threshold = 4096,
+    };
+    struct rte_event_queue_conf wkr_q_conf = {
+            .schedule_type = cdata.queue_type,
+            .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+            .nb_atomic_flows = 1024,
+            .nb_atomic_order_sequences = 1024,
+    };
+
+    int ret, ndev = rte_event_dev_count();
+
+    if (ndev < 1) {
+        printf("%d: No Eventdev Devices Found\n", __LINE__);
+        return -1;
+    }
+
+    struct rte_event_dev_info dev_info;
+    ret = rte_event_dev_info_get(dev_id, &dev_info);
+    printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+    if (dev_info.max_event_port_dequeue_depth <
+            config.nb_event_port_dequeue_depth)
+        config.nb_event_port_dequeue_depth =
+                dev_info.max_event_port_dequeue_depth;
+    if (dev_info.max_event_port_enqueue_depth <
+            config.nb_event_port_enqueue_depth)
+        config.nb_event_port_enqueue_depth =
+                dev_info.max_event_port_enqueue_depth;
+
+    ret = rte_event_dev_configure(dev_id, &config);
+    if (ret < 0) {
+        printf("%d: Error configuring device\n", __LINE__);
+        return -1;
+    }
+
+    printf(" Stages:\n");
+    for (i = 0; i < nb_queues; i++) {
+
+        uint8_t slot;
+
+        nb_slots = cdata.num_stages + 1;
+        slot = i % nb_slots;
+        wkr_q_conf.schedule_type = slot == cdata.num_stages ?
+            RTE_SCHED_TYPE_ATOMIC : cdata.queue_type;
+
+        if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+            printf("%d: error creating qid %d\n", __LINE__, i);
+            return -1;
+        }
+        cdata.qid[i] = i;
+        cdata.next_qid[i] = i+1;
+        if (cdata.enable_queue_priorities) {
+            const uint32_t prio_delta =
+                (RTE_EVENT_DEV_PRIORITY_LOWEST) / nb_slots;
+
+            /* higher priority for queues closer to tx */
+            wkr_q_conf.priority =
+                RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta *
+                (i % nb_slots);
+        }
+
+        const char *type_str = "Atomic";
+        switch (wkr_q_conf.schedule_type) {
+        case RTE_SCHED_TYPE_ORDERED:
+            type_str = "Ordered";
+            break;
+        case RTE_SCHED_TYPE_PARALLEL:
+            type_str = "Parallel";
+            break;
+        }
+        printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+                wkr_q_conf.priority);
+    }
+
+    printf("\n");
+    if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+        wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+    if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+        wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+    /* set up one port per worker, linking to all stage queues */
+    for (i = 0; i < cdata.num_workers; i++) {
+        struct worker_data *w = &worker_data[i];
+        w->dev_id = dev_id;
+        if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+            printf("Error setting up port %d\n", i);
+            return -1;
+        }
+
+        if (rte_event_port_link(dev_id, i, NULL, NULL, 0)
+                != nb_queues) {
+            printf("%d: error creating link for port %d\n",
+                    __LINE__, i);
+            return -1;
+        }
+        w->port_id = i;
+    }
+    /*
+     * Reduce the load on the ingress event queue by splitting the
+     * traffic across multiple event queues.
+     * For example, with nb_stages = 2 and nb_ethdev = 2:
+     *
+     *    nb_queues = (2 * 2) + 2 = 6 (non atq)
+     *    rx_stride = 3
+     *
+     * So traffic is split across queue 0 and queue 3, since the queue id
+     * for the rx adapter is chosen as eth_port_id * rx_stride; i.e. in
+     * the above case eth ports 0 and 1 will inject packets into event
+     * queues 0 and 3 respectively.
+     *
+     * This forms two sets of queue pipelines 0->1->2->tx and 3->4->5->tx.
+     */
+    cdata.rx_stride = nb_slots;
+    ret = rte_event_dev_service_id_get(dev_id,
+            &fdata->evdev_service_id);
+    if (ret != -ESRCH && ret != 0) {
+        printf("Error getting the service ID\n");
+        return -1;
+    }
+    rte_service_runstate_set(fdata->evdev_service_id, 1);
+    rte_service_set_runstate_mapped_check(fdata->evdev_service_id, 0);
+    if (rte_event_dev_start(dev_id) < 0) {
+        printf("Error starting eventdev\n");
+        return -1;
+    }
+
+    return dev_id;
+}
+
+struct rx_adptr_services {
+    uint16_t nb_rx_adptrs;
+    uint32_t *rx_adpt_arr;
+};
+
+static int32_t
+service_rx_adapter(void *arg)
+{
+    int i;
+    struct rx_adptr_services *adptr_services = arg;
+
+    for (i = 0; i < adptr_services->nb_rx_adptrs; i++)
+        rte_service_run_iter_on_app_lcore(
+                adptr_services->rx_adpt_arr[i], 1);
+    return 0;
+}
+
+static void
+init_rx_adapter(uint16_t nb_ports)
+{
+    int i;
+    int ret;
+    uint8_t evdev_id = 0;
+    struct rx_adptr_services *adptr_services = NULL;
+    struct rte_event_dev_info dev_info;
+
+    ret = rte_event_dev_info_get(evdev_id, &dev_info);
+    adptr_services = rte_zmalloc(NULL, sizeof(struct rx_adptr_services), 0);
+
+    struct rte_event_port_conf rx_p_conf = {
+        .dequeue_depth = 8,
+        .enqueue_depth = 8,
+        .new_event_threshold = 1200,
+    };
+
+    if (rx_p_conf.dequeue_depth > dev_info.max_event_port_dequeue_depth)
+        rx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+    if (rx_p_conf.enqueue_depth > dev_info.max_event_port_enqueue_depth)
+        rx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+
+    struct rte_event_eth_rx_adapter_queue_conf queue_conf = {
+        .ev.sched_type = cdata.queue_type,
+    };
+
+    for (i = 0; i < nb_ports; i++) {
+        uint32_t cap;
+        uint32_t service_id;
+
+        ret = rte_event_eth_rx_adapter_create(i, evdev_id, &rx_p_conf);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                    "failed to create rx adapter[%d]",
+                    cdata.rx_adapter_id);
+
+        ret = rte_event_eth_rx_adapter_caps_get(evdev_id, i, &cap);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                    "failed to get event rx adapter "
+                    "capabilities");
+
+        queue_conf.ev.queue_id = cdata.rx_stride ?
+            (i * cdata.rx_stride)
+            : (uint8_t)cdata.qid[0];
+
+        ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &queue_conf);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                    "Failed to add queues to Rx adapter");
+
+        /* Producer needs to be scheduled. */
+        if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) {
+            ret = rte_event_eth_rx_adapter_service_id_get(i,
+                    &service_id);
+            if (ret != -ESRCH && ret != 0) {
+                rte_exit(EXIT_FAILURE,
+                    "Error getting the service ID for rx adptr\n");
+            }
+
+            rte_service_runstate_set(service_id, 1);
+            rte_service_set_runstate_mapped_check(service_id, 0);
+
+            adptr_services->nb_rx_adptrs++;
+            adptr_services->rx_adpt_arr = rte_realloc(
+                    adptr_services->rx_adpt_arr,
+                    adptr_services->nb_rx_adptrs *
+                    sizeof(uint32_t), 0);
+            adptr_services->rx_adpt_arr[
+                adptr_services->nb_rx_adptrs - 1] =
+                service_id;
+        }
+
+        ret = rte_event_eth_rx_adapter_start(i);
+        if (ret)
+            rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
+                    cdata.rx_adapter_id);
+    }
+
+    if (adptr_services->nb_rx_adptrs) {
+        struct rte_service_spec service;
+
+        memset(&service, 0, sizeof(struct rte_service_spec));
+        snprintf(service.name, sizeof(service.name), "rx_service");
+        service.callback = service_rx_adapter;
+        service.callback_userdata = (void *)adptr_services;
+
+        int32_t ret = rte_service_component_register(&service,
+                &fdata->rxadptr_service_id);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                "Rx adapter[%d] service register failed",
+                cdata.rx_adapter_id);
+
+        rte_service_runstate_set(fdata->rxadptr_service_id, 1);
+        rte_service_component_runstate_set(fdata->rxadptr_service_id,
+                1);
+        rte_service_set_runstate_mapped_check(fdata->rxadptr_service_id,
+                0);
+    } else {
+        memset(fdata->rx_core, 0, sizeof(unsigned int) * MAX_NUM_CORE);
+    }
+
+    if (!adptr_services->nb_rx_adptrs && fdata->cap.consumer == NULL &&
+            (dev_info.event_dev_cap &
+             RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))
+        fdata->cap.scheduler = NULL;
+
+    /* Free the unused service list only after its last use above. */
+    if (!adptr_services->nb_rx_adptrs)
+        rte_free(adptr_services);
+
+    if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
+        memset(fdata->sched_core, 0,
+                sizeof(unsigned int) * MAX_NUM_CORE);
+}
+
+static void
+worker_tx_opt_check(void)
+{
+    int i;
+    int ret;
+    uint32_t cap = 0;
+    uint8_t rx_needed = 0;
+    struct rte_event_dev_info eventdev_info;
+
+    memset(&eventdev_info, 0, sizeof(struct rte_event_dev_info));
+    rte_event_dev_info_get(0, &eventdev_info);
+
+    for (i = 0; i < rte_eth_dev_count(); i++) {
+        ret = rte_event_eth_rx_adapter_caps_get(0, i, &cap);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                    "failed to get event rx adapter "
+                    "capabilities");
+        rx_needed |=
+            !(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT);
+    }
+
+    if (cdata.worker_lcore_mask == 0 ||
+            (rx_needed && cdata.rx_lcore_mask == 0) ||
+            (cdata.sched_lcore_mask == 0 &&
+             !(eventdev_info.event_dev_cap &
+                 RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))) {
+        printf("Core part of pipeline was not assigned any cores. "
+            "This will stall the pipeline, please check core masks "
+            "(use -h for details on setting core masks):\n"
+            "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+            "\n\tworkers: %"PRIu64"\n",
+            cdata.rx_lcore_mask, cdata.tx_lcore_mask,
+            cdata.sched_lcore_mask,
+            cdata.worker_lcore_mask);
+        rte_exit(-1, "Fix core masks\n");
+    }
+}
+
+void
+set_worker_tx_setup_data(struct setup_data *caps, bool burst)
+{
+    RTE_SET_USED(burst);
+    caps->worker = worker_do_tx;
+
+    memset(fdata->tx_core, 0, sizeof(unsigned int) * MAX_NUM_CORE);
+
+    caps->check_opt = worker_tx_opt_check;
+    caps->consumer = NULL;
+    caps->scheduler = schedule_devices;
+    caps->evdev_setup = setup_eventdev_worker_tx;
+    caps->adptr_setup = init_rx_adapter;
+}
-- 
2.15.1
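
P.S. on the queue layout: the stride arithmetic from the comment in
setup_eventdev_worker_tx() can be checked with a few lines of plain C
(standalone sketch, no DPDK calls; the numbers mirror the nb_stages = 2,
nb_ethdev = 2 example above):

#include <stdio.h>

int
main(void)
{
    /* One chain of worker-stage queues per port, plus one atomic Tx
     * queue each; the Rx adapter injects port p at queue p * rx_stride.
     */
    const unsigned int nb_eth = 2, nb_stages = 2;
    const unsigned int nb_queues = nb_eth * nb_stages + nb_eth; /* 6 */
    const unsigned int rx_stride = nb_stages + 1;               /* 3 */
    unsigned int p;

    for (p = 0; p < nb_eth; p++)
        printf("eth port %u -> event queue %u\n", p, p * rx_stride);
    printf("total event queues: %u\n", nb_queues);
    return 0;
}

Each port thus drives its own pipeline (0->1->2->tx and 3->4->5->tx), which
spreads ingress load instead of funnelling every port into event queue 0.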