From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, santosh.shukla@caviumnetworks.com,
	bruce.richardson@intel.com, harry.van.haaren@intel.com,
	gage.eads@intel.com, hemant.agrawal@nxp.com, nipun.gupta@nxp.com,
	liang.j.ma@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Tue, 19 Dec 2017 03:14:02 +0530
Message-Id: <20171218214405.26763-9-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171218214405.26763-1-pbhagavatula@caviumnetworks.com>
References: <20171130072406.15605-1-pbhagavatula@caviumnetworks.com>
	<20171218214405.26763-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 09/12] app/eventdev: add pipeline queue worker
	functions
List-Id: DPDK patches and discussions
X-List-Received-Date: Mon, 18 Dec 2017 21:45:26 -0000

Signed-off-by: Pavan Nikhilesh
---
 app/test-eventdev/test_pipeline_common.h |  80 +++++++
 app/test-eventdev/test_pipeline_queue.c  | 367 ++++++++++++++++++++++++++++++-
 2 files changed, 446 insertions(+), 1 deletion(-)

diff --git a/app/test-eventdev/test_pipeline_common.h b/app/test-eventdev/test_pipeline_common.h
index 26d265a3d..009b20a7d 100644
--- a/app/test-eventdev/test_pipeline_common.h
+++ b/app/test-eventdev/test_pipeline_common.h
@@ -78,6 +78,86 @@ struct test_pipeline {
 	uint8_t sched_type_list[EVT_MAX_STAGES] __rte_cache_aligned;
 } __rte_cache_aligned;
 
+#define BURST_SIZE 16
+
+static __rte_always_inline void
+pipeline_fwd_event(struct rte_event *ev, uint8_t sched)
+{
+	ev->event_type = RTE_EVENT_TYPE_CPU;
+	ev->op = RTE_EVENT_OP_FORWARD;
+	ev->sched_type = sched;
+}
+
+static __rte_always_inline void
+pipeline_event_enqueue(const uint8_t dev, const uint8_t port,
+		struct rte_event *ev)
+{
+	while (rte_event_enqueue_burst(dev, port, ev, 1) != 1)
+		rte_pause();
+}
+
+static __rte_always_inline void
+pipeline_event_enqueue_burst(const uint8_t dev, const uint8_t port,
+		struct rte_event *ev, const uint16_t nb_rx)
+{
+	uint16_t enq;
+
+	enq = rte_event_enqueue_burst(dev, port, ev, nb_rx);
+	while (enq < nb_rx) {
+		enq += rte_event_enqueue_burst(dev, port,
+				ev + enq, nb_rx - enq);
+	}
+}
+
+static __rte_always_inline void
+pipeline_tx_pkt_safe(struct rte_mbuf *mbuf)
+{
+	while (rte_eth_tx_burst(mbuf->port, 0, &mbuf, 1) != 1)
+		rte_pause();
+}
+
+static __rte_always_inline void
+pipeline_tx_pkt_unsafe(struct rte_mbuf *mbuf, struct test_pipeline *t)
+{
+	rte_spinlock_t *lk = &t->tx_lk[mbuf->port];
+
+	rte_spinlock_lock(lk);
+	pipeline_tx_pkt_safe(mbuf);
+	rte_spinlock_unlock(lk);
+}
+
+static __rte_always_inline void
+pipeline_tx_unsafe_burst(struct rte_mbuf *mbuf, struct test_pipeline *t)
+{
+	uint16_t port = mbuf->port;
+	rte_spinlock_t *lk = &t->tx_lk[port];
+
+	rte_spinlock_lock(lk);
+	rte_eth_tx_buffer(port, 0, t->tx_buf[port], mbuf);
+	rte_spinlock_unlock(lk);
+}
+
+static __rte_always_inline void
+pipeline_tx_flush(struct test_pipeline *t, const uint8_t nb_ports)
+{
+	int i;
+	rte_spinlock_t *lk;
+
+	for (i = 0; i < nb_ports; i++) {
+		lk = &t->tx_lk[i];
+
+		rte_spinlock_lock(lk);
+		rte_eth_tx_buffer_flush(i, 0, t->tx_buf[i]);
+		rte_spinlock_unlock(lk);
+	}
+}
+
+static inline int
+pipeline_nb_event_ports(struct evt_options *opt)
+{
+	return evt_nr_active_lcores(opt->wlcores);
+}
+
 int pipeline_test_result(struct evt_test *test, struct evt_options *opt);
 int pipeline_opt_check(struct evt_options *opt, uint64_t nb_queues);
 int pipeline_test_setup(struct evt_test *test, struct evt_options *opt);
diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
index 851027cb7..f89adc4b4 100644
--- a/app/test-eventdev/test_pipeline_queue.c
+++ b/app/test-eventdev/test_pipeline_queue.c
@@ -42,10 +42,375 @@ pipeline_queue_nb_event_queues(struct evt_options *opt)
 	return (eth_count * opt->nb_stages) + eth_count;
 }
 
+static int
+pipeline_queue_worker_single_stage_safe(void *arg)
+{
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	struct rte_event ev;
+
+	while (t->done == false) {
+		uint16_t event = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+
+		if (!event) {
+			rte_pause();
+			continue;
+		}
+
+		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+			pipeline_tx_pkt_safe(ev.mbuf);
+			w->processed_pkts++;
+		} else {
+			ev.queue_id++;
+			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
+			pipeline_event_enqueue(dev, port, &ev);
+		}
+	}
+
+	return 0;
+}
+
+static int
+pipeline_queue_worker_single_stage_unsafe(void *arg)
+{
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	struct rte_event ev;
+
+	while (t->done == false) {
+		uint16_t event = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+
+		if (!event) {
+			rte_pause();
+			continue;
+		}
+
+		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+			pipeline_tx_pkt_unsafe(ev.mbuf, t);
+			w->processed_pkts++;
+		} else {
+			ev.queue_id++;
+			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
+			pipeline_event_enqueue(dev, port, &ev);
+		}
+	}
+
+	return 0;
+}
+
+static int
+pipeline_queue_worker_single_stage_burst_safe(void *arg)
+{
+	int i;
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	struct rte_event ev[BURST_SIZE];
+
+	while (t->done == false) {
+		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
+				BURST_SIZE, 0);
+
+		if (!nb_rx) {
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_rx; i++) {
+			rte_prefetch0(ev[i + 1].mbuf);
+			if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+				pipeline_tx_pkt_safe(ev[i].mbuf);
+				ev[i].op = RTE_EVENT_OP_RELEASE;
+				w->processed_pkts++;
+			} else {
+				ev[i].queue_id++;
+				pipeline_fwd_event(&ev[i],
+						RTE_SCHED_TYPE_ATOMIC);
+			}
+		}
+
+		pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
+	}
+
+	return 0;
+}
+
+static int
+pipeline_queue_worker_single_stage_burst_unsafe(void *arg)
+{
+	int i;
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	struct rte_event ev[BURST_SIZE];
+	const uint16_t nb_ports = rte_eth_dev_count();
+
+	while (t->done == false) {
+		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
+				BURST_SIZE, 0);
+
+		if (!nb_rx) {
+			pipeline_tx_flush(t, nb_ports);
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_rx; i++) {
+			rte_prefetch0(ev[i + 1].mbuf);
+			if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+				pipeline_tx_unsafe_burst(ev[i].mbuf, t);
+				ev[i].op = RTE_EVENT_OP_RELEASE;
+				w->processed_pkts++;
+			} else {
+				ev[i].queue_id++;
+				pipeline_fwd_event(&ev[i],
+						RTE_SCHED_TYPE_ATOMIC);
+			}
+		}
+
+		pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
+	}
+
+	return 0;
+}
+
+static int
+pipeline_queue_worker_multi_stage_safe(void *arg)
+{
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	const uint8_t last_queue = t->opt->nb_stages - 1;
+	const uint8_t nb_stages = t->opt->nb_stages + 1;
+	uint8_t *const sched_type_list = &t->sched_type_list[0];
+	uint8_t cq_id;
+	struct rte_event ev;
+
+	while (t->done == false) {
+		uint16_t event = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+
+		if (!event) {
+			rte_pause();
+			continue;
+		}
+
+		cq_id = ev.queue_id % nb_stages;
+
+		if (cq_id >= last_queue) {
+			if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+				pipeline_tx_pkt_safe(ev.mbuf);
+				w->processed_pkts++;
+				continue;
+			}
+			ev.queue_id += (cq_id == last_queue) ? 1 : 0;
+			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
+		} else {
+			ev.queue_id++;
+			pipeline_fwd_event(&ev, sched_type_list[cq_id]);
+		}
+
+		pipeline_event_enqueue(dev, port, &ev);
+	}
+	return 0;
+}
+
+static int
+pipeline_queue_worker_multi_stage_unsafe(void *arg)
+{
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	const uint8_t last_queue = t->opt->nb_stages - 1;
+	const uint8_t nb_stages = t->opt->nb_stages + 1;
+	uint8_t *const sched_type_list = &t->sched_type_list[0];
+	uint8_t cq_id;
+	struct rte_event ev;
+
+	while (t->done == false) {
+		uint16_t event = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+
+		if (!event) {
+			rte_pause();
+			continue;
+		}
+
+		cq_id = ev.queue_id % nb_stages;
+
+		if (cq_id >= last_queue) {
+			if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+				pipeline_tx_pkt_unsafe(ev.mbuf, t);
+				w->processed_pkts++;
+				continue;
+			}
+			ev.queue_id += (cq_id == last_queue) ? 1 : 0;
+			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
+		} else {
+			ev.queue_id++;
+			pipeline_fwd_event(&ev, sched_type_list[cq_id]);
+		}
+
+		pipeline_event_enqueue(dev, port, &ev);
+	}
+	return 0;
+}
+
+static int
+pipeline_queue_worker_multi_stage_burst_safe(void *arg)
+{
+	int i;
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	uint8_t *const sched_type_list = &t->sched_type_list[0];
+	const uint8_t last_queue = t->opt->nb_stages - 1;
+	const uint8_t nb_stages = t->opt->nb_stages + 1;
+	uint8_t cq_id;
+	struct rte_event ev[BURST_SIZE + 1];
+
+	while (t->done == false) {
+		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
+				BURST_SIZE, 0);
+
+		if (!nb_rx) {
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_rx; i++) {
+			rte_prefetch0(ev[i + 1].mbuf);
+			cq_id = ev[i].queue_id % nb_stages;
+
+			if (cq_id >= last_queue) {
+				if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+					pipeline_tx_pkt_safe(ev[i].mbuf);
+					ev[i].op = RTE_EVENT_OP_RELEASE;
+					w->processed_pkts++;
+					continue;
+				}
+
+				ev[i].queue_id += (cq_id == last_queue) ? 1 : 0;
+				pipeline_fwd_event(&ev[i],
+						RTE_SCHED_TYPE_ATOMIC);
+			} else {
+				ev[i].queue_id++;
+				pipeline_fwd_event(&ev[i],
+						sched_type_list[cq_id]);
+			}
+		}
+
+		pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
+	}
+	return 0;
+}
+
+static int
+pipeline_queue_worker_multi_stage_burst_unsafe(void *arg)
+{
+	int i;
+	struct worker_data *w = arg;
+	struct test_pipeline *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	uint8_t *const sched_type_list = &t->sched_type_list[0];
+	const uint8_t last_queue = t->opt->nb_stages - 1;
+	const uint8_t nb_stages = t->opt->nb_stages + 1;
+	uint8_t cq_id;
+	struct rte_event ev[BURST_SIZE + 1];
+	const uint16_t nb_ports = rte_eth_dev_count();
+
+	while (t->done == false) {
+		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
+				BURST_SIZE, 0);
+
+		if (!nb_rx) {
+			pipeline_tx_flush(t, nb_ports);
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_rx; i++) {
+			rte_prefetch0(ev[i + 1].mbuf);
+			cq_id = ev[i].queue_id % nb_stages;
+
+			if (cq_id >= last_queue) {
+				if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+					pipeline_tx_unsafe_burst(ev[i].mbuf, t);
+					ev[i].op = RTE_EVENT_OP_RELEASE;
+					w->processed_pkts++;
+					continue;
+				}
+
+				ev[i].queue_id += (cq_id == last_queue) ? 1 : 0;
+				pipeline_fwd_event(&ev[i],
+						RTE_SCHED_TYPE_ATOMIC);
+			} else {
+				ev[i].queue_id++;
+				pipeline_fwd_event(&ev[i],
+						sched_type_list[cq_id]);
+			}
+		}
+
+		pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
+	}
+	return 0;
+}
+
 static int
 worker_wrapper(void *arg)
 {
-	RTE_SET_USED(arg);
+	struct worker_data *w = arg;
+	struct evt_options *opt = w->t->opt;
+	const bool burst = evt_has_burst_mode(w->dev_id);
+	const bool mt_safe = !w->t->mt_unsafe;
+	const uint8_t nb_stages = opt->nb_stages;
+	RTE_SET_USED(opt);
+
+	/* allow compiler to optimize */
+	if (nb_stages == 1) {
+		if (!burst && mt_safe)
+			return pipeline_queue_worker_single_stage_safe(arg);
+		else if (!burst && !mt_safe)
+			return pipeline_queue_worker_single_stage_unsafe(arg);
+		else if (burst && mt_safe)
+			return pipeline_queue_worker_single_stage_burst_safe(arg);
+		else if (burst && !mt_safe)
+			return pipeline_queue_worker_single_stage_burst_unsafe(arg);
+	} else {
+		if (!burst && mt_safe)
+			return pipeline_queue_worker_multi_stage_safe(arg);
+		else if (!burst && !mt_safe)
+			return pipeline_queue_worker_multi_stage_unsafe(arg);
+		else if (burst && mt_safe)
+			return pipeline_queue_worker_multi_stage_burst_safe(arg);
+		else if (burst && !mt_safe)
+			return pipeline_queue_worker_multi_stage_burst_unsafe(arg);
+	}
	rte_panic("invalid worker\n");
 }
-- 
2.14.1