From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anoob Joseph
To: Bruce Richardson, Jerin Jacob, Pablo de Lara
Cc: Anoob Joseph, Hemant Agrawal, Narayana Prasad, Nikhil Rao,
	Pavan Nikhilesh, Sunil Kumar Kori, dev@dpdk.org
Date: Fri, 8 Jun 2018 22:54:19 +0530
Message-Id: <1528478659-15859-21-git-send-email-anoob.joseph@caviumnetworks.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1528478659-15859-1-git-send-email-anoob.joseph@caviumnetworks.com>
References: <1528478659-15859-1-git-send-email-anoob.joseph@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 20/20] examples/l2fwd: add eventmode for l2fwd

Adding eventmode support in l2fwd. This uses rte_eventmode_helper APIs
to set up and use the eventmode capabilities.
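The flow an application follows with the helper APIs, condensed from what
this patch does in main.c and l2fwd_worker.c (a sketch only; error handling
and the existing l2fwd setup are omitted):

	struct rte_eventmode_helper_conf *mode_conf;

	/* Parse args; the helper consumes the event mode options */
	mode_conf = rte_eventmode_helper_parse_args(argc, argv);

	/* Tell the helper which ports are in use and initialize the
	 * eventmode components */
	mode_conf->eth_portmask = l2fwd_enabled_port_mask;
	rte_eventmode_helper_initialize_devs(mode_conf);

	/* Launch workers; each lcore runs the poll path or the event
	 * path depending on mode_conf->mode */
	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore,
			(void *)mode_conf, CALL_MASTER);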
Signed-off-by: Anoob Joseph
---
 examples/l2fwd/l2fwd_worker.c | 815 +++++++++++++++++++++++++++++++++++++++++-
 examples/l2fwd/main.c         |  64 +++-
 2 files changed, 864 insertions(+), 15 deletions(-)

diff --git a/examples/l2fwd/l2fwd_worker.c b/examples/l2fwd/l2fwd_worker.c
index 56e0bdb..bc63b31 100644
--- a/examples/l2fwd/l2fwd_worker.c
+++ b/examples/l2fwd/l2fwd_worker.c
@@ -25,6 +25,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 #include

 #include "l2fwd_common.h"
@@ -138,6 +141,16 @@ l2fwd_periodic_drain_stats_monitor(struct lcore_queue_conf *qconf,
 	}
 }

+static inline void
+l2fwd_drain_loop(struct lcore_queue_conf *qconf, struct tsc_tracker *t,
+		int is_master_core)
+{
+	while (!force_quit) {
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, t, is_master_core);
+	}
+}
+
 static void
 l2fwd_mac_updating(struct rte_mbuf *m, unsigned dest_portid)
 {
@@ -180,9 +193,45 @@ l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
 	l2fwd_send_pkt(m, dst_port);
 }

-/* main processing loop */
+static inline void
+l2fwd_send_single_pkt(struct rte_mbuf *m)
+{
+	l2fwd_send_pkt(m, m->port);
+}
+
+static inline void
+l2fwd_event_pre_forward(struct rte_event *ev, unsigned portid)
+{
+	unsigned dst_port;
+	struct rte_mbuf *m;
+
+	/* Get the mbuf */
+	m = ev->mbuf;
+
+	/* Get the destination port from the tables */
+	dst_port = l2fwd_dst_ports[portid];
+
+	/* Save the destination port in the mbuf */
+	m->port = dst_port;
+
+	/* Perform work */
+	if (mac_updating)
+		l2fwd_mac_updating(m, dst_port);
+}
+
+static inline void
+l2fwd_event_switch_to_atomic(struct rte_event *ev, uint8_t atomic_queue_id)
+{
+	ev->event_type = RTE_EVENT_TYPE_CPU;
+	ev->op = RTE_EVENT_OP_FORWARD;
+	ev->sched_type = RTE_SCHED_TYPE_ATOMIC;
+	ev->queue_id = atomic_queue_id;
+}
+
+
+/* poll mode processing loop */
 static void
-l2fwd_main_loop(void)
+l2fwd_poll_mode_worker(void)
 {
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_mbuf *m;
@@ -241,9 +290,767 @@ l2fwd_main_loop(void)
 	}
 }

+/*
+ * Event mode exposes various operating modes depending on the
+ * capabilities of the event device and the operating mode
+ * selected.
+ */
+
+/* Workers registered */
+#define L2FWD_EVENTMODE_WORKERS	4
+
+/*
+ * Event mode worker
+ * Operating mode : Single stage non-burst with atomic scheduling
+ */
+static void
+l2fwd_eventmode_non_burst_atomic_worker(void *args)
+{
+	struct rte_event ev;
+	struct rte_mbuf *pkt;
+	struct rte_eventmode_helper_conf *mode_conf;
+	struct rte_eventmode_helper_event_link_info *links = NULL;
+	unsigned lcore_nb_link = 0;
+	uint32_t lcore_id;
+	unsigned i, nb_rx = 0;
+	unsigned portid;
+	struct lcore_queue_conf *qconf;
+	int is_master_core;
+	struct tsc_tracker tsc = {0};
+
+	/* Get core ID */
+	lcore_id = rte_lcore_id();
+
+	RTE_LOG(INFO, L2FWD,
+		"Launching event mode single stage non-burst worker with "
+		"atomic scheduling on lcore %d\n", lcore_id);
+
+	/* Set the flag if master core */
+	is_master_core = (lcore_id == rte_get_master_lcore()) ?
+			1 : 0;
+
+	/* Get qconf for this core */
+	qconf = &lcore_queue_conf[lcore_id];
+
+	/* Set drain tsc */
+	tsc.drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+
+	/* Mode conf will be passed as args */
+	mode_conf = (struct rte_eventmode_helper_conf *)args;
+
+	/* Get the links configured for this lcore */
+	lcore_nb_link = rte_eventmode_helper_get_event_lcore_links(lcore_id,
+			mode_conf, &links);
+
+	/* Check if we have links registered for this lcore */
+	if (lcore_nb_link == 0) {
+		/* No links registered. The core could do periodic drains */
+		l2fwd_drain_loop(qconf, &tsc, is_master_core);
+		goto clean_and_exit;
+	}
+
+	/* We have valid links */
+
+	/* See if it's single link */
+	if (lcore_nb_link == 1)
+		goto single_link_loop;
+	else
+		goto multi_link_loop;
+
+single_link_loop:
+
+	RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n", lcore_id,
+		links[0].event_portid);
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		/* Read packet from event queues */
+		nb_rx = rte_event_dequeue_burst(links[0].eventdev_id,
+				links[0].event_portid,
+				&ev,	/* events */
+				1,	/* nb_events */
+				0	/* timeout_ticks */);
+
+		if (nb_rx == 0)
+			continue;
+
+		portid = ev.queue_id;
+		port_statistics[portid].rx++;
+		pkt = ev.mbuf;
+
+		rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+		l2fwd_simple_forward(pkt, portid);
+	}
+	goto clean_and_exit;
+
+multi_link_loop:
+
+	for (i = 0; i < lcore_nb_link; i++) {
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n",
+			lcore_id, links[i].event_portid);
+	}
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		for (i = 0; i < lcore_nb_link; i++) {
+			/* Read packet from event queues */
+			nb_rx = rte_event_dequeue_burst(links[i].eventdev_id,
+					links[i].event_portid,
+					&ev,	/* events */
+					1,	/* nb_events */
+					0	/* timeout_ticks */);
+
+			if (nb_rx == 0)
+				continue;
+
+			portid = ev.queue_id;
+			port_statistics[portid].rx++;
+			pkt = ev.mbuf;
+
+			rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+			l2fwd_simple_forward(pkt, portid);
+		}
+	}
+	goto clean_and_exit;
+
+clean_and_exit:
+	if (links != NULL)
+		rte_free(links);
+}
+
+/*
+ * Event mode worker
+ * Operating mode : Single stage burst with atomic scheduling
+ */
+static void
+l2fwd_eventmode_burst_atomic_worker(void *args)
+{
+	struct rte_event ev[MAX_PKT_BURST];
+	struct rte_mbuf *pkt;
+	struct rte_eventmode_helper_conf *mode_conf;
+	struct rte_eventmode_helper_event_link_info *links = NULL;
+	unsigned lcore_nb_link = 0;
+	uint32_t lcore_id;
+	unsigned i, j, nb_rx = 0;
+	unsigned portid;
+	struct lcore_queue_conf *qconf;
+	int is_master_core;
+	struct rte_event_port_conf event_port_conf;
+	uint16_t dequeue_len = 0;
+	struct tsc_tracker tsc = {0};
+
+	/* Get core ID */
+	lcore_id = rte_lcore_id();
+
+	RTE_LOG(INFO, L2FWD,
+		"Launching event mode single stage burst worker with "
+		"atomic scheduling on lcore %d\n", lcore_id);
+
+	/* Set the flag if master core */
+	is_master_core = (lcore_id == rte_get_master_lcore()) ?
+			1 : 0;
+
+	/* Get qconf for this core */
+	qconf = &lcore_queue_conf[lcore_id];
+
+	/* Set drain tsc */
+	tsc.drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+
+	/* Mode conf will be passed as args */
+	mode_conf = (struct rte_eventmode_helper_conf *)args;
+
+	/* Get the links configured for this lcore */
+	lcore_nb_link = rte_eventmode_helper_get_event_lcore_links(lcore_id,
+			mode_conf, &links);
+
+	/* Check if we have links registered for this lcore */
+	if (lcore_nb_link == 0) {
+		/* No links registered. The core could do periodic drains */
+		l2fwd_drain_loop(qconf, &tsc, is_master_core);
+		goto clean_and_exit;
+	}
+
+	/* We have valid links */
+
+	/* Get the burst size of the event device */
+
+	/* Get the default conf of the first link */
+	rte_event_port_default_conf_get(links[0].eventdev_id,
+			links[0].event_portid,
+			&event_port_conf);
+
+	/* Save the burst size */
+	dequeue_len = event_port_conf.dequeue_depth;
+
+	/* Dequeue len should not exceed MAX_PKT_BURST */
+	if (dequeue_len > MAX_PKT_BURST)
+		dequeue_len = MAX_PKT_BURST;
+
+	/* See if it's single link */
+	if (lcore_nb_link == 1)
+		goto single_link_loop;
+	else
+		goto multi_link_loop;
+
+single_link_loop:
+
+	RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n", lcore_id,
+		links[0].event_portid);
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		/* Read packet from event queues */
+		nb_rx = rte_event_dequeue_burst(links[0].eventdev_id,
+				links[0].event_portid,
+				ev,		/* events */
+				dequeue_len,	/* nb_events */
+				0		/* timeout_ticks */);
+
+		if (nb_rx == 0)
+			continue;
+
+		for (j = 0; j < nb_rx; j++) {
+			portid = ev[j].queue_id;
+			port_statistics[portid].rx++;
+			pkt = ev[j].mbuf;
+
+			rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+			l2fwd_simple_forward(pkt, portid);
+		}
+	}
+	goto clean_and_exit;
+
+multi_link_loop:
+
+	for (i = 0; i < lcore_nb_link; i++) {
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n",
+			lcore_id, links[i].event_portid);
+	}
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		for (i = 0; i < lcore_nb_link; i++) {
+			/* Read packet from event queues */
+			nb_rx = rte_event_dequeue_burst(links[i].eventdev_id,
+					links[i].event_portid,
+					ev,		/* events */
+					dequeue_len,	/* nb_events */
+					0		/* timeout_ticks */);
+
+			if (nb_rx == 0)
+				continue;
+
+			for (j = 0; j < nb_rx; j++) {
+				portid = ev[j].queue_id;
+				port_statistics[portid].rx++;
+				pkt = ev[j].mbuf;
+
+				rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+				l2fwd_simple_forward(pkt, portid);
+			}
+		}
+	}
+	goto clean_and_exit;
+
+clean_and_exit:
+	if (links != NULL)
+		rte_free(links);
+}
+
+/*
+ * Event mode worker
+ * Operating mode : Single stage non-burst with ordered scheduling
+ */
+static void
+l2fwd_eventmode_non_burst_ordered_worker(void *args)
+{
+	struct rte_event ev;
+	struct rte_mbuf *pkt;
+	struct rte_eventmode_helper_conf *mode_conf;
+	struct rte_eventmode_helper_event_link_info *links = NULL;
+	unsigned lcore_nb_link = 0;
+	uint32_t lcore_id;
+	unsigned i, nb_rx = 0;
+	unsigned portid;
+	struct lcore_queue_conf *qconf;
+	int is_master_core;
+	uint8_t tx_queue;
+	uint8_t eventdev_id;
+	struct tsc_tracker tsc = {0};
+
+	/* Get core ID */
+	lcore_id = rte_lcore_id();
+
+	RTE_LOG(INFO, L2FWD,
+		"Launching event mode single stage non-burst worker with "
+		"ordered scheduling on lcore %d\n",
+		lcore_id);
+
+	/* Set the flag if master core */
+	is_master_core = (lcore_id == rte_get_master_lcore()) ? 1 : 0;
+
+	/* Get qconf for this core */
+	qconf = &lcore_queue_conf[lcore_id];
+
+	/* Set drain tsc */
+	tsc.drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+
+	/* Mode conf will be passed as args */
+	mode_conf = (struct rte_eventmode_helper_conf *)args;
+
+	/* Get the links configured for this lcore */
+	lcore_nb_link = rte_eventmode_helper_get_event_lcore_links(lcore_id,
+			mode_conf, &links);
+
+	/* Check if we have links registered for this lcore */
+	if (lcore_nb_link == 0) {
+		/* No links registered. The core could do periodic drains */
+		l2fwd_drain_loop(qconf, &tsc, is_master_core);
+		goto clean_and_exit;
+	}
+
+	/* We have valid links */
+
+	/*
+	 * When stage 1 is set to ORDERED scheduling, the event needs its
+	 * scheduling type changed to ATOMIC before it can be sent out.
+	 * This ensures that the packets are sent out in the same order in
+	 * which they came.
+	 */
+
+	/*
+	 * The helper function would create a queue with ATOMIC scheduling
+	 * for this purpose. The worker would submit packets to that queue
+	 * if the event is not coming from an ATOMIC queue.
+	 */
+
+	/* Get event dev ID from the first link */
+	eventdev_id = links[0].eventdev_id;
+
+	/*
+	 * One queue would be reserved to be used as atomic queue for the last
+	 * stage (eth packet tx stage)
+	 */
+	tx_queue = rte_eventmode_helper_get_tx_queue(mode_conf, eventdev_id);
+
+	/* See if it's single link */
+	if (lcore_nb_link == 1)
+		goto single_link_loop;
+	else
+		goto multi_link_loop;
+
+single_link_loop:
+
+	RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n", lcore_id,
+		links[0].event_portid);
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		/* Read packet from event queues */
+		nb_rx = rte_event_dequeue_burst(links[0].eventdev_id,
+				links[0].event_portid,
+				&ev,	/* events */
+				1,	/* nb_events */
+				0	/* timeout_ticks */);
+
+		if (nb_rx == 0)
+			continue;
+
+		/*
+		 * Check if this event came on atomic queue. If yes, do eth tx
+		 */
+		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+			l2fwd_send_single_pkt(ev.mbuf);
+			continue;
+		}
+
+		/* Else, we have a fresh packet */
+		portid = ev.queue_id;
+		port_statistics[portid].rx++;
+		pkt = ev.mbuf;
+
+		rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+
+		/* Process packet */
+		l2fwd_event_pre_forward(&ev, portid);
+
+		/* Update the scheduling type for tx stage */
+		l2fwd_event_switch_to_atomic(&ev, tx_queue);
+
+		/* Submit the updated event for tx stage */
+		rte_event_enqueue_burst(links[0].eventdev_id,
+				links[0].event_portid,
+				&ev,	/* events */
+				1	/* nb_events */);
+	}
+	goto clean_and_exit;
+
+multi_link_loop:
+
+	for (i = 0; i < lcore_nb_link; i++) {
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n",
+			lcore_id, links[i].event_portid);
+	}
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		for (i = 0; i < lcore_nb_link; i++) {
+			/* Read packet from event queues */
+			nb_rx = rte_event_dequeue_burst(links[i].eventdev_id,
+					links[i].event_portid,
+					&ev,	/* events */
+					1,	/* nb_events */
+					0	/* timeout_ticks */);
+
+			if (nb_rx == 0)
+				continue;
+
+			/*
+			 * Check if this event came on atomic queue.
+			 * If yes, do eth tx
+			 */
+			if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+				l2fwd_send_single_pkt(ev.mbuf);
+				continue;
+			}
+
+			/* Else, we have a fresh packet */
+			portid = ev.queue_id;
+			port_statistics[portid].rx++;
+			pkt = ev.mbuf;
+
+			rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+
+			/* Process packet */
+			l2fwd_event_pre_forward(&ev, portid);
+
+			/* Update the scheduling type for tx stage */
+			l2fwd_event_switch_to_atomic(&ev, tx_queue);
+
+			/* Submit the updated event for tx stage */
+			rte_event_enqueue_burst(links[i].eventdev_id,
+					links[i].event_portid,
+					&ev,	/* events */
+					1	/* nb_events */);
+		}
+	}
+	goto clean_and_exit;
+
+clean_and_exit:
+	if (links != NULL)
+		rte_free(links);
+}
+
+/*
+ * Event mode worker
+ * Operating mode : Single stage burst with ordered scheduling
+ */
+static void
+l2fwd_eventmode_burst_ordered_worker(void *args)
+{
+	struct rte_event ev[MAX_PKT_BURST];
+	struct rte_mbuf *pkt;
+	struct rte_eventmode_helper_conf *mode_conf;
+	struct rte_eventmode_helper_event_link_info *links = NULL;
+	unsigned lcore_nb_link = 0;
+	uint32_t lcore_id;
+	unsigned i, j, nb_rx = 0;
+	unsigned portid;
+	struct lcore_queue_conf *qconf;
+	int is_master_core;
+	struct rte_event_port_conf event_port_conf;
+	uint16_t dequeue_len = 0;
+	uint8_t tx_queue;
+	uint8_t eventdev_id;
+	struct tsc_tracker tsc = {0};
+
+	/* Get core ID */
+	lcore_id = rte_lcore_id();
+
+	RTE_LOG(INFO, L2FWD,
+		"Launching event mode single stage burst worker with "
+		"ordered scheduling on lcore %d\n", lcore_id);
+
+	/* Set the flag if master core */
+	is_master_core = (lcore_id == rte_get_master_lcore()) ? 1 : 0;
+
+	/* Get qconf for this core */
+	qconf = &lcore_queue_conf[lcore_id];
+
+	/* Set drain tsc */
+	tsc.drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /
+			US_PER_S * BURST_TX_DRAIN_US;
+
+	/* Mode conf will be passed as args */
+	mode_conf = (struct rte_eventmode_helper_conf *)args;
+
+	/* Get the links configured for this lcore */
+	lcore_nb_link = rte_eventmode_helper_get_event_lcore_links(lcore_id,
+			mode_conf, &links);
+
+	/* Check if we have links registered for this lcore */
+	if (lcore_nb_link == 0) {
+		/* No links registered. The core could do periodic drains */
+		l2fwd_drain_loop(qconf, &tsc, is_master_core);
+		goto clean_and_exit;
+	}
+
+	/* We have valid links */
+
+	/*
+	 * When stage 1 is set to ORDERED scheduling, the event needs its
+	 * scheduling type changed to ATOMIC before it can be sent out.
+	 * This ensures that the packets are sent out in the same order in
+	 * which they came.
+	 */
+
+	/*
+	 * The helper function would create a queue with ATOMIC scheduling
+	 * for this purpose. The worker would submit packets to that queue
+	 * if the event is not coming from an ATOMIC queue.
+	 */
+
+	/* Get event dev ID from the first link */
+	eventdev_id = links[0].eventdev_id;
+
+	/*
+	 * One queue would be reserved to be used as atomic queue for the last
+	 * stage (eth packet tx stage)
+	 */
+	tx_queue = rte_eventmode_helper_get_tx_queue(mode_conf, eventdev_id);
+
+	/* Get the burst size of the event device */
+
+	/* Get the default conf of the first link */
+	rte_event_port_default_conf_get(links[0].eventdev_id,
+			links[0].event_portid,
+			&event_port_conf);
+
+	/* Save the burst size */
+	dequeue_len = event_port_conf.dequeue_depth;
+
+	/* Dequeue len should not exceed MAX_PKT_BURST */
+	if (dequeue_len > MAX_PKT_BURST)
+		dequeue_len = MAX_PKT_BURST;
+
+	/* See if it's single link */
+	if (lcore_nb_link == 1)
+		goto single_link_loop;
+	else
+		goto multi_link_loop;
+
+single_link_loop:
+
+	RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n", lcore_id,
+		links[0].event_portid);
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		/* Read packet from event queues */
+		nb_rx = rte_event_dequeue_burst(links[0].eventdev_id,
+				links[0].event_portid,
+				ev,		/* events */
+				dequeue_len,	/* nb_events */
+				0		/* timeout_ticks */);
+
+		if (nb_rx == 0)
+			continue;
+
+		for (j = 0; j < nb_rx; j++) {
+			/*
+			 * Check if this event came on atomic queue.
+			 * If yes, do eth tx
+			 */
+			if (ev[j].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+				l2fwd_send_single_pkt(ev[j].mbuf);
+				continue;
+			}
+
+			/* Else, we have a fresh packet */
+			portid = ev[j].queue_id;
+			port_statistics[portid].rx++;
+			pkt = ev[j].mbuf;
+
+			rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+
+			/* Process packet */
+			l2fwd_event_pre_forward(&(ev[j]), portid);
+
+			/* Update the scheduling type for tx stage */
+			l2fwd_event_switch_to_atomic(&(ev[j]), tx_queue);
+
+			/* Submit the updated event for tx stage */
+			rte_event_enqueue_burst(links[0].eventdev_id,
+					links[0].event_portid,
+					&(ev[j]),	/* events */
+					1		/* nb_events */);
+		}
+	}
+	goto clean_and_exit;
+
+multi_link_loop:
+
+	for (i = 0; i < lcore_nb_link; i++) {
+		RTE_LOG(INFO, L2FWD, " -- lcoreid=%u event_port_id=%u\n",
+			lcore_id, links[i].event_portid);
+	}
+
+	while (!force_quit) {
+
+		/* Do periodic operations (buffer drain & stats monitor) */
+		l2fwd_periodic_drain_stats_monitor(qconf, &tsc, is_master_core);
+
+		for (i = 0; i < lcore_nb_link; i++) {
+			/* Read packet from event queues */
+			nb_rx = rte_event_dequeue_burst(links[i].eventdev_id,
+					links[i].event_portid,
+					ev,		/* events */
+					dequeue_len,	/* nb_events */
+					0		/* timeout_ticks */);
+
+			if (nb_rx == 0)
+				continue;
+
+			for (j = 0; j < nb_rx; j++) {
+				/*
+				 * Check if this event came on atomic queue.
+				 * If yes, do eth tx
+				 */
+				if (ev[j].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+					l2fwd_send_single_pkt(ev[j].mbuf);
+					continue;
+				}
+
+				/* Else, we have a fresh packet */
+				portid = ev[j].queue_id;
+				port_statistics[portid].rx++;
+				pkt = ev[j].mbuf;
+
+				rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+
+				/* Process packet */
+				l2fwd_event_pre_forward(&(ev[j]), portid);
+
+				/* Update the scheduling type for tx stage */
+				l2fwd_event_switch_to_atomic(&(ev[j]),
+						tx_queue);
+
+				/* Submit the updated event for tx stage */
+				rte_event_enqueue_burst(links[i].eventdev_id,
+						links[i].event_portid,
+						&(ev[j]),	/* events */
+						1		/* nb_events */);
+			}
+		}
+	}
+	goto clean_and_exit;
+
+clean_and_exit:
+	if (links != NULL)
+		rte_free(links);
+}
+
+static uint8_t
+l2fwd_eventmode_populate_wrkr_params(
+		struct rte_eventmode_helper_app_worker_params *wrkrs)
+{
+	uint8_t nb_wrkr_param = 0;
+	struct rte_eventmode_helper_app_worker_params *wrkr;
+
+	/* Save workers */
+
+	wrkr = wrkrs;
+
+	/* Single stage non-burst with atomic scheduling */
+	wrkr->cap.burst = RTE_EVENTMODE_HELPER_RX_TYPE_NON_BURST;
+	wrkr->cap.s1_sched_type = RTE_SCHED_TYPE_ATOMIC;
+	wrkr->nb_stage = 1;
+	wrkr->s1_worker_thread = l2fwd_eventmode_non_burst_atomic_worker;
+
+	nb_wrkr_param++;
+	wrkr++;
+
+	/* Single stage burst with atomic scheduling */
+	wrkr->cap.burst = RTE_EVENTMODE_HELPER_RX_TYPE_BURST;
+	wrkr->cap.s1_sched_type = RTE_SCHED_TYPE_ATOMIC;
+	wrkr->nb_stage = 1;
+	wrkr->s1_worker_thread = l2fwd_eventmode_burst_atomic_worker;
+
+	nb_wrkr_param++;
+	wrkr++;
+
+	/* Single stage non-burst with ordered scheduling */
+	wrkr->cap.burst = RTE_EVENTMODE_HELPER_RX_TYPE_NON_BURST;
+	wrkr->cap.s1_sched_type = RTE_SCHED_TYPE_ORDERED;
+	wrkr->nb_stage = 1;
+	wrkr->s1_worker_thread = l2fwd_eventmode_non_burst_ordered_worker;
+
+	nb_wrkr_param++;
+	wrkr++;
+
+	/* Single stage burst with ordered scheduling */
+	wrkr->cap.burst = RTE_EVENTMODE_HELPER_RX_TYPE_BURST;
+	wrkr->cap.s1_sched_type = RTE_SCHED_TYPE_ORDERED;
+	wrkr->nb_stage = 1;
+	wrkr->s1_worker_thread = l2fwd_eventmode_burst_ordered_worker;
+
+	nb_wrkr_param++;
+	return nb_wrkr_param;
+}
+
+static void
+l2fwd_eventmode_worker(struct rte_eventmode_helper_conf *mode_conf)
+{
+	struct rte_eventmode_helper_app_worker_params
+			l2fwd_wrkr[L2FWD_EVENTMODE_WORKERS] = {0};
+	uint8_t nb_wrkr_param;
+
+	/* Populate l2fwd_wrkr params */
+	nb_wrkr_param = l2fwd_eventmode_populate_wrkr_params(l2fwd_wrkr);
+
+	/*
+	 * The helper function will launch the correct worker after checking
+	 * the event device's capabilities.
+	 */
+	rte_eventmode_helper_launch_worker(mode_conf, l2fwd_wrkr,
+			nb_wrkr_param);
+}
+
 int
-l2fwd_launch_one_lcore(__attribute__((unused)) void *dummy)
+l2fwd_launch_one_lcore(void *args)
 {
-	l2fwd_main_loop();
+	struct rte_eventmode_helper_conf *mode_conf;
+
+	mode_conf = (struct rte_eventmode_helper_conf *)args;
+
+	if (mode_conf->mode == RTE_EVENTMODE_HELPER_PKT_TRANSFER_MODE_POLL) {
+		/* App is initialized to run in poll mode */
+		l2fwd_poll_mode_worker();
+	} else if (mode_conf->mode ==
+			RTE_EVENTMODE_HELPER_PKT_TRANSFER_MODE_EVENT) {
+		/* App is initialized to run in event mode */
+		l2fwd_eventmode_worker(mode_conf);
+	}
 	return 0;
 }
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index ac81beb..278b9a8 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include

 #include "l2fwd_common.h"
 #include "l2fwd_worker.h"
@@ -69,6 +70,8 @@ l2fwd_usage(const char *prgname)
 		" [-q NQ]",
 		prgname);

+	rte_eventmode_helper_print_options_list();
+
 	fprintf(stderr, "\n\n");

 	fprintf(stderr,
@@ -79,7 +82,9 @@ l2fwd_usage(const char *prgname)
 		"  When enabled:\n"
 		"   - The source MAC address is replaced by the TX port MAC address\n"
 		"   - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n"
-		"\n");
+		"");
+
+	rte_eventmode_helper_print_options_description();
 }

 static int
@@ -158,12 +163,14 @@ static const struct option lgopts[] = {

 /* Parse the argument given in the command line of the application */
 static int
-l2fwd_parse_args(int argc, char **argv)
+l2fwd_parse_args(int argc, char **argv,
+		struct rte_eventmode_helper_conf **mode_conf)
 {
-	int opt, ret, timer_secs;
+	int opt, timer_secs;
 	char **argvopt;
 	int option_index;
 	char *prgname = argv[0];
+	int options_parsed = 0;

 	argvopt = argv;

@@ -212,12 +219,31 @@ l2fwd_parse_args(int argc, char **argv)
 		}
 	}

-	if (optind >= 0)
-		argv[optind-1] = prgname;
+	/* Update argc & argv to move to event mode options */
+	options_parsed = optind-1;
+	argc -= options_parsed;
+	argv += options_parsed;

-	ret = optind-1;
-	optind = 1; /* reset getopt lib */
-	return ret;
+	/* Reset getopt lib */
+	optind = 1;
+
+	/* Check for event mode parameters and get the conf prepared */
+	*mode_conf = rte_eventmode_helper_parse_args(argc, argv);
+	if (*mode_conf == NULL) {
+		l2fwd_usage(prgname);
+		return -1;
+	}
+
+	/* Add the number of options parsed */
+	options_parsed += optind-1;
+
+	if (options_parsed >= 0)
+		argv[options_parsed] = prgname;
+
+	/* Reset getopt lib */
+	optind = 1;
+
+	return options_parsed;
 }

 /* Check the link status of all ports in up to 9s, and print them finally */
@@ -315,6 +341,7 @@ main(int argc, char **argv)
 	unsigned nb_ports_in_mask = 0;
 	unsigned int nb_lcores = 0;
 	unsigned int nb_mbufs;
+	struct rte_eventmode_helper_conf *mode_conf = NULL;

 	/* Set default values for global vars */
 	l2fwd_init_global_vars();
@@ -329,8 +356,12 @@ main(int argc, char **argv)
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);

-	/* parse application arguments (after the EAL ones) */
-	ret = l2fwd_parse_args(argc, argv);
+	/*
+	 * Parse application arguments (after the EAL ones). This would parse
+	 * the event mode options too, and would set the conf pointer
+	 * accordingly.
+	 */
+	ret = l2fwd_parse_args(argc, argv, &mode_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");

@@ -521,9 +552,20 @@ main(int argc, char **argv)

 	check_all_ports_link_status(l2fwd_enabled_port_mask);

+	/*
+	 * Set the enabled port mask in helper conf to be used by helper
+	 * sub-system.
+	 * This would be used while initializing devices using
+	 * helper sub-system.
+	 */
+	mode_conf->eth_portmask = l2fwd_enabled_port_mask;
+
+	/* Initialize eventmode components */
+	rte_eventmode_helper_initialize_devs(mode_conf);
+
 	ret = 0;
 	/* launch per-lcore init on every lcore */
-	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL, CALL_MASTER);
+	rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)mode_conf,
+			CALL_MASTER);
 	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
 		if (rte_eal_wait_lcore(lcore_id) < 0) {
 			ret = -1;
-- 
2.7.4
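
Note on the ordered workers: an ORDERED event cannot be transmitted directly
without giving up packet order, so the workers redirect each fresh event to
the ATOMIC queue reserved by the helper and do the eth tx only when the event
comes back atomic. A minimal sketch of that hand-off (dev_id, port_id and
tx_queue stand in for the per-link values the workers obtain from the helper):

	struct rte_event ev;

	/* Dequeue a single event from this lcore's event port */
	if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
			/* Second pass: order is guaranteed, do eth tx */
			l2fwd_send_single_pkt(ev.mbuf);
		} else {
			/* First pass: forward to the atomic tx queue */
			ev.event_type = RTE_EVENT_TYPE_CPU;
			ev.op = RTE_EVENT_OP_FORWARD;
			ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
			ev.queue_id = tx_queue;
			rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
		}
	}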