From mboxrd@z Thu Jan  1 00:00:00 1970
To: Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau, Akhil Goyal,
	Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh, Nithin Dabilpuram
Date: Thu, 26 Mar 2020 22:26:42 +0530
Message-ID: <20200326165644.866053-27-jerinj@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200326165644.866053-1-jerinj@marvell.com>
References: <20200318213551.3489504-1-jerinj@marvell.com>
	<20200326165644.866053-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 26/28] l3fwd-graph: add graph config and main loop
List-Id: DPDK patches and discussions
Sender: "dev"

From: Nithin Dabilpuram

Add graph creation and configuration logic along with the graph main
loop. The main loop runs on every slave lcore and calls rte_graph_walk()
to walk over the lcore-specific rte_graph. The master lcore accumulates
and prints the graph walk stats of all lcores' graphs.
Signed-off-by: Nithin Dabilpuram
---
 examples/l3fwd-graph/main.c | 242 +++++++++++++++++++++++++++++++++++-
 1 file changed, 240 insertions(+), 2 deletions(-)

diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 47d2f2ecb..28db947b5 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -23,9 +23,13 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -75,12 +79,17 @@ static uint32_t enabled_port_mask;
 
 struct lcore_rx_queue {
 	uint16_t port_id;
 	uint8_t queue_id;
+	char node_name[RTE_NODE_NAMESIZE];
 };
 
 /* Lcore conf */
 struct lcore_conf {
 	uint16_t n_rx_queue;
 	struct lcore_rx_queue rx_queue_list[MAX_RX_QUEUE_PER_LCORE];
+
+	struct rte_graph *graph;
+	char name[RTE_GRAPH_NAMESIZE];
+	rte_graph_t graph_id;
 } __rte_cache_aligned;
 
 static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
@@ -119,6 +128,25 @@ static struct rte_eth_conf port_conf = {
 
 static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
 
+static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
+
+struct ipv4_l3fwd_lpm_route {
+	uint32_t ip;
+	uint8_t depth;
+	uint8_t if_out;
+};
+
+#define IPV4_L3FWD_LPM_NUM_ROUTES \
+	(sizeof(ipv4_l3fwd_lpm_route_array) / \
+	 sizeof(ipv4_l3fwd_lpm_route_array[0]))
+
+/* 198.18.0.0/16 is set aside for RFC2544 benchmarking. */
+static struct ipv4_l3fwd_lpm_route ipv4_l3fwd_lpm_route_array[] = {
+	{RTE_IPV4(198, 18, 0, 0), 24, 0}, {RTE_IPV4(198, 18, 1, 0), 24, 1},
+	{RTE_IPV4(198, 18, 2, 0), 24, 2}, {RTE_IPV4(198, 18, 3, 0), 24, 3},
+	{RTE_IPV4(198, 18, 4, 0), 24, 4}, {RTE_IPV4(198, 18, 5, 0), 24, 5},
+	{RTE_IPV4(198, 18, 6, 0), 24, 6}, {RTE_IPV4(198, 18, 7, 0), 24, 7},
+};
+
 static int
 check_lcore_params(void)
 {
@@ -624,17 +652,87 @@ signal_handler(int signum)
 	}
 }
 
+static void
+print_stats(void)
+{
+	const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0'};
+	const char clr[] = {27, '[', '2', 'J', '\0'};
+	struct rte_graph_cluster_stats_param s_param;
+	struct rte_graph_cluster_stats *stats;
+	const char *pattern = "worker_*";
+
+	/* Prepare stats object */
+	memset(&s_param, 0, sizeof(s_param));
+	s_param.f = stdout;
+	s_param.socket_id = SOCKET_ID_ANY;
+	s_param.graph_patterns = &pattern;
+	s_param.nb_graph_patterns = 1;
+
+	stats = rte_graph_cluster_stats_create(&s_param);
+	if (stats == NULL)
+		rte_exit(EXIT_FAILURE, "Unable to create stats object\n");
+
+	while (!force_quit) {
+		/* Clear screen and move to top left */
+		printf("%s%s", clr, topLeft);
+		rte_graph_cluster_stats_get(stats, 0);
+		rte_delay_ms(1E3);
+	}
+
+	rte_graph_cluster_stats_destroy(stats);
+}
+
+/* Main processing loop */
+static int
+graph_main_loop(void *conf)
+{
+	struct lcore_conf *qconf;
+	struct rte_graph *graph;
+	uint32_t lcore_id;
+
+	RTE_SET_USED(conf);
+
+	lcore_id = rte_lcore_id();
+	qconf = &lcore_conf[lcore_id];
+	graph = qconf->graph;
+
+	if (!graph) {
+		RTE_LOG(INFO, L3FWD_GRAPH, "Lcore %u has nothing to do\n",
+			lcore_id);
+		return 0;
+	}
+
+	RTE_LOG(INFO, L3FWD_GRAPH,
+		"Entering main loop on lcore %u, graph %s(%p)\n", lcore_id,
+		qconf->name, graph);
+
+	while (likely(!force_quit))
+		rte_graph_walk(graph);
+
+	return 0;
+}
+
 int
 main(int argc, char **argv)
 {
+	uint8_t rewrite_data[2 * sizeof(struct rte_ether_addr)];
+	/* Default patterns; extra slots hold per-queue ethdev_rx node names */
+	static const char *node_patterns[MAX_RX_QUEUE_PER_LCORE + 3] = {
+		"ip4*",
+		"ethdev_tx-*",
+		"pkt_drop",
+	};
 	uint8_t nb_rx_queue, queue, socketid;
+	struct rte_graph_param graph_conf;
 	struct rte_eth_dev_info dev_info;
+	uint32_t nb_ports, nb_conf = 0;
 	uint32_t n_tx_queue, nb_lcores;
 	struct rte_eth_txconf *txconf;
-	uint16_t queueid, portid;
+	uint16_t queueid, portid, i;
 	struct lcore_conf *qconf;
+	uint16_t nb_graphs = 0;
+	uint16_t nb_patterns;
+	uint8_t rewrite_len;
 	uint32_t lcore_id;
-	uint32_t nb_ports;
 	int ret;
 
 	/* Init EAL */
@@ -783,6 +881,18 @@ main(int argc, char **argv)
 			queueid++;
 		}
 
+		/* Setup ethdev node config */
+		ethdev_conf[nb_conf].port_id = portid;
+		ethdev_conf[nb_conf].num_rx_queues = nb_rx_queue;
+		ethdev_conf[nb_conf].num_tx_queues = n_tx_queue;
+		if (!per_port_pool)
+			ethdev_conf[nb_conf].mp = pktmbuf_pool[0];
+		else
+			ethdev_conf[nb_conf].mp = pktmbuf_pool[portid];
+		ethdev_conf[nb_conf].mp_count = NB_SOCKETS;
+
+		nb_conf++;
 		printf("\n");
 	}
 
@@ -826,11 +936,26 @@
 					 "port=%d\n", ret, portid);
+
+			/* Add this queue node to its graph */
+			snprintf(qconf->rx_queue_list[queue].node_name,
+				 RTE_NODE_NAMESIZE, "ethdev_rx-%u-%u", portid,
+				 queueid);
+		}
+
+		/* Alloc a graph to this lcore only if source exists */
+		if (qconf->n_rx_queue) {
+			qconf->graph_id = nb_graphs;
+			nb_graphs++;
 		}
 	}
 
 	printf("\n");
 
+	/* Ethdev node config, skip rx queue mapping */
+	ret = rte_node_eth_config(ethdev_conf, nb_conf, nb_graphs);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "rte_node_eth_config: err=%d\n", ret);
+
 	/* Start ports */
 	RTE_ETH_FOREACH_DEV(portid) {
@@ -858,6 +983,119 @@ main(int argc, char **argv)
 
 	check_all_ports_link_status(enabled_port_mask);
 
+	/* Graph Initialization */
+	memset(&graph_conf, 0, sizeof(graph_conf));
+	graph_conf.node_patterns = node_patterns;
+	nb_patterns = 3; /* number of default patterns above */
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		rte_graph_t graph_id;
+		rte_edge_t i;
+
+		if (rte_lcore_is_enabled(lcore_id) == 0)
+			continue;
+
+		qconf = &lcore_conf[lcore_id];
+
+		/* Skip graph creation if no source exists */
+		if (!qconf->n_rx_queue)
+			continue;
+
+		/* Add rx node patterns of this lcore */
+		for (i = 0; i < qconf->n_rx_queue; i++) {
+			graph_conf.node_patterns[nb_patterns + i] =
+				qconf->rx_queue_list[i].node_name;
+		}
+
+		graph_conf.nb_node_patterns = nb_patterns + i;
+		graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id);
+
+		snprintf(qconf->name, sizeof(qconf->name), "worker_%u",
+			 lcore_id);
+
+		graph_id = rte_graph_create(qconf->name, &graph_conf);
+		if (graph_id != qconf->graph_id)
+			rte_exit(EXIT_FAILURE,
+				 "rte_graph_create(): graph_id=%d not "
+				 "as expected for lcore %u (%u)\n",
+				 graph_id, lcore_id, qconf->graph_id);
+
+		qconf->graph = rte_graph_lookup(qconf->name);
+		if (!qconf->graph)
+			rte_exit(EXIT_FAILURE,
+				 "rte_graph_lookup(): graph %s not found\n",
+				 qconf->name);
+	}
+
+	memset(&rewrite_data, 0, sizeof(rewrite_data));
+	rewrite_len = sizeof(rewrite_data);
+
+	/* Add route to ip4 graph infra */
+	for (i = 0; i < IPV4_L3FWD_LPM_NUM_ROUTES; i++) {
+		char route_str[INET6_ADDRSTRLEN * 4];
+		char abuf[INET6_ADDRSTRLEN];
+		struct in_addr in;
+		uint32_t dst_port;
+		uint16_t next_hop;
+
+		/* Skip unused ports */
+		if (((1 << ipv4_l3fwd_lpm_route_array[i].if_out) &
+		     enabled_port_mask) == 0)
+			continue;
+
+		dst_port = ipv4_l3fwd_lpm_route_array[i].if_out;
+		next_hop = i;
+
+		in.s_addr = htonl(ipv4_l3fwd_lpm_route_array[i].ip);
+		snprintf(route_str, sizeof(route_str), "%s / %d (%d)",
+			 inet_ntop(AF_INET, &in, abuf, sizeof(abuf)),
+			 ipv4_l3fwd_lpm_route_array[i].depth,
+			 ipv4_l3fwd_lpm_route_array[i].if_out);
+
+		ret = rte_node_ip4_route_add(
+			ipv4_l3fwd_lpm_route_array[i].ip,
+			ipv4_l3fwd_lpm_route_array[i].depth, next_hop,
+			RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Unable to add ip4 route %s to graph\n",
+				 route_str);
+
+		memcpy(rewrite_data, val_eth + dst_port, rewrite_len);
+
+		/* Add next hop for a given destination */
+		ret = rte_node_ip4_rewrite_add(next_hop, rewrite_data,
+					       rewrite_len, dst_port);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Unable to add next hop %u for route %s\n",
+				 next_hop, route_str);
+
+		RTE_LOG(INFO, L3FWD_GRAPH, "Added route %s, next_hop %u\n",
+			route_str, next_hop);
+	}
+
+	/* Launch per-lcore init on every slave lcore */
+	rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_MASTER);
+
+	/* Accumulate and print stats on master until exit */
+	if (rte_graph_has_stats_feature())
+		print_stats();
+
+	/* Wait for slave cores to exit */
+	ret = 0;
+	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+		ret = rte_eal_wait_lcore(lcore_id);
+		/* Destroy graph */
+		rte_graph_destroy(lcore_conf[lcore_id].name);
+		if (ret < 0) {
+			ret = -1;
+			break;
+		}
+	}
+
 	/* Stop ports */
 	RTE_ETH_FOREACH_DEV(portid) {
 		if ((enabled_port_mask & (1 << portid)) == 0)
-- 
2.25.1