From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anoob Joseph
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
CC: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi,
    Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik,
    Konstantin Ananyev
Date: Mon, 20 Jan 2020 19:15:09 +0530
Message-ID: <1579527918-360-4-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1579527918-360-1-git-send-email-anoobj@marvell.com>
References: <1575808249-31135-1-git-send-email-anoobj@marvell.com>
 <1579527918-360-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 03/12] examples/ipsec-secgw: add eventdev
 port-lcore link
List-Id: DPDK patches and discussions

Add event device port-lcore links and specify which event queues should
be connected to each event port. Generate a default config for event
port-lcore links if it is not specified in the configuration. This
routine checks the number of available ports and then creates links
according to the number of cores available.

This patch also adds a new entry in the eventmode conf to denote that
all queues are to be linked with every port. This enables one core to
receive packets from all ethernet ports.
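The default link generation described above (one event port mapped to the
next available worker lcore, skipping cores reserved for ethernet I/O) can be
sketched as a standalone C model. The names below (`make_default_links`,
`next_worker_lcore`, `link_info`) are illustrative stand-ins, not the patch's
actual API, and a plain array replaces the rte_bitmap eth-core mask:

```c
#include <assert.h>
#include <stdio.h>

#define MAX_LCORE 8

struct link_info {
	int event_port_id;
	int lcore_id;
};

/* Return the next active lcore after prev that is not reserved as an
 * eth core, or MAX_LCORE when no such core is left. */
static int next_worker_lcore(int prev, const int active[], const int eth_core[])
{
	for (int c = prev + 1; c < MAX_LCORE; c++)
		if (active[c] && !eth_core[c])
			return c;
	return MAX_LCORE;
}

/* Build one link per event port; if there are fewer eligible cores than
 * ports, stop early and leave the remaining ports unused. Returns the
 * number of links created. */
static int make_default_links(int nb_ports, const int active[],
			      const int eth_core[], struct link_info out[])
{
	int nb_link = 0;
	int lcore = -1;

	for (int port = 0; port < nb_ports; port++) {
		lcore = next_worker_lcore(lcore, active, eth_core);
		if (lcore == MAX_LCORE)
			break; /* ran out of worker cores */
		out[nb_link].event_port_id = port;
		out[nb_link].lcore_id = lcore;
		nb_link++;
	}
	return nb_link;
}
```

With six active cores of which core 1 is an eth core, four event ports map
to cores 0, 2, 3 and 4, mirroring the 1:1 port-to-core policy of the patch.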
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/event_helper.c | 126 ++++++++++++++++++++++++++++++++++++
 examples/ipsec-secgw/event_helper.h |  33 ++++++++++
 2 files changed, 159 insertions(+)

diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 82425de..cf2dff0 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -1,11 +1,33 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (C) 2020 Marvell International Ltd.
  */
+#include
 #include
 #include
+#include

 #include "event_helper.h"

+static inline unsigned int
+eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core)
+{
+	unsigned int next_core;
+
+	/* Get next active core skipping cores reserved as eth cores */
+	do {
+		/* Get the next core */
+		next_core = rte_get_next_lcore(prev_core, 0, 0);
+
+		/* Check if we have reached max lcores */
+		if (next_core == RTE_MAX_LCORE)
+			return next_core;
+
+		prev_core = next_core;
+	} while (rte_bitmap_get(em_conf->eth_core_mask, next_core));
+
+	return next_core;
+}
+
 static int
 eh_set_default_conf_eventdev(struct eventmode_conf *em_conf)
 {
@@ -81,6 +103,71 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf)
 }

 static int
+eh_set_default_conf_link(struct eventmode_conf *em_conf)
+{
+	struct eventdev_params *eventdev_config;
+	struct eh_event_link_info *link;
+	unsigned int lcore_id = -1;
+	int i, link_index;
+
+	/*
+	 * Create a 1:1 mapping from event ports to cores. If the number
+	 * of event ports is lesser than the cores, some cores won't
+	 * execute worker. If there are more event ports, then some ports
+	 * won't be used.
+	 *
+	 */
+
+	/*
+	 * The event queue-port mapping is done according to the link. Since
+	 * we are falling back to the default link config, enabling
+	 * "all_ev_queue_to_ev_port" mode flag. This will map all queues
+	 * to the port.
+	 */
+	em_conf->ext_params.all_ev_queue_to_ev_port = 1;
+
+	/* Get first event dev conf */
+	eventdev_config = &(em_conf->eventdev_config[0]);
+
+	/* Loop through the ports */
+	for (i = 0; i < eventdev_config->nb_eventport; i++) {
+
+		/* Get next active core id */
+		lcore_id = eh_get_next_active_core(em_conf,
+				lcore_id);
+
+		if (lcore_id == RTE_MAX_LCORE) {
+			/* Reached max cores */
+			return 0;
+		}
+
+		/* Save the current combination as one link */
+
+		/* Get the index */
+		link_index = em_conf->nb_link;
+
+		/* Get the corresponding link */
+		link = &(em_conf->link[link_index]);
+
+		/* Save link */
+		link->eventdev_id = eventdev_config->eventdev_id;
+		link->event_port_id = i;
+		link->lcore_id = lcore_id;
+
+		/*
+		 * Don't set eventq_id as by default all queues
+		 * need to be mapped to the port, which is controlled
+		 * by the operating mode.
+		 */
+
+		/* Update number of links */
+		em_conf->nb_link++;
+	}
+
+	return 0;
+}
+
+static int
 eh_validate_conf(struct eventmode_conf *em_conf)
 {
 	int ret;
@@ -95,6 +182,16 @@ eh_validate_conf(struct eventmode_conf *em_conf)
 		return ret;
 	}

+	/*
+	 * Check if links are specified. Else generate a default config for
+	 * the event ports used.
+	 */
+	if (em_conf->nb_link == 0) {
+		ret = eh_set_default_conf_link(em_conf);
+		if (ret != 0)
+			return ret;
+	}
+
 	return 0;
 }

@@ -106,6 +203,8 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 	struct rte_event_dev_config eventdev_conf;
 	struct eventdev_params *eventdev_config;
 	int nb_eventdev = em_conf->nb_eventdev;
+	struct eh_event_link_info *link;
+	uint8_t *queue = NULL;
 	uint8_t eventdev_id;
 	int nb_eventqueue;
 	uint8_t i, j;
@@ -205,6 +304,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 		}
 	}

+	/* Make event queue - event port link */
+	for (j = 0; j < em_conf->nb_link; j++) {
+
+		/* Get link info */
+		link = &(em_conf->link[j]);
+
+		/* Get event dev ID */
+		eventdev_id = link->eventdev_id;
+
+		/*
+		 * If "all_ev_queue_to_ev_port" params flag is selected, all
+		 * queues need to be mapped to the port.
+		 */
+		if (em_conf->ext_params.all_ev_queue_to_ev_port)
+			queue = NULL;
+		else
+			queue = &(link->eventq_id);
+
+		/* Link queue to port */
+		ret = rte_event_port_link(eventdev_id, link->event_port_id,
+				queue, NULL, 1);
+		if (ret < 0) {
+			EH_LOG_ERR("Failed to link event port %d", ret);
+			return ret;
+		}
+	}
+
 	/* Start event devices */
 	for (i = 0; i < nb_eventdev; i++) {

diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 7685987..16b03b3 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -20,6 +20,13 @@ extern "C" {
 /* Max event devices supported */
 #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS

+/* Max event queues supported per event device */
+#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV
+
+/* Max event-lcore links */
+#define EVENT_MODE_MAX_LCORE_LINKS \
+	(EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV)
+
 /**
  * Packet transfer mode of the application
  */
@@ -36,17 +43,43 @@ struct eventdev_params {
 	uint8_t ev_queue_mode;
 };

+/**
+ * Event-lcore link configuration
+ */
+struct eh_event_link_info {
+	uint8_t eventdev_id;
+		/**< Event device ID */
+	uint8_t event_port_id;
+		/**< Event port ID */
+	uint8_t eventq_id;
+		/**< Event queue to be linked to the port */
+	uint8_t lcore_id;
+		/**< Lcore to be polling on this port */
+};
+
 /* Eventmode conf data */
 struct eventmode_conf {
 	int nb_eventdev;
 		/**< No of event devs */
 	struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS];
 		/**< Per event dev conf */
+	uint8_t nb_link;
+		/**< No of links */
+	struct eh_event_link_info
+		link[EVENT_MODE_MAX_LCORE_LINKS];
+		/**< Per link conf */
+	struct rte_bitmap *eth_core_mask;
+		/**< Core mask of cores to be used for software Rx and Tx */
 	union {
 		RTE_STD_C11
 		struct {
 			uint64_t sched_type			: 2;
 		/**< Schedule type */
+			uint64_t all_ev_queue_to_ev_port	: 1;
+		/**<
+		 * When enabled, all event queues need to be mapped to
+		 * each event port
+		 */
 		};
 		uint64_t u64;
 	} ext_params;
--
2.7.4
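As a note on the link call used in eh_initialize_eventdev() above: when the
"all_ev_queue_to_ev_port" flag is set, the patch passes a NULL queue list to
rte_event_port_link(), which DPDK interprets as "link every configured event
queue to this port"; otherwise only the single queue stored in the link entry
is connected. That selection logic can be modelled standalone (`link_queues`
is a hypothetical stand-in, not a DPDK function):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the queue-selection step: returns how many queue-port links
 * are made and records the linked queue IDs in linked[]. A NULL-style
 * "all queues" request is represented by the all_to_port flag. */
static int link_queues(int all_to_port, uint8_t eventq_id,
		       int nb_queues, uint8_t linked[])
{
	int n = 0;

	if (all_to_port) {
		/* NULL queue list: link all configured queues to the port */
		for (int q = 0; q < nb_queues; q++)
			linked[n++] = (uint8_t)q;
	} else {
		/* Explicit list: link only the queue named in the link entry */
		linked[n++] = eventq_id;
	}
	return n;
}
```

This mirrors why the default config path can leave eventq_id unset: with the
flag enabled, the per-link queue ID is never consulted.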