From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anoob Joseph
To: Akhil Goyal, Radu Nicolau
CC: Anoob Joseph, Thomas Monjalon, Jerin Jacob, Narayana Prasad, Lukasz Bartosik
Date: Wed, 9 Oct 2019 20:40:05 +0530
Message-ID: <1570633816-4706-3-git-send-email-anoobj@marvell.com>
In-Reply-To: <1570633816-4706-1-git-send-email-anoobj@marvell.com>
References: <1570633816-4706-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [RFC PATCH 02/13] examples/ipsec-secgw: add eventdev port-lcore link
List-Id: DPDK patches and discussions

Add the event device port-lcore link and specify which event queues
should be connected to the event port. If no links are specified in
the configuration, generate a default config for the event port-lcore
links: the routine checks the number of available event ports and
creates links according to the number of available cores.

This patch also adds a new entry in the eventmode conf to denote that
all queues are to be linked with every port. This enables a single
core to receive packets from every port.
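The default link generation described above can be sketched without DPDK as follows. This is an illustration only: `MAX_CORES`, `MAX_LINKS`, `eth_core_mask`, `struct link_info`, `next_active_core()` and `default_links()` are hypothetical stand-ins for `RTE_MAX_LCORE`, the `em_conf->eth_core_mask` bitmap, `struct eh_event_link_info`, `eh_get_next_active_core()` and `eh_set_default_conf_link()` in the patch, reduced to a single event device.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_CORES 8	/* stand-in for RTE_MAX_LCORE */
#define MAX_LINKS 16

/* Stand-in for struct eh_event_link_info */
struct link_info {
	uint8_t event_portid;
	unsigned int lcore_id;
};

/* Cores reserved for Rx/Tx; stand-in for the eth_core_mask bitmap */
static const int eth_core_mask[MAX_CORES] = { 1, 0, 0, 1, 0, 0, 0, 0 };

/* Next core after prev_core that is not reserved for Rx/Tx, or
 * MAX_CORES when none is left (mirrors eh_get_next_active_core()) */
static unsigned int
next_active_core(unsigned int prev_core)
{
	unsigned int c;

	/* prev_core starts at (unsigned int)-1, so +1 wraps to core 0 */
	for (c = prev_core + 1; c < MAX_CORES; c++)
		if (!eth_core_mask[c])
			return c;
	return MAX_CORES;
}

/* Create one link per event port, assigning cores 1:1. Stops early
 * when cores run out, so extra ports stay unused (mirrors the default
 * conf generated by eh_set_default_conf_link()). */
static int
default_links(int nb_eventport, struct link_info *links)
{
	int j, nb_link = 0;
	unsigned int lcore_id = (unsigned int)-1;

	for (j = 0; j < nb_eventport; j++) {
		lcore_id = next_active_core(lcore_id);
		if (lcore_id == MAX_CORES)
			break;	/* fewer worker cores than ports */
		links[nb_link].event_portid = (uint8_t)j;
		links[nb_link].lcore_id = lcore_id;
		nb_link++;
	}
	return nb_link;
}
```

With the mask above (cores 0 and 3 reserved for Rx/Tx), three event ports land on worker cores 1, 2 and 4; asking for more ports than free cores simply yields fewer links.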
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/event_helper.c | 132 ++++++++++++++++++++++++++++++++++++
 examples/ipsec-secgw/event_helper.h |  33 +++++++++
 2 files changed, 165 insertions(+)

diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index af7c7e2..3080625 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -3,9 +3,34 @@
  */
 #include
 #include
+#include
+#include

 #include "event_helper.h"

+static inline unsigned int
+eh_get_next_active_core(struct eventmode_conf *em_conf,
+		unsigned int prev_core)
+{
+	unsigned int next_core;
+
+get_next_core:
+	/* Get the next core */
+	next_core = rte_get_next_lcore(prev_core, 0, 0);
+
+	/* Check if we have reached max lcores */
+	if (next_core == RTE_MAX_LCORE)
+		return next_core;
+
+	/* Some cores would be reserved as rx cores. Skip them */
+	if (rte_bitmap_get(em_conf->eth_core_mask, next_core)) {
+		prev_core = next_core;
+		goto get_next_core;
+	}
+
+	return next_core;
+}
+
 static int
 eh_validate_user_params(struct eventmode_conf *em_conf)
 {
@@ -74,6 +99,74 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf)
 }

 static int
+eh_set_default_conf_link(struct eventmode_conf *em_conf)
+{
+	int i, j;
+	struct eventdev_params *eventdev_config;
+	unsigned int lcore_id = -1;
+	int link_index;
+	struct eh_event_link_info *link;
+
+	/*
+	 * Create a 1:1 mapping from event ports to cores. If the number
+	 * of event ports is lesser than the cores, some cores won't
+	 * execute worker. If event ports are more, then some ports won't
+	 * be used.
+	 *
+	 */
+
+	/*
+	 * The event queue-port mapping is done according to the link. Since
+	 * we are falling back to the default link conf, enabling
+	 * "all_ev_queue_to_ev_port" mode flag. This will map all queues to the
+	 * port.
+	 */
+	em_conf->ext_params.all_ev_queue_to_ev_port = 1;
+
+	for (i = 0; i < em_conf->nb_eventdev; i++) {
+
+		/* Get event dev conf */
+		eventdev_config = &(em_conf->eventdev_config[i]);
+
+		/* Loop through the ports */
+		for (j = 0; j < eventdev_config->nb_eventport; j++) {
+
+			/* Get next active core id */
+			lcore_id = eh_get_next_active_core(em_conf,
+					lcore_id);
+
+			if (lcore_id == RTE_MAX_LCORE) {
+				/* Reached max cores */
+				return 0;
+			}
+
+			/* Save the current combination as one link */
+
+			/* Get the index */
+			link_index = em_conf->nb_link;
+
+			/* Get the corresponding link */
+			link = &(em_conf->link[link_index]);
+
+			/* Save link */
+			link->eventdev_id = eventdev_config->eventdev_id;
+			link->event_portid = j;
+			link->lcore_id = lcore_id;
+
+			/*
+			 * Not setting eventq_id as by default all queues
+			 * need to be mapped to the port, and is controlled
+			 * by the operating mode.
+			 */
+
+			/* Update number of links */
+			em_conf->nb_link++;
+		}
+	}
+	return 0;
+}
+
+static int
 eh_validate_conf(struct eventmode_conf *em_conf)
 {
 	int ret;
@@ -93,6 +186,16 @@ eh_validate_conf(struct eventmode_conf *em_conf)
 			return ret;
 	}

+	/*
+	 * See if links are specified. Else generate a default conf for
+	 * the event ports used.
+	 */
+	if (em_conf->nb_link == 0) {
+		ret = eh_set_default_conf_link(em_conf);
+		if (ret != 0)
+			return ret;
+	}
+
 	return 0;
 }

@@ -104,10 +207,12 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 	struct rte_event_dev_config eventdev_conf;
 	struct rte_event_dev_info evdev_default_conf;
 	struct rte_event_queue_conf eventq_conf = {0};
+	struct eh_event_link_info *link;
 	struct eventdev_params *eventdev_config;
 	int nb_eventdev = em_conf->nb_eventdev;
 	int nb_eventqueue;
 	uint8_t eventdev_id;
+	uint8_t *queue = NULL;

 	for (i = 0; i < nb_eventdev; i++) {

@@ -205,6 +310,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 		}
 	}

+	/* Make event queue - event port link */
+	for (j = 0; j < em_conf->nb_link; j++) {
+
+		/* Get link info */
+		link = &(em_conf->link[j]);
+
+		/* Get event dev ID */
+		eventdev_id = link->eventdev_id;
+
+		/*
+		 * If "all_ev_queue_to_ev_port" params flag is selected, all
+		 * queues need to be mapped to the port.
+		 */
+		if (em_conf->ext_params.all_ev_queue_to_ev_port)
+			queue = NULL;
+		else
+			queue = &(link->eventq_id);
+
+		/* Link queue to port */
+		ret = rte_event_port_link(eventdev_id, link->event_portid,
+				queue, NULL, 1);
+		if (ret < 0) {
+			EH_LOG_ERR("Error in event port linking");
+			return ret;
+		}
+	}
+
 	/* Start event devices */
 	for (i = 0; i < nb_eventdev; i++) {

diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 71990f9..052ff25 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -20,6 +20,13 @@ extern "C" {
 /* Max event devices supported */
 #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS

+/* Max event queues supported per event device */
+#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV
+
+/* Max event-lcore links */
+#define EVENT_MODE_MAX_LCORE_LINKS \
+	(EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV)
+
 /**
  * Packet transfer mode of the application
  */
@@ -36,17 +43,43 @@ struct eventdev_params {
 	uint8_t ev_queue_mode;
 };

+/**
+ * Event-lcore link configuration
+ */
+struct eh_event_link_info {
+	uint8_t eventdev_id;
+		/**< Event device ID */
+	uint8_t event_portid;
+		/**< Event port ID */
+	uint8_t eventq_id;
+		/**< Event queue to be linked to the port */
+	uint8_t lcore_id;
+		/**< Lcore to be polling on this port */
+};
+
 /* Eventmode conf data */
 struct eventmode_conf {
 	int nb_eventdev;
 		/**< No of event devs */
 	struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS];
 		/**< Per event dev conf */
+	uint8_t nb_link;
+		/**< No of links */
+	struct eh_event_link_info
+		link[EVENT_MODE_MAX_LCORE_LINKS];
+		/**< Per link conf */
+	struct rte_bitmap *eth_core_mask;
+		/**< Core mask of cores to be used for software Rx and Tx */
 	union {
 		RTE_STD_C11
 		struct {
 			uint64_t sched_type			: 2;
 		/**< Schedule type */
+			uint64_t all_ev_queue_to_ev_port	: 1;
+		/**<
+		 * When enabled, all event queues need to be mapped to
+		 * each event port
+		 */
 		};
 		uint64_t u64;
 	} ext_params;
-- 
2.7.4
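A note on the `ext_params` layout the patch extends: the new `all_ev_queue_to_ev_port` flag is a one-bit field inside a union with a `uint64_t` view, so the whole option set can be read or reset through a single integer. A minimal sketch of that pattern (illustration only; `ext_params_sketch` is a hypothetical stand-in, and exact bit placement within the `uint64_t` is implementation-defined):

```c
#include <assert.h>
#include <stdint.h>

/* Option bits packed alongside a uint64_t view of the same storage,
 * mirroring the ext_params union in event_helper.h. The anonymous
 * struct member requires C11. */
union ext_params_sketch {
	struct {
		uint64_t sched_type : 2;
		/* Flag added by this patch */
		uint64_t all_ev_queue_to_ev_port : 1;
	};
	uint64_t u64;
};
```

Writing any flag makes the change visible through `u64`, and zeroing `u64` clears every option at once, which is the main convenience of this layout.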