From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anoob Joseph
To: Jerin Jacob, Nikhil Rao, Erik Gabriel Carrillo, Abhinandan Gujjar,
	Bruce Richardson, Pablo de Lara
CC: Anoob Joseph, Narayana Prasad, Lukasz Bartosik, Pavan Nikhilesh,
	Hemant Agrawal, Nipun Gupta, Harry van Haaren, Mattias Rönnblom,
	Liang Ma
Date: Mon, 3 Jun 2019 23:02:28 +0530
Message-ID: <1559583160-13944-29-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1559583160-13944-1-git-send-email-anoobj@marvell.com>
References: <1559583160-13944-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH 28/39] eventdev: add default conf for event port-lcore link
List-Id: DPDK patches and discussions

Generate a default conf for the event port-lcore link, if one is not
specified in the conf. This routine checks the number of available event
ports and then creates links according to the number of cores available.
This patch also adds a new entry in the eventmode conf to denote that all
queues are to be linked with every port. This enables one core to receive
packets from every port.
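For illustration, the core-selection walk described above can be sketched
as a standalone snippet (a minimal sketch, not part of the patch:
next_worker_core is a hypothetical name, and it assumes the rx-core
reservation fits in a 64-bit mask; rte_get_next_lcore() is the stock
DPDK lcore iterator):

#include <stdint.h>
#include <rte_lcore.h>

/*
 * Return the next enabled worker lcore after prev_core, skipping the
 * cores reserved for ethernet rx. Returns RTE_MAX_LCORE when no worker
 * core is left. Start the walk with prev_core = -1.
 */
static unsigned int
next_worker_core(uint64_t eth_core_mask, unsigned int prev_core)
{
	unsigned int core = rte_get_next_lcore(prev_core, 0, 0);

	/* Skip cores that are reserved for ethernet rx */
	while (core != RTE_MAX_LCORE && (eth_core_mask & (1ULL << core)))
		core = rte_get_next_lcore(core, 0, 0);

	return core;
}

Each event port then takes the next core returned by this walk; once the
walk hits RTE_MAX_LCORE, the remaining ports are simply left unlinked.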
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 lib/librte_eventdev/rte_eventmode_helper.c         | 109 ++++++++++++++++++++-
 .../rte_eventmode_helper_internal.h                |   5 +
 2 files changed, 113 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eventdev/rte_eventmode_helper.c b/lib/librte_eventdev/rte_eventmode_helper.c
index a24d654..191eb77 100644
--- a/lib/librte_eventdev/rte_eventmode_helper.c
+++ b/lib/librte_eventdev/rte_eventmode_helper.c
@@ -70,6 +70,28 @@ internal_get_next_rx_core(struct eventmode_conf *em_conf,
 	return next_core;
 }
 
+static inline unsigned int
+internal_get_next_active_core(struct eventmode_conf *em_conf,
+		unsigned int prev_core)
+{
+	unsigned int next_core;
+
+get_next_core:
+	/* Get the next core */
+	next_core = rte_get_next_lcore(prev_core, 0, 0);
+
+	/* Check if we have reached max lcores */
+	if (next_core == RTE_MAX_LCORE)
+		return next_core;
+
+	/* Some cores would be reserved as rx cores. Skip them */
+	if (em_conf->eth_core_mask & (1 << next_core)) {
+		prev_core = next_core;
+		goto get_next_core;
+	}
+
+	return next_core;
+}
 
 /* Global functions */
 
@@ -350,6 +372,74 @@ rte_eventmode_helper_set_default_conf_rx_adapter(struct eventmode_conf *em_conf)
 }
 
 static int
+rte_eventmode_helper_set_default_conf_link(struct eventmode_conf *em_conf)
+{
+	int i, j;
+	struct eventdev_params *eventdev_config;
+	unsigned int lcore_id = -1;
+	int link_index;
+	struct rte_eventmode_helper_event_link_info *link;
+
+	/*
+	 * Create a 1:1 mapping from event ports to cores. If the number
+	 * of event ports is less than the number of cores, some cores
+	 * won't run a worker. If there are more event ports than cores,
+	 * some ports won't be used.
+	 *
+	 */
+
+	/*
+	 * The event queue-port mapping is done according to the link. Since
+	 * we are falling back to the default link conf, enable the
+	 * "all_ev_queue_to_ev_port" mode flag. This will map all queues to
+	 * the port.
+	 */
+	em_conf->ext_params.all_ev_queue_to_ev_port = 1;
+
+	for (i = 0; i < em_conf->nb_eventdev; i++) {
+
+		/* Get event dev conf */
+		eventdev_config = &(em_conf->eventdev_config[i]);
+
+		/* Loop through the ports */
+		for (j = 0; j < eventdev_config->nb_eventport; j++) {
+
+			/* Get next active core id */
+			lcore_id = internal_get_next_active_core(em_conf,
+					lcore_id);
+
+			if (lcore_id == RTE_MAX_LCORE) {
+				/* Reached max cores */
+				return 0;
+			}
+
+			/* Save the current combination as one link */
+
+			/* Get the index */
+			link_index = em_conf->nb_link;
+
+			/* Get the corresponding link */
+			link = &(em_conf->link[link_index]);
+
+			/* Save link */
+			link->eventdev_id = eventdev_config->eventdev_id;
+			link->event_portid = j;
+			link->lcore_id = lcore_id;
+
+			/*
+			 * Not setting eventq_id as by default all queues
+			 * need to be mapped to the port, and this is
+			 * controlled by the operating mode.
+			 */
+
+			/* Update number of links */
+			em_conf->nb_link++;
+		}
+	}
+	return 0;
+}
+
+static int
 rte_eventmode_helper_validate_conf(struct eventmode_conf *em_conf)
 {
 	int ret;
@@ -379,6 +469,16 @@ rte_eventmode_helper_validate_conf(struct eventmode_conf *em_conf)
 		return ret;
 	}
 
+	/*
+	 * See if links are specified. Else generate a default conf for
+	 * the event ports used.
+	 */
+	if (em_conf->nb_link == 0) {
+		ret = rte_eventmode_helper_set_default_conf_link(em_conf);
+		if (ret != 0)
+			return ret;
+	}
+
 	return 0;
 }
 
@@ -508,7 +608,14 @@ rte_eventmode_helper_initialize_eventdev(struct eventmode_conf *em_conf)
 		/* Get event dev ID */
 		eventdev_id = link->eventdev_id;
 
-		queue = &(link->eventq_id);
+		/*
+		 * If the "all_ev_queue_to_ev_port" params flag is set, all
+		 * queues need to be mapped to the port.
+		 */
+		if (em_conf->ext_params.all_ev_queue_to_ev_port)
+			queue = NULL;
+		else
+			queue = &(link->eventq_id);
 
 		/* Link queue to port */
 		ret = rte_event_port_link(eventdev_id, link->event_portid,
diff --git a/lib/librte_eventdev/rte_eventmode_helper_internal.h b/lib/librte_eventdev/rte_eventmode_helper_internal.h
index 7cc5776..499cf5d 100644
--- a/lib/librte_eventdev/rte_eventmode_helper_internal.h
+++ b/lib/librte_eventdev/rte_eventmode_helper_internal.h
@@ -96,6 +96,11 @@ struct eventmode_conf {
 	struct {
 		uint64_t sched_type : 2;
 		/**< Schedule type */
+		uint64_t all_ev_queue_to_ev_port : 1;
+		/**<
+		 * When enabled, all event queues need to be mapped to
+		 * each event port.
+		 */
 	};
 	uint64_t u64;
 } ext_params;
-- 
2.7.4
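
As a usage note on the linking convention relied on above (a minimal
sketch, not part of the patch; link_all_queues is a hypothetical wrapper
and dev_id/port_id are placeholder values): rte_event_port_link() treats
a NULL queues[] array as "link every queue configured on this device",
which is what the helper now passes when all_ev_queue_to_ev_port is set.

#include <rte_eventdev.h>

/*
 * Link all configured event queues of dev_id to port_id. With
 * queues[] == NULL, priorities default to normal priority and the
 * nb_links argument is ignored; the return value is the number of
 * links actually established.
 */
static int
link_all_queues(uint8_t dev_id, uint8_t port_id)
{
	return rte_event_port_link(dev_id, port_id, NULL, NULL, 0);
}

This is why the helper only has to flip the ext_params bit and skip
setting eventq_id per link, rather than enumerating every queue-port
pair itself.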