From: Anoob Joseph
To: Jerin Jacob, Nikhil Rao, Erik Gabriel Carrillo, Abhinandan Gujjar,
	Bruce Richardson, Pablo de Lara
Cc: Anoob Joseph, Narayana Prasad, Lukasz Bartosik, Pavan Nikhilesh,
	Hemant Agrawal, Nipun Gupta, Harry van Haaren, Mattias Rönnblom,
	Liang Ma
Date: Mon, 3 Jun 2019 23:02:22 +0530
Message-ID: <1559583160-13944-23-git-send-email-anoobj@marvell.com>
In-Reply-To: <1559583160-13944-1-git-send-email-anoobj@marvell.com>
References: <1559583160-13944-1-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
Subject: [dpdk-dev] [PATCH 22/39] eventdev: add option to specify schedule mode for app stage

The scheduling mode of each event queue depends on that of the
corresponding application stage. Configure the event queues taking this
into account as well.
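As an illustration (not part of the patch), the per-queue selection this
change introduces could be sketched as below. The helper name
set_queue_sched_types() and its arguments are hypothetical; only struct
rte_event_queue_conf, its schedule_type field and the RTE_SCHED_TYPE_*
values come from the eventdev API.

	#include <rte_eventdev.h>

	/*
	 * Assign schedule types per event queue: application-stage queues
	 * use the schedule type selected by the user, while the last queue
	 * (reserved for the final eth Tx stage) stays atomic.
	 */
	static void
	set_queue_sched_types(struct rte_event_queue_conf *queue_conf,
			      int nb_eventqueue, uint8_t app_sched_type)
	{
		int j;

		for (j = 0; j < nb_eventqueue; j++) {
			if (j == nb_eventqueue - 1)
				queue_conf[j].schedule_type =
					RTE_SCHED_TYPE_ATOMIC;
			else
				queue_conf[j].schedule_type = app_sched_type;
		}
	}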
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 lib/librte_eventdev/rte_eventmode_helper.c | 24 ++++++++++++++++++++--
 .../rte_eventmode_helper_internal.h        |  8 ++++++++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eventdev/rte_eventmode_helper.c b/lib/librte_eventdev/rte_eventmode_helper.c
index ec0be44..30bb357 100644
--- a/lib/librte_eventdev/rte_eventmode_helper.c
+++ b/lib/librte_eventdev/rte_eventmode_helper.c
@@ -85,6 +85,8 @@ em_parse_transfer_mode(struct rte_eventmode_helper_conf *conf,
 static void
 em_initialize_helper_conf(struct rte_eventmode_helper_conf *conf)
 {
+	struct eventmode_conf *em_conf = NULL;
+
 	/* Set default conf */
 
 	/* Packet transfer mode: poll */
@@ -92,6 +94,13 @@ em_initialize_helper_conf(struct rte_eventmode_helper_conf *conf)
 
 	/* Keep all ethernet ports enabled by default */
 	conf->eth_portmask = -1;
+
+	/* Get eventmode conf */
+	em_conf = (struct eventmode_conf *)(conf->mode_params);
+
+	/* Schedule type: ordered */
+	/* FIXME */
+	em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED;
 }
 
 struct rte_eventmode_helper_conf * __rte_experimental
@@ -233,8 +242,19 @@ rte_eventmode_helper_initialize_eventdev(struct eventmode_conf *em_conf)
 		eventq_conf.event_queue_cfg =
 				eventdev_config->ev_queue_mode;
 
-		/* Set schedule type as ATOMIC */
-		eventq_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
+		/*
+		 * All queues need to be set with sched_type as
+		 * schedule type for the application stage. One queue
+		 * would be reserved for the final eth tx stage. This
+		 * will be an atomic queue.
+		 */
+		if (j == nb_eventqueue-1) {
+			eventq_conf.schedule_type =
+				RTE_SCHED_TYPE_ATOMIC;
+		} else {
+			eventq_conf.schedule_type =
+				em_conf->ext_params.sched_type;
+		}
 
 		/* Set max atomic flows to 1024 */
 		eventq_conf.nb_atomic_flows = 1024;
diff --git a/lib/librte_eventdev/rte_eventmode_helper_internal.h b/lib/librte_eventdev/rte_eventmode_helper_internal.h
index ee41833..2a6cd90 100644
--- a/lib/librte_eventdev/rte_eventmode_helper_internal.h
+++ b/lib/librte_eventdev/rte_eventmode_helper_internal.h
@@ -61,6 +61,14 @@ struct eventmode_conf {
 	struct rte_eventmode_helper_event_link_info
 			link[EVENT_MODE_MAX_LCORE_LINKS];
 		/**< Per link conf */
+	union {
+		struct {
+			uint64_t sched_type : 2;
+			/**< Schedule type */
+		};
+		uint64_t u64;
+	} ext_params;
+	/**< 64 bit field to specify extended params */
 };
 
 #endif /* _RTE_EVENTMODE_HELPER_INTERNAL_H_ */
-- 
2.7.4