From mboxrd@z Thu Jan 1 00:00:00 1970
From:
To: , , , Marko Kovacevic, "Ori Kam", Radu Nicolau, Akhil Goyal, Tomasz Kantecki, "Sunil Kumar Kori", Pavan Nikhilesh
CC:
Date: Sat, 26 Oct 2019 16:40:48 +0530
Message-ID: <20191026111054.15491-5-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20191026111054.15491-1-pbhagavatula@marvell.com>
References: <20191014182247.961-1-pbhagavatula@marvell.com> <20191026111054.15491-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v7 04/10] examples/l2fwd-event: add event device setup
List-Id: DPDK patches and discussions
Sender: "dev"

From: Sunil Kumar Kori

Add event device setup based on event eth Tx adapter capabilities.
Signed-off-by: Sunil Kumar Kori
Signed-off-by: Pavan Nikhilesh
---
 examples/l2fwd-event/l2fwd_event.c          |  3 +
 examples/l2fwd-event/l2fwd_event.h          | 16 ++++
 examples/l2fwd-event/l2fwd_event_generic.c  | 75 +++++++++++++++++-
 .../l2fwd-event/l2fwd_event_internal_port.c | 77 ++++++++++++++++++-
 4 files changed, 169 insertions(+), 2 deletions(-)

diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 7f90e6311..a5c1c2c40 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -57,4 +57,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
 
 	/* Setup eventdev capability callbacks */
 	l2fwd_event_capability_setup(evt_rsrc);
+
+	/* Event device configuration */
+	evt_rsrc->ops.event_device_setup(rsrc);
 }
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index b7aaa39f9..6b5beb041 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -13,11 +13,27 @@
 
 #include "l2fwd_common.h"
 
+typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *rsrc);
+
+struct event_queues {
+	uint8_t nb_queues;
+};
+
+struct event_ports {
+	uint8_t nb_ports;
+};
+
 struct event_setup_ops {
+	event_device_setup_cb event_device_setup;
 };
 
 struct l2fwd_event_resources {
 	uint8_t tx_mode_q;
+	uint8_t has_burst;
+	uint8_t event_d_id;
+	uint8_t disable_implicit_release;
+	struct event_ports evp;
+	struct event_queues evq;
 	struct event_setup_ops ops;
 };
 
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 9afade7d2..33e570585 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -16,8 +16,81 @@
 #include "l2fwd_common.h"
 #include "l2fwd_event.h"
 
+static uint32_t
+l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc)
+{
+	struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+	evt_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	/* One queue for each ethdev port + one Tx adapter Single link queue. */
+	event_d_conf.nb_event_queues = ethdev_count + 1;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count() - rte_service_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
 void
 l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l2fwd_event_device_setup_generic;
 }
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index ce95b8e6d..acd98798e 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -16,8 +16,83 @@
 #include "l2fwd_common.h"
 #include "l2fwd_event.h"
 
+static uint32_t
+l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc)
+{
+	struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	uint8_t disable_implicit_release;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+
+	disable_implicit_release = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+	evt_rsrc->disable_implicit_release =
+						disable_implicit_release;
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	event_d_conf.nb_event_queues = ethdev_count;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
 void
 l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l2fwd_event_device_setup_internal_port;
 }
-- 
2.17.1