From mboxrd@z Thu Jan  1 00:00:00 1970
From: Gage Eads <gage.eads@intel.com>
To: dev@dpdk.org
Cc: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com,
 hemant.agrawal@nxp.com, harry.van.haaren@intel.com, nipun.gupta@nxp.com
Date: Mon, 6 Mar 2017 11:02:48 -0600
Message-Id: <1488819768-9474-1-git-send-email-gage.eads@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1488430056-32055-1-git-send-email-gage.eads@intel.com>
References: <1488430056-32055-1-git-send-email-gage.eads@intel.com>
Subject: [dpdk-dev] [PATCH v2] eventdev: Fix links_map initialization for sw PMD

This patch initializes the links_map array entries to
EVENT_QUEUE_SERVICE_PRIORITY_INVALID, as expected by
rte_event_port_links_get(). This is necessary for the sw eventdev PMD,
which does not initialize links_map when rte_event_port_setup() calls
rte_event_port_unlink().

Signed-off-by: Gage Eads <gage.eads@intel.com>
---
v2: Refined commit message's description of patch

 lib/librte_eventdev/rte_eventdev.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 68bfc3b..b8cd92b 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -190,6 +190,8 @@ rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 	return 0;
 }
 
+#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
+
 static inline int
 rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 {
@@ -251,6 +253,9 @@ rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 					"nb_ports %u", nb_ports);
 			return -(ENOMEM);
 		}
+		for (i = 0; i < nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV; i++)
+			dev->data->links_map[i] =
+				EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
 	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
 
@@ -305,6 +310,10 @@ rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 
 		if (nb_ports > old_nb_ports) {
 			uint8_t new_ps = nb_ports - old_nb_ports;
+			unsigned int old_links_map_end =
+				old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV;
+			unsigned int links_map_end =
+				nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV;
 
 			memset(ports + old_nb_ports, 0,
 				sizeof(ports[0]) * new_ps);
@@ -312,9 +321,9 @@ rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 				sizeof(ports_dequeue_depth[0]) * new_ps);
 			memset(ports_enqueue_depth + old_nb_ports, 0,
 				sizeof(ports_enqueue_depth[0]) * new_ps);
-			memset(links_map +
-				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
-				0, sizeof(ports_enqueue_depth[0]) * new_ps);
+			for (i = old_links_map_end; i < links_map_end; i++)
+				links_map[i] =
+					EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
 		}
 
 		dev->data->ports = ports;
@@ -815,8 +824,6 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
 	return diag;
 }
 
-#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
-
 int
 rte_event_port_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
 		      uint16_t nb_unlinks)
-- 
2.7.4
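
For context, a minimal sketch (not part of the patch) of how the fix is
visible through the public eventdev API. The helper name and the assumption
that dev_id/port_id refer to a port that has been configured with
rte_event_port_setup() but never linked are illustrative:

/* Illustrative only -- assumes dev_id/port_id identify a device (e.g. the
 * sw PMD) whose port was set up but never linked via rte_event_port_link().
 */
#include <rte_eventdev.h>

static int
check_unlinked_port_reports_no_links(uint8_t dev_id, uint8_t port_id)
{
	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
	uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
	int nb_links;

	/* rte_event_port_links_get() reports only those links_map entries
	 * that are not EVENT_QUEUE_SERVICE_PRIORITY_INVALID. Without this
	 * patch, the sw PMD's links_map entries were left zeroed, so stale
	 * zero entries could be reported as valid links.
	 */
	nb_links = rte_event_port_links_get(dev_id, port_id, queues,
					    priorities);
	if (nb_links < 0)
		return nb_links;

	/* With the fix, a freshly set-up, never-linked port reports 0. */
	return (nb_links == 0) ? 0 : -1;
}

The key point is that rte_event_port_links_get() skips entries marked
EVENT_QUEUE_SERVICE_PRIORITY_INVALID, so initializing unlinked entries to
that value is what makes a never-linked port report zero links.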