DPDK patches and discussions
From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
To: Sunil Kumar Kori <skori@marvell.com>,
	Jerin Jacob Kollanukkaran <jerinj@marvell.com>,
	Marko Kovacevic <marko.kovacevic@intel.com>,
	Ori Kam <orika@mellanox.com>,
	Bruce Richardson <bruce.richardson@intel.com>,
	"Radu Nicolau" <radu.nicolau@intel.com>,
	Akhil Goyal <akhil.goyal@nxp.com>,
	"Tomasz Kantecki" <tomasz.kantecki@intel.com>,
	Sunil Kumar Kori <skori@marvell.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: fix error checking
Date: Fri, 1 May 2020 11:15:43 +0000	[thread overview]
Message-ID: <BYAPR18MB251884603421FE050384B80FDEAB0@BYAPR18MB2518.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20200417082516.28803-1-skori@marvell.com>

>Subject: [PATCH] examples/l3fwd: fix error checking
>
>This patch fixes Coverity issues by handling the return values of
>API calls.
>
>Coverity issue: 354227, 354232, 354238, 354239, 354240
>
>Fixes: aaf58cb85b62 ("examples/l3fwd: add event port and queue
>setup")
>
>Signed-off-by: Sunil Kumar Kori <skori@marvell.com>

Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

>---
> examples/l3fwd/l3fwd_event.c               |  6 +++++-
> examples/l3fwd/l3fwd_event_generic.c       |  9 +++++++--
> examples/l3fwd/l3fwd_event_internal_port.c | 10 ++++++++--
> 3 files changed, 20 insertions(+), 5 deletions(-)
>
>diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
>index 43c47eade..4d31593a0 100644
>--- a/examples/l3fwd/l3fwd_event.c
>+++ b/examples/l3fwd/l3fwd_event.c
>@@ -70,7 +70,11 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
> 		printf("Creating queues: nb_rxq=%d nb_txq=1...\n",
> 		       evt_rsrc->eth_rx_queues);
>
>-		rte_eth_dev_info_get(port_id, &dev_info);
>+		ret = rte_eth_dev_info_get(port_id, &dev_info);
>+		if (ret != 0)
>+			rte_panic("Error during getting device (port %u) info:"
>+				  "%s\n", port_id, strerror(-ret));
>+
> 		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> 			local_port_conf.txmode.offloads |=
> 					DEV_TX_OFFLOAD_MBUF_FAST_FREE;
>diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
>index c69c611dd..f8c98435d 100644
>--- a/examples/l3fwd/l3fwd_event_generic.c
>+++ b/examples/l3fwd/l3fwd_event_generic.c
>@@ -101,7 +101,9 @@ l3fwd_event_port_setup_generic(void)
> 		rte_panic("No space is available\n");
>
> 	memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
>-	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
>+	ret = rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
>+	if (ret < 0)
>+		rte_panic("Error to get default configuration of event port\n");
>
> 	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
> 		event_p_conf.new_event_threshold =
>@@ -161,7 +163,10 @@ l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
> 	if (!evt_rsrc->evq.event_q_id)
> 		rte_panic("Memory allocation failure\n");
>
>-	rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
>+	ret = rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
>+	if (ret < 0)
>+		rte_panic("Error to get default config of event queue\n");
>+
> 	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
> 		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
>
>diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
>index 993e26f13..03ac581d6 100644
>--- a/examples/l3fwd/l3fwd_event_internal_port.c
>+++ b/examples/l3fwd/l3fwd_event_internal_port.c
>@@ -99,7 +99,10 @@ l3fwd_event_port_setup_internal_port(void)
> 	if (!evt_rsrc->evp.event_p_id)
> 		rte_panic("Failed to allocate memory for Event Ports\n");
>
>-	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
>+	ret = rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
>+	if (ret < 0)
>+		rte_panic("Error to get default configuration of event port\n");
>+
> 	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
> 		event_p_conf.new_event_threshold =
> 				def_p_conf.new_event_threshold;
>@@ -150,7 +153,10 @@ l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
> 	uint8_t event_q_id = 0;
> 	int32_t ret;
>
>-	rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
>+	ret = rte_event_queue_default_conf_get(event_d_id, event_q_id,
>+					       &def_q_conf);
>+	if (ret < 0)
>+		rte_panic("Error to get default config of event queue\n");
>
> 	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
> 		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
>--
>2.17.1


Thread overview: 3+ messages
2020-04-17  8:25 Sunil Kumar Kori
2020-05-01 11:15 ` Pavan Nikhilesh Bhagavatula [this message]
2020-05-01 17:43   ` Jerin Jacob
