DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Cc: harry.van.haaren@intel.com, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] app/eventdev: fix port dequeue depth configuration
Date: Tue, 30 Jan 2018 14:12:01 +0530
Message-ID: <20180130084200.GA28735@jerin>
In-Reply-To: <20180124093033.20122-1-pbhagavatula@caviumnetworks.com>

-----Original Message-----
> Date: Wed, 24 Jan 2018 15:00:33 +0530
> From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> To: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com
> Cc: dev@dpdk.org, Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH] app/eventdev: fix port dequeue depth
>  configuration
> X-Mailer: git-send-email 2.14.1
> 
> The port dequeue depth value has to be capped at the maximum dequeue
> depth reported by the event device drivers.
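
(Aside, for context: the capping this patch introduces follows the usual
eventdev pattern, sketched below as a standalone, untested example. The
helper name clamp_wkr_deq_dep is hypothetical; rte_event_dev_info_get()
and the max_event_port_dequeue_depth field are the real API from
rte_eventdev.h.)

	#include <rte_eventdev.h>

	/* Clamp a requested worker dequeue depth to the maximum the
	 * event device reports; asking for more would make
	 * rte_event_port_setup() reject the port configuration.
	 */
	static uint16_t
	clamp_wkr_deq_dep(uint8_t dev_id, uint16_t requested)
	{
		struct rte_event_dev_info dev_info;

		rte_event_dev_info_get(dev_id, &dev_info);
		if (requested > dev_info.max_event_port_dequeue_depth)
			requested = dev_info.max_event_port_dequeue_depth;
		return requested;
	}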
> 
> Fixes: 3617aae53f92 ("app/eventdev: add event Rx adapter setup")
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> ---
>  app/test-eventdev/test_perf_atq.c       | 13 ++++++++++++-
>  app/test-eventdev/test_perf_common.c    | 25 +++++--------------------
>  app/test-eventdev/test_perf_common.h    |  3 ++-
>  app/test-eventdev/test_perf_queue.c     | 12 +++++++++++-
>  app/test-eventdev/test_pipeline_atq.c   |  3 +++
>  app/test-eventdev/test_pipeline_queue.c |  3 +++
>  6 files changed, 36 insertions(+), 23 deletions(-)
> 
> diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
> index d07a05425..b36b22a77 100644
> --- a/app/test-eventdev/test_perf_atq.c
> +++ b/app/test-eventdev/test_perf_atq.c
> @@ -207,7 +207,18 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
>  		}
>  	}
>  
> -	ret = perf_event_dev_port_setup(test, opt, 1 /* stride */, nb_queues);
> +	if (opt->wkr_deq_dep > dev_info.max_event_port_dequeue_depth)
> +		opt->wkr_deq_dep = dev_info.max_event_port_dequeue_depth;
> +
> +	/* port configuration */
> +	const struct rte_event_port_conf p_conf = {
> +			.dequeue_depth = opt->wkr_deq_dep,
> +			.enqueue_depth = dev_info.max_event_port_dequeue_depth,
> +			.new_event_threshold = dev_info.max_num_events,
> +	};
> +
> +	ret = perf_event_dev_port_setup(test, opt, 1 /* stride */, nb_queues,
> +			&p_conf);
>  	if (ret)
>  		return ret;
>  
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
> index e279d81a5..59fa0a49e 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -285,22 +285,12 @@ perf_event_rx_adapter_setup(struct evt_options *opt, uint8_t stride,
>  
>  int
>  perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
> -				uint8_t stride, uint8_t nb_queues)
> +				uint8_t stride, uint8_t nb_queues,
> +				const struct rte_event_port_conf *port_conf)
>  {
>  	struct test_perf *t = evt_test_priv(test);
>  	uint16_t port, prod;
>  	int ret = -1;
> -	struct rte_event_port_conf port_conf;
> -
> -	memset(&port_conf, 0, sizeof(struct rte_event_port_conf));
> -	rte_event_port_default_conf_get(opt->dev_id, 0, &port_conf);
> -
> -	/* port configuration */
> -	const struct rte_event_port_conf wkr_p_conf = {
> -			.dequeue_depth = opt->wkr_deq_dep,
> -			.enqueue_depth = port_conf.enqueue_depth,
> -			.new_event_threshold = port_conf.new_event_threshold,
> -	};
>  
>  	/* setup one port per worker, linking to all queues */
>  	for (port = 0; port < evt_nr_active_lcores(opt->wlcores);
> @@ -313,7 +303,7 @@ perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>  		w->processed_pkts = 0;
>  		w->latency = 0;
>  
> -		ret = rte_event_port_setup(opt->dev_id, port, &wkr_p_conf);
> +		ret = rte_event_port_setup(opt->dev_id, port, port_conf);
>  		if (ret) {
>  			evt_err("failed to setup port %d", port);
>  			return ret;
> @@ -327,18 +317,13 @@ perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
>  	}
>  
>  	/* port for producers, no links */
> -	struct rte_event_port_conf prod_conf = {
> -			.dequeue_depth = port_conf.dequeue_depth,
> -			.enqueue_depth = port_conf.enqueue_depth,
> -			.new_event_threshold = port_conf.new_event_threshold,
> -	};
>  	if (opt->prod_type == EVT_PROD_TYPE_ETH_RX_ADPTR) {
>  		for ( ; port < perf_nb_event_ports(opt); port++) {
>  			struct prod_data *p = &t->prod[port];
>  			p->t = t;
>  		}
>  
> -		ret = perf_event_rx_adapter_setup(opt, stride, prod_conf);
> +		ret = perf_event_rx_adapter_setup(opt, stride, *port_conf);

I think it is better to pass port_conf as a pointer.
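
Something like the following (a sketch of the suggestion only, not the
actual v2 patch):

	/* Take the config by pointer so the struct is not copied at the
	 * call site.
	 */
	int perf_event_rx_adapter_setup(struct evt_options *opt,
			uint8_t stride,
			const struct rte_event_port_conf *prod_conf);

	/* the call above would then pass the pointer straight through,
	 * instead of dereferencing it to copy the struct by value:
	 */
	ret = perf_event_rx_adapter_setup(opt, stride, port_conf);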

With that change:
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Thread overview: 5+ messages
2018-01-24  9:30 Pavan Nikhilesh
2018-01-30  8:42 ` Jerin Jacob [this message]
2018-01-30 11:17 ` [dpdk-dev] [PATCH v2] " Pavan Nikhilesh
2018-01-31  6:33   ` Jerin Jacob
2018-01-31  8:48   ` Jerin Jacob
