From: Jasvinder Singh
To: dev@dpdk.org
Cc: cristian.dumitrescu@intel.com, Lukasz Krakowiak
Date: Fri, 23 Aug 2019 15:45:56 +0100
Message-Id: <20190823144602.58213-10-jasvinder.singh@intel.com>
In-Reply-To: <20190823144602.58213-1-jasvinder.singh@intel.com>
References: <20190823144602.58213-1-jasvinder.singh@intel.com>
Subject: [dpdk-dev] [PATCH 09/15] sched: update pkt dequeue for subport config flexibility

Modify the scheduler packet dequeue operation so that subports of the
same port can have different configurations in terms of number of
pipes, pipe queue sizes, etc.
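For illustration only (not part of the patch itself): a minimal,
self-contained sketch of the round-robin bookkeeping the reworked
dequeue loop performs. All names below (N_SUBPORTS, BURST_SIZE,
pkts_left, serve_subport(), subport_exhausted(), next_subport_id,
dequeue_burst()) are stand-ins invented for this example and are not
DPDK symbols: next_subport_id plays the role of the new
port->subport_id field, serve_subport() stands in for grinder_handle(),
and subport_exhausted() for rte_sched_port_exceptions().

#include <stdint.h>
#include <stdio.h>

#define N_SUBPORTS 4
#define BURST_SIZE 8

/* Toy scheduler state: packets remaining per subport. */
static uint32_t pkts_left[N_SUBPORTS] = {3, 0, 2, 5};

/* Stand-in for grinder_handle(): dequeue at most one packet per visit. */
static uint32_t serve_subport(uint32_t subport_id)
{
	if (pkts_left[subport_id] == 0)
		return 0;
	pkts_left[subport_id]--;
	return 1;
}

/* Stand-in for rte_sched_port_exceptions(): nothing left in this subport. */
static int subport_exhausted(uint32_t subport_id)
{
	return pkts_left[subport_id] == 0;
}

/* Plays the role of the new port->subport_id field: the subport at which
 * the next dequeue call resumes. */
static uint32_t next_subport_id;

static uint32_t dequeue_burst(uint32_t n_pkts)
{
	uint32_t subport_id = next_subport_id;
	uint32_t n_subports = 0, count = 0;

	for (;;) {
		count += serve_subport(subport_id);

		if (count == n_pkts) {
			/* Burst filled: resume from the next subport later. */
			subport_id++;
			if (subport_id == N_SUBPORTS)
				subport_id = 0;
			next_subport_id = subport_id;
			break;
		}

		if (subport_exhausted(subport_id)) {
			/* Current subport is out of packets: move on. */
			subport_id++;
			n_subports++;
		}

		if (subport_id == N_SUBPORTS)
			subport_id = 0;

		if (n_subports == N_SUBPORTS) {
			/* Every subport visited without filling the burst. */
			next_subport_id = subport_id;
			break;
		}
	}

	return count;
}

int main(void)
{
	printf("first burst:  %u packets\n", (unsigned)dequeue_burst(BURST_SIZE));
	printf("second burst: %u packets\n", (unsigned)dequeue_burst(BURST_SIZE));
	return 0;
}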
Signed-off-by: Jasvinder Singh
Signed-off-by: Lukasz Krakowiak
---
 lib/librte_sched/rte_sched.c | 51 ++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 11 deletions(-)

diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 0451e10ea..e0ef86f40 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -245,6 +245,7 @@ struct rte_sched_port {
 	uint32_t busy_grinders;
 	struct rte_mbuf **pkts_out;
 	uint32_t n_pkts_out;
+	uint32_t subport_id;
 
 	/* Queue base calculation */
 	uint32_t qsize_add[RTE_SCHED_QUEUES_PER_PIPE];
@@ -911,6 +912,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 	/* Grinders */
 	port->pkts_out = NULL;
 	port->n_pkts_out = 0;
+	port->subport_id = 0;
 
 	return port;
 }
@@ -2616,9 +2618,9 @@ grinder_prefetch_mbuf(struct rte_sched_subport *subport, uint32_t pos)
 }
 
 static inline uint32_t
-grinder_handle(struct rte_sched_port *port, uint32_t pos)
+grinder_handle(struct rte_sched_port *port,
+	struct rte_sched_subport *subport, uint32_t pos)
 {
-	struct rte_sched_subport *subport = port->subport;
 	struct rte_sched_grinder *grinder = subport->grinder + pos;
 
 	switch (grinder->state) {
@@ -2717,6 +2719,7 @@ rte_sched_port_time_resync(struct rte_sched_port *port)
 	uint64_t cycles = rte_get_tsc_cycles();
 	uint64_t cycles_diff = cycles - port->time_cpu_cycles;
 	uint64_t bytes_diff;
+	uint32_t i;
 
 	/* Compute elapsed time in bytes */
 	bytes_diff = rte_reciprocal_divide(cycles_diff << RTE_SCHED_TIME_SHIFT,
@@ -2729,20 +2732,21 @@ rte_sched_port_time_resync(struct rte_sched_port *port)
 		port->time = port->time_cpu_bytes;
 
 	/* Reset pipe loop detection */
-	port->pipe_loop = RTE_SCHED_PIPE_INVALID;
+	for (i = 0; i < port->n_subports_per_port; i++)
+		port->subports[i]->pipe_loop = RTE_SCHED_PIPE_INVALID;
 }
 
 static inline int
-rte_sched_port_exceptions(struct rte_sched_port *port, int second_pass)
+rte_sched_port_exceptions(struct rte_sched_subport *subport, int second_pass)
 {
 	int exceptions;
 
 	/* Check if any exception flag is set */
-	exceptions = (second_pass && port->busy_grinders == 0) ||
-		(port->pipe_exhaustion == 1);
+	exceptions = (second_pass && subport->busy_grinders == 0) ||
+		(subport->pipe_exhaustion == 1);
 
 	/* Clear exception flags */
-	port->pipe_exhaustion = 0;
+	subport->pipe_exhaustion = 0;
 
 	return exceptions;
 }
@@ -2750,7 +2754,9 @@ rte_sched_port_exceptions(struct rte_sched_port *port, int second_pass)
 int
 rte_sched_port_dequeue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint32_t n_pkts)
 {
-	uint32_t i, count;
+	struct rte_sched_subport *subport;
+	uint32_t subport_id = port->subport_id;
+	uint32_t i, n_subports = 0, count;
 
 	port->pkts_out = pkts;
 	port->n_pkts_out = 0;
@@ -2759,9 +2765,32 @@ rte_sched_port_dequeue(struct rte_sched_port *port, struct rte_mbuf **pkts, uint
 
 	/* Take each queue in the grinder one step further */
 	for (i = 0, count = 0; ; i++) {
-		count += grinder_handle(port, i & (RTE_SCHED_PORT_N_GRINDERS - 1));
-		if ((count == n_pkts) ||
-		    rte_sched_port_exceptions(port, i >= RTE_SCHED_PORT_N_GRINDERS)) {
+		subport = port->subports[subport_id];
+
+		count += grinder_handle(port, subport,
+				i & (RTE_SCHED_PORT_N_GRINDERS - 1));
+
+		if (count == n_pkts) {
+			subport_id++;
+
+			if (subport_id == port->n_subports_per_port)
+				subport_id = 0;
+
+			port->subport_id = subport_id;
+			break;
+		}
+
+		if (rte_sched_port_exceptions(subport, i >= RTE_SCHED_PORT_N_GRINDERS)) {
+			i = 0;
+			subport_id++;
+			n_subports++;
+		}
+
+		if (subport_id == port->n_subports_per_port)
+			subport_id = 0;
+
+		if (n_subports == port->n_subports_per_port) {
+			port->subport_id = subport_id;
 			break;
 		}
 	}
-- 
2.21.0
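
Usage note (editorial, not part of the patch): a rough sketch of how an
application might drive the scheduler around this change. Packet
classification via rte_sched_port_pkt_write() is omitted, and the
Ethernet port id, queue id and SCHED_BURST size are made-up example
values; rte_eth_rx_burst()/rte_eth_tx_burst(), rte_sched_port_enqueue()
and rte_sched_port_dequeue() are existing DPDK calls.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_sched.h>

#define SCHED_BURST 32

/* One enqueue/dequeue pass: with this patch, the dequeue call resumes from
 * port->subport_id and services the subports in round-robin order. */
static void
scheduler_pass(struct rte_sched_port *sched, uint16_t eth_port_id)
{
	struct rte_mbuf *rx_pkts[SCHED_BURST];
	struct rte_mbuf *tx_pkts[SCHED_BURST];
	uint16_t nb_rx;
	int nb_deq;

	/* Receive a burst and hand it to the hierarchical scheduler
	 * (classification of each mbuf is assumed to happen before this). */
	nb_rx = rte_eth_rx_burst(eth_port_id, 0, rx_pkts, SCHED_BURST);
	if (nb_rx > 0)
		rte_sched_port_enqueue(sched, rx_pkts, nb_rx);

	/* Dequeue up to SCHED_BURST packets and transmit them. */
	nb_deq = rte_sched_port_dequeue(sched, tx_pkts, SCHED_BURST);
	if (nb_deq > 0)
		rte_eth_tx_burst(eth_port_id, 0, tx_pkts, (uint16_t)nb_deq);
}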