From: "Dumitrescu, Cristian"
To: "Singh, Jasvinder" , "dev@dpdk.org"
CC: "Pattan, Reshma"
Subject: Re: [dpdk-dev] [PATCH] mbuf: implement generic format for sched field
Date: Mon, 26 Nov 2018 11:37:35 +0000
Message-ID: <3EB4FA525960D640B5BDFFD6A3D891268E800A9E@IRSMSX108.ger.corp.intel.com>
References: <20181123165423.134922-1-jasvinder.singh@intel.com>
In-Reply-To: <20181123165423.134922-1-jasvinder.singh@intel.com>

Hi Jasvinder,

> -----Original Message-----
> From: Singh, Jasvinder
> Sent: Friday, November 23, 2018 4:54 PM
> To: dev@dpdk.org
> Cc: Dumitrescu, Cristian ; Pattan, Reshma
> Subject: [PATCH] mbuf: implement generic format for sched field
>
> This patch implements the changes proposed in the deprecation
> notes [1][2].
>
> The opaque mbuf->hash.sched field is updated to support a generic
> definition in line with the ethdev TM and MTR APIs. The new generic
> format contains: queue ID, traffic class, color.
>
> In addition, the following API functions of the sched library have
> been modified with an additional parameter of type struct
> rte_sched_port to accommodate the changes made to the mbuf sched field:
> (i) rte_sched_port_pkt_write()
> (ii) rte_sched_port_pkt_read()
>
> The other libraries, sample applications and tests which use the mbuf
> sched field have been updated as well.
>
> [1] http://mails.dpdk.org/archives/dev/2018-February/090651.html
> [2] https://mails.dpdk.org/archives/dev/2018-November/119051.html
>
> Signed-off-by: Jasvinder Singh
> Signed-off-by: Reshma Pattan
> ---
>  examples/qos_sched/app_thread.c        |  6 +-
>  examples/qos_sched/main.c              |  1 +
>  .../rte_event_eth_tx_adapter.h         |  4 +-
>  lib/librte_mbuf/Makefile               |  2 +-
>  lib/librte_mbuf/rte_mbuf.h             | 10 +--
>  lib/librte_pipeline/rte_table_action.c | 60 ++++++++-----
>  lib/librte_sched/Makefile              |  2 +-
>  lib/librte_sched/rte_sched.c           | 89 +++++++------------
>  lib/librte_sched/rte_sched.h           | 10 ++-
>  test/test/test_sched.c                 |  8 +-
>  10 files changed, 92 insertions(+), 100 deletions(-)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 3dbc6695e..98428bd21 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -575,12 +575,10 @@ struct rte_mbuf {
>  			 */
>  		} fdir;	/**< Filter identifier if FDIR enabled */
>  		struct {
> -			uint32_t lo;
> -			uint32_t hi;
> -			/**< The event eth Tx adapter uses this field
> -			 * to store Tx queue id.
> -			 * @see rte_event_eth_tx_adapter_txq_set()
> -			 */
> +			uint32_t queue_id;      /**< Queue ID. */
> +			uint8_t traffic_class;  /**< Traffic class ID. */

We should add a comment here that traffic class 0 is the highest priority traffic class.

> +			uint8_t color;          /**< Color. */

We should create a new file rte_color.h in a common place (librte_eal/common/include) to consolidate the color definition, which is currently replicated in too many places, such as: rte_meter.h, rte_mtr.h, rte_tm.h.

We should include the rte_color.h file here (and in the above header files).

We should also document the link between this field and the color enum type (@see ...).

> +			uint16_t reserved;      /**< Reserved. */
>  		} sched;          /**< Hierarchical scheduler */
>  		/**< User defined tags. See rte_distributor_process() */
>  		uint32_t usr;

We should also add trivial inline functions to read/write these mbuf fields as part of this header file. We want to discourage people from accessing these fields directly.

Besides the functions to read/write each field individually, we should also have a function to read all the sched fields in one operation, as well as another one to write all the sched fields in one operation. A rough sketch of such accessors is included further below.

> diff --git a/lib/librte_pipeline/rte_table_action.c
> b/lib/librte_pipeline/rte_table_action.c
> index 7c7c8dd82..99f2d779b 100644
> --- a/lib/librte_pipeline/rte_table_action.c
> +++ b/lib/librte_pipeline/rte_table_action.c
> @@ -108,12 +108,12 @@ mtr_cfg_check(struct rte_table_action_mtr_config *mtr)
>  }
>
>  #define MBUF_SCHED_QUEUE_TC_COLOR(queue, tc, color)	\
> -	((uint16_t)((((uint64_t)(queue)) & 0x3) |	\
> -	((((uint64_t)(tc)) & 0x3) << 2) |	\
> -	((((uint64_t)(color)) & 0x3) << 4)))
> +	((uint64_t)((((uint64_t)(queue)) & 0xffffffff) |	\
> +	((((uint64_t)(tc)) & 0xff) << 32) |	\
> +	((((uint64_t)(color)) & 0xff) << 40)))
>
>  #define MBUF_SCHED_COLOR(sched, color) \
> -	(((sched) & (~0x30LLU)) | ((color) << 4))
> +	((uint64_t)((sched) & (~0xff000000LLU)) | (((uint64_t)(color)) << 40))
>

Given the read/write mbuf->sched field functions, the above two macros are no longer needed.
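For illustration only, here is a rough sketch of what the consolidated color type and the mbuf sched accessors could look like. All names below (rte_color.h, enum rte_color, the rte_mbuf_sched_*() functions) are placeholders I am making up for this example, not something already defined in the patch or in the tree:

/* Hypothetical rte_color.h, consolidating the color type (names assumed): */
enum rte_color {
	RTE_COLOR_GREEN = 0, /**< Green */
	RTE_COLOR_YELLOW,    /**< Yellow */
	RTE_COLOR_RED,       /**< Red */
	RTE_COLORS           /**< Number of colors */
};

/* Hypothetical per-field and all-in-one accessors for mbuf->hash.sched,
 * to live in rte_mbuf.h (function names are placeholders):
 */
static inline uint32_t
rte_mbuf_sched_queue_get(const struct rte_mbuf *m)
{
	return m->hash.sched.queue_id;
}

static inline uint8_t
rte_mbuf_sched_traffic_class_get(const struct rte_mbuf *m)
{
	return m->hash.sched.traffic_class;
}

static inline uint8_t
rte_mbuf_sched_color_get(const struct rte_mbuf *m)
{
	return m->hash.sched.color;
}

/* Read all the sched fields in one operation. */
static inline void
rte_mbuf_sched_get(const struct rte_mbuf *m, uint32_t *queue_id,
	uint8_t *traffic_class, uint8_t *color)
{
	*queue_id = m->hash.sched.queue_id;
	*traffic_class = m->hash.sched.traffic_class;
	*color = m->hash.sched.color;
}

/* Write all the sched fields in one operation. */
static inline void
rte_mbuf_sched_set(struct rte_mbuf *m, uint32_t queue_id,
	uint8_t traffic_class, uint8_t color)
{
	m->hash.sched.queue_id = queue_id;
	m->hash.sched.traffic_class = traffic_class;
	m->hash.sched.color = color;
}

With accessors along these lines, librte_sched and librte_pipeline would no longer need to touch pkt->hash.sched directly, which is why the two macros above become unnecessary.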
>  struct mtr_trtcm_data {
>  	struct rte_meter_trtcm trtcm;
> @@ -176,7 +176,7 @@ mtr_data_size(struct rte_table_action_mtr_config *mtr)
>  struct dscp_table_entry_data {
>  	enum rte_meter_color color;
>  	uint16_t tc;
> -	uint16_t queue_tc_color;
> +	uint32_t queue;
>  };
>
>  struct dscp_table_data {
> @@ -368,7 +368,6 @@ tm_cfg_check(struct rte_table_action_tm_config *tm)
>  }
>
>  struct tm_data {
> -	uint16_t queue_tc_color;
>  	uint16_t subport;
>  	uint32_t pipe;
>  } __attribute__((__packed__));
> @@ -397,26 +396,40 @@ tm_apply(struct tm_data *data,
>  		return status;
>
>  	/* Apply */
> -	data->queue_tc_color = 0;
>  	data->subport = (uint16_t) p->subport_id;
>  	data->pipe = p->pipe_id;
>
>  	return 0;
>  }
>
> +static uint32_t
> +tm_sched_qindex(struct tm_data *data,
> +	struct dscp_table_entry_data *dscp,
> +	struct rte_table_action_tm_config *cfg) {
> +
> +	uint32_t result;
> +
> +	result = data->subport * cfg->n_pipes_per_subport + data->pipe;

Since n_subports_per_pipe and n_pipes_per_subport are enforced to be powers of 2, we should replace multiplication/division with shift left/right. We probably need to store the log2 correspondents in the action context.

> +	result = result * RTE_TABLE_ACTION_TC_MAX + dscp->tc;
> +	result = result * RTE_TABLE_ACTION_TC_QUEUE_MAX + dscp->queue;
> +
> +	return result;
> +}
> +
>  static __rte_always_inline void
>  pkt_work_tm(struct rte_mbuf *mbuf,
>  	struct tm_data *data,
>  	struct dscp_table_data *dscp_table,
> -	uint32_t dscp)
> +	uint32_t dscp,
> +	struct rte_table_action_tm_config *cfg)
>  {
>  	struct dscp_table_entry_data *dscp_entry = &dscp_table->entry[dscp];
> -	struct tm_data *sched_ptr = (struct tm_data *) &mbuf->hash.sched;
> -	struct tm_data sched;
> +	uint64_t *sched_ptr = (uint64_t *) &mbuf->hash.sched;
> +	uint32_t queue = tm_sched_qindex(data, dscp_entry, cfg);
>
> -	sched = *data;
> -	sched.queue_tc_color = dscp_entry->queue_tc_color;
> -	*sched_ptr = sched;
> +	*sched_ptr = MBUF_SCHED_QUEUE_TC_COLOR(queue,
> +		dscp_entry->tc,
> +		dscp_entry->color);
>  }
>
>  /**
> @@ -2580,17 +2593,13 @@ rte_table_action_dscp_table_update(struct rte_table_action *action,
>  			&action->dscp_table.entry[i];
>  		struct rte_table_action_dscp_table_entry *entry =
>  			&table->entry[i];
> -		uint16_t queue_tc_color =
> -			MBUF_SCHED_QUEUE_TC_COLOR(entry->tc_queue_id,
> -				entry->tc_id,
> -				entry->color);
>
>  		if ((dscp_mask & (1LLU << i)) == 0)
>  			continue;
>
>  		data->color = entry->color;
>  		data->tc = entry->tc_id;
> -		data->queue_tc_color = queue_tc_color;
> +		data->queue = entry->tc_queue_id;
>  	}
>
>  	return 0;
> @@ -2882,7 +2891,8 @@ pkt_work(struct rte_mbuf *mbuf,
>  		pkt_work_tm(mbuf,
>  			data,
>  			&action->dscp_table,
> -			dscp);
> +			dscp,
> +			&cfg->tm);
>  	}
>
>  	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
> @@ -3108,22 +3118,26 @@ pkt4_work(struct rte_mbuf **mbufs,
>  		pkt_work_tm(mbuf0,
>  			data0,
>  			&action->dscp_table,
> -			dscp0);
> +			dscp0,
> +			&cfg->tm);
>
>  		pkt_work_tm(mbuf1,
>  			data1,
>  			&action->dscp_table,
> -			dscp1);
> +			dscp1,
> +			&cfg->tm);
>
>  		pkt_work_tm(mbuf2,
>  			data2,
>  			&action->dscp_table,
> -			dscp2);
> +			dscp2,
> +			&cfg->tm);
>
>  		pkt_work_tm(mbuf3,
>  			data3,
>  			&action->dscp_table,
> -			dscp3);
> +			dscp3,
> +			&cfg->tm);
>  	}
>
>  	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
> diff --git a/lib/librte_sched/Makefile b/lib/librte_sched/Makefile
> index 46c53ed71..644fd9d15 100644
> --- a/lib/librte_sched/Makefile
> +++ b/lib/librte_sched/Makefile
> @@ -18,7 +18,7 @@ LDLIBS += -lrte_timer
>
>  EXPORT_MAP := rte_sched_version.map
>
> -LIBABIVER := 1
> +LIBABIVER := 2
>
>  #
>  # all source are stored in SRCS-y
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 587d5e602..7bf4d6400 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -128,22 +128,6 @@ enum grinder_state {
>  	e_GRINDER_READ_MBUF
>  };
>
> -/*
> - * Path through the scheduler hierarchy used by the scheduler enqueue
> - * operation to identify the destination queue for the current
> - * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
> - * each packet, typically written by the classification stage and read
> - * by scheduler enqueue.
> - */
> -struct rte_sched_port_hierarchy {
> -	uint16_t queue:2;          /**< Queue ID (0 .. 3) */
> -	uint16_t traffic_class:2;  /**< Traffic class ID (0 .. 3)*/
> -	uint32_t color:2;          /**< Color */
> -	uint16_t unused:10;
> -	uint16_t subport;          /**< Subport ID */
> -	uint32_t pipe;             /**< Pipe ID */
> -};
> -
>  struct rte_sched_grinder {
>  	/* Pipe cache */
>  	uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE];
> @@ -241,16 +225,12 @@ enum rte_sched_port_array {
>  	e_RTE_SCHED_PORT_ARRAY_TOTAL,
>  };
>
> -#ifdef RTE_SCHED_COLLECT_STATS
> -
>  static inline uint32_t
>  rte_sched_port_queues_per_subport(struct rte_sched_port *port)
>  {
>  	return RTE_SCHED_QUEUES_PER_PIPE * port->n_pipes_per_subport;
>  }
>
> -#endif
> -
>  static inline uint32_t
>  rte_sched_port_queues_per_port(struct rte_sched_port *port)
>  {
> @@ -1006,44 +986,50 @@ rte_sched_port_pipe_profile_add(struct rte_sched_port *port,
>  	return 0;
>  }
>
> +static inline uint32_t
> +rte_sched_port_qindex(struct rte_sched_port *port,
> +	uint32_t subport,
> +	uint32_t pipe,
> +	uint32_t traffic_class,
> +	uint32_t queue)
> +{
> +	uint32_t result;
> +
> +	result = subport * port->n_pipes_per_subport + pipe;

Since n_subports_per_pipe and n_pipes_per_subport are enforced to be powers of 2, we should replace multiplication/division with shift left/right.
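To illustrate the point (this is only a sketch, not part of the patch): assuming the port stores the log2 of n_pipes_per_subport at configuration time (the n_pipes_per_subport_log2 field below is an assumption), the index computation could be written with shifts only, e.g.:

/* Sketch only: port->n_pipes_per_subport_log2 is an assumed field,
 * stored when the port is configured. The per-pipe constants are
 * compile-time powers of 2, so their log2 can be taken with
 * __builtin_ctz() and the multiplications become shifts.
 */
static inline uint32_t
sched_port_qindex_sketch(struct rte_sched_port *port,
	uint32_t subport, uint32_t pipe,
	uint32_t traffic_class, uint32_t queue)
{
	uint32_t result = (subport << port->n_pipes_per_subport_log2) + pipe;

	result = (result << __builtin_ctz(RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE)) +
		traffic_class;
	result = (result << __builtin_ctz(RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS)) +
		queue;

	return result;
}

The same approach would apply to tm_sched_qindex() in rte_table_action.c above, with the log2 of n_pipes_per_subport stored in the action context.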
> +	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE + traffic_class;
> +	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
> +
> +	return result;
> +}
> +
>  void
> -rte_sched_port_pkt_write(struct rte_mbuf *pkt,
> +rte_sched_port_pkt_write(struct rte_sched_port *port,
> +	struct rte_mbuf *pkt,
>  	uint32_t subport, uint32_t pipe, uint32_t traffic_class,
>  	uint32_t queue, enum rte_meter_color color)
>  {
> -	struct rte_sched_port_hierarchy *sched
> -		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
> -
> -	RTE_BUILD_BUG_ON(sizeof(*sched) > sizeof(pkt->hash.sched));
> -
> -	sched->color = (uint32_t) color;
> -	sched->subport = subport;
> -	sched->pipe = pipe;
> -	sched->traffic_class = traffic_class;
> -	sched->queue = queue;
> +	pkt->hash.sched.traffic_class = traffic_class;
> +	pkt->hash.sched.queue_id = rte_sched_port_qindex(port, subport, pipe,
> +		traffic_class, queue);
> +	pkt->hash.sched.color = (uint8_t) color;
>  }
>
>  void
> -rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
> +rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
> +	const struct rte_mbuf *pkt,
>  	uint32_t *subport, uint32_t *pipe,
>  	uint32_t *traffic_class, uint32_t *queue)
>  {
> -	const struct rte_sched_port_hierarchy *sched
> -		= (const struct rte_sched_port_hierarchy *) &pkt->hash.sched;
> -
> -	*subport = sched->subport;
> -	*pipe = sched->pipe;
> -	*traffic_class = sched->traffic_class;
> -	*queue = sched->queue;
> +	*subport = pkt->hash.sched.queue_id /
> +		rte_sched_port_queues_per_subport(port);
> +	*pipe = pkt->hash.sched.queue_id / RTE_SCHED_QUEUES_PER_PIPE;

Since n_subports_per_pipe and n_pipes_per_subport are enforced to be powers of 2, we should replace multiplication/division with shift left/right.

> +	*traffic_class = pkt->hash.sched.traffic_class;
> +	*queue = pkt->hash.sched.queue_id %
> +		RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS;

Since RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS is enforced to be a power of 2, please replace the modulo with a bitwise AND of (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1).

>  }
>
>  enum rte_meter_color
>  rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
>  {
> -	const struct rte_sched_port_hierarchy *sched
> -		= (const struct rte_sched_port_hierarchy *) &pkt->hash.sched;
> -
> -	return (enum rte_meter_color) sched->color;
> +	return (enum rte_meter_color) pkt->hash.sched.color;
>  }

Should use the mbuf->sched.color read function to be added in rte_mbuf.h.

>
>  int
> @@ -1100,18 +1086,6 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
>  	return 0;
>  }
>
> -static inline uint32_t
> -rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue)
> -{
> -	uint32_t result;
> -
> -	result = subport * port->n_pipes_per_subport + pipe;
> -	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE + traffic_class;
> -	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
> -
> -	return result;
> -}
> -

Since n_subports_per_pipe and n_pipes_per_subport are enforced to be powers of 2, we should replace multiplication/division with shift left/right.
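Again just as an illustration (same assumption of a stored n_pipes_per_subport_log2 field, which is not in the patch), the decode side in rte_sched_port_pkt_read_tree_path() above could replace the divisions with shifts and the modulo with a mask:

/* Sketch only: mirrors the quoted code above, with the divisions
 * turned into shifts and the modulo turned into a bitwise AND.
 * port->n_pipes_per_subport_log2 is an assumed field.
 */
static inline void
sched_pkt_read_tree_path_sketch(struct rte_sched_port *port,
	const struct rte_mbuf *pkt,
	uint32_t *subport, uint32_t *pipe,
	uint32_t *traffic_class, uint32_t *queue)
{
	uint32_t queue_id = pkt->hash.sched.queue_id;
	uint32_t queues_per_pipe_log2 =
		__builtin_ctz(RTE_SCHED_QUEUES_PER_PIPE);

	/* queue_id / (RTE_SCHED_QUEUES_PER_PIPE * n_pipes_per_subport) */
	*subport = queue_id >>
		(queues_per_pipe_log2 + port->n_pipes_per_subport_log2);
	/* queue_id / RTE_SCHED_QUEUES_PER_PIPE */
	*pipe = queue_id >> queues_per_pipe_log2;
	*traffic_class = pkt->hash.sched.traffic_class;
	/* queue_id % RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS */
	*queue = queue_id & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1);
}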
>  #ifdef RTE_SCHED_DEBUG
>
>  static inline int
> @@ -1272,11 +1246,8 @@ rte_sched_port_enqueue_qptrs_prefetch0(struct rte_sched_port *port,
>  #ifdef RTE_SCHED_COLLECT_STATS
>  	struct rte_sched_queue_extra *qe;
>  #endif
> -	uint32_t subport, pipe, traffic_class, queue, qindex;
> -
> -	rte_sched_port_pkt_read_tree_path(pkt, &subport, &pipe, &traffic_class, &queue);
> +	uint32_t qindex = pkt->hash.sched.queue_id;

Let's use the read function for this mbuf field.

>
> -	qindex = rte_sched_port_qindex(port, subport, pipe, traffic_class, queue);
>  	q = port->queue + qindex;
>  	rte_prefetch0(q);
>  #ifdef RTE_SCHED_COLLECT_STATS
> diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
> index 84fa896de..4d9f869eb 100644
> --- a/lib/librte_sched/rte_sched.h
> +++ b/lib/librte_sched/rte_sched.h
> @@ -355,6 +355,8 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
>   * Scheduler hierarchy path write to packet descriptor. Typically
>   * called by the packet classification stage.
>   *
> + * @param port
> + *   Handle to port scheduler instance
>   * @param pkt
>   *   Packet descriptor handle
>   * @param subport
> @@ -369,7 +371,8 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
>   *   Packet color set
>   */
>  void
> -rte_sched_port_pkt_write(struct rte_mbuf *pkt,
> +rte_sched_port_pkt_write(struct rte_sched_port *port,
> +	struct rte_mbuf *pkt,
>  	uint32_t subport, uint32_t pipe, uint32_t traffic_class,
>  	uint32_t queue, enum rte_meter_color color);
>
> @@ -379,6 +382,8 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
>   * enqueue operation. The subport, pipe, traffic class and queue
>   * parameters need to be pre-allocated by the caller.
>   *
> + * @param port
> + *   Handle to port scheduler instance
>   * @param pkt
>   *   Packet descriptor handle
>   * @param subport
> @@ -392,7 +397,8 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
>   *
>   */
>  void
> -rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
> +rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
> +	const struct rte_mbuf *pkt,
>  	uint32_t *subport, uint32_t *pipe,
>  	uint32_t *traffic_class, uint32_t *queue);
>

Regards,
Cristian