From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 7C1ECA055A;
	Fri,  3 Jun 2022 16:55:03 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 0A19640694;
	Fri,  3 Jun 2022 16:55:03 +0200 (CEST)
Received: from mail-108-mta134.mxroute.com (mail-108-mta134.mxroute.com
 [136.175.108.134])
 by mails.dpdk.org (Postfix) with ESMTP id 9A58E4021E
 for <dev@dpdk.org>; Fri,  3 Jun 2022 16:55:01 +0200 (CEST)
Received: from filter006.mxroute.com ([140.82.40.27]
 140.82.40.27.vultrusercontent.com)
 (Authenticated sender: mN4UYu2MZsgR)
 by mail-108-mta134.mxroute.com (ZoneMTA) with ESMTPSA id
 1812a0ff81c000c327.001 for <dev@dpdk.org>
 (version=TLSv1/SSLv3 cipher=ECDHE-RSA-AES128-GCM-SHA256);
 Fri, 03 Jun 2022 14:54:58 +0000
X-Zone-Loop: 5431bb0cf2853bca8ae05568f4909901350e9d22c896
X-Originating-IP: [140.82.40.27]
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=ashroe.eu; 
 s=x;
 h=Content-Type:MIME-Version:Message-ID:In-reply-to:Date:Subject:Cc:To:
 From:References:Sender:Reply-To:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
 List-Post:List-Owner:List-Archive;
 bh=DSviv6BNp5NfF9CNfgu/gfzdbft9tfPNQBDNP8KHqYs=; b=j0hXPKEn45XRI8qGMDRKid2Vvi
 8mOQWh+blpf4bmCGdhFSaZOiF2br1XsaxIKErRRiipyANddLFC8DWWvuhYdYZG+qUjsBdGeceHdsy
 ZTCp7qL92PRrr0DKGq5AfzrS/d9jIFTNuTMk+OC94Hl5NE2dQfz6GakhgpK7/iNNyoyc17t81slM1
 PNeEp3SIIv2AjVj5h9FIDhTfspp8b55Oj9HyWB/+E4/Lsm3nbhrjLNAUgGb2XXA/g4jBNIlC3VSg+
 HoHAAqoYgTa0iofBvG58k1z5DUro8W7skirXZ0tSQx9+qJzm/89XZNc5OoIJPQaq9O6GJakJRy4WQ
 QTfe2hag==;
References: <20220524152041.737154-1-spiked@nvidia.com>
 <20220603124821.1148119-1-spiked@nvidia.com>
 <20220603124821.1148119-4-spiked@nvidia.com>
User-agent: mu4e 1.6.10; emacs 27.1
From: Ray Kinsella <mdr@ashroe.eu>
To: Spike Du <spiked@nvidia.com>
Cc: matan@nvidia.com, viacheslavo@nvidia.com, orika@nvidia.com,
 thomas@monjalon.net, Wenzhuo Lu <wenzhuo.lu@intel.com>, Beilei Xing
 <beilei.xing@intel.com>, Bernard Iremonger <bernard.iremonger@intel.com>,
 Neil Horman <nhorman@tuxdriver.com>, andrew.rybchenko@oktetlabs.ru,
 stephen@networkplumber.org, mb@smartsharesystems.com, dev@dpdk.org,
 rasland@nvidia.com
Subject: Re: [PATCH v4 3/7] ethdev: introduce Rx queue based fill threshold
Date: Fri, 03 Jun 2022 15:30:56 +0100
In-reply-to: <20220603124821.1148119-4-spiked@nvidia.com>
Message-ID: <87czfppzk1.fsf@mdr78.vserver.site>
MIME-Version: 1.0
Content-Type: text/plain
X-AuthUser: mdr@ashroe.eu
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org


Spike Du <spiked@nvidia.com> writes:

> Fill threshold describes the fullness of an Rx queue. If the Rx
> queue fullness is above the threshold, the device triggers the event
> RTE_ETH_EVENT_RX_FILL_THRESH.
> Fill threshold is defined as a percentage of the Rx queue size, with a
> valid range of [0,99].
> Setting the fill threshold to 0 disables it, which is the default.
> Add fill threshold configuration and query driver callbacks in eth_dev_ops.
> Add command line options to configure fill_thresh per Rx queue.
> - Command syntax:
>   set port <port_id> rxq <rxq_id> fill_thresh <fill_thresh_num>
>
> - Example commands:
> To configure fill_thresh as 30% of rxq size on port 1 rxq 0:
> testpmd> set port 1 rxq 0 fill_thresh 30
>
> To disable fill_thresh on port 1 rxq 0:
> testpmd> set port 1 rxq 0 fill_thresh 0
>
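
Not a request for changes - purely for readers of the thread, a minimal
sketch of the application-side setup this enables, using only the APIs
introduced by this patch (untested; the helper name and the 30% value
are arbitrary):

    #include <rte_ethdev.h>

    static int
    app_enable_fill_thresh(uint16_t port_id, uint16_t rxq_id,
                           rte_eth_dev_cb_fn cb)
    {
        int ret;

        /* Get a callback whenever an Rx queue on this port crosses
         * its configured fill threshold. */
        ret = rte_eth_dev_callback_register(port_id,
                RTE_ETH_EVENT_RX_FILL_THRESH, cb, NULL);
        if (ret != 0)
            return ret;

        /* Raise the event once this queue becomes more than 30% full. */
        return rte_eth_rx_fill_thresh_set(port_id, rxq_id, 30);
    }
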
> Signed-off-by: Spike Du <spiked@nvidia.com>
> ---
>  app/test-pmd/cmdline.c     | 68 +++++++++++++++++++++++++++++++++++++++++++
>  app/test-pmd/config.c      | 21 ++++++++++++++
>  app/test-pmd/testpmd.c     | 18 ++++++++++++
>  app/test-pmd/testpmd.h     |  2 ++
>  lib/ethdev/ethdev_driver.h | 22 ++++++++++++++
>  lib/ethdev/rte_ethdev.c    | 52 +++++++++++++++++++++++++++++++++
>  lib/ethdev/rte_ethdev.h    | 72 ++++++++++++++++++++++++++++++++++++++++++++++
>  lib/ethdev/version.map     |  2 ++
>  8 files changed, 257 insertions(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 0410bad..918581e 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -17823,6 +17823,73 @@ struct cmd_show_port_flow_transfer_proxy_result {
>  	}
>  };
>  
> +/* *** SET FILL THRESHOLD FOR A RXQ OF A PORT *** */
> +struct cmd_rxq_fill_thresh_result {
> +	cmdline_fixed_string_t set;
> +	cmdline_fixed_string_t port;
> +	uint16_t port_num;
> +	cmdline_fixed_string_t rxq;
> +	uint16_t rxq_num;
> +	cmdline_fixed_string_t fill_thresh;
> +	uint16_t fill_thresh_num;
> +};
> +
> +static void cmd_rxq_fill_thresh_parsed(void *parsed_result,
> +		__rte_unused struct cmdline *cl,
> +		__rte_unused void *data)
> +{
> +	struct cmd_rxq_fill_thresh_result *res = parsed_result;
> +	int ret = 0;
> +
> +	if ((strcmp(res->set, "set") == 0) && (strcmp(res->port, "port") == 0)
> +	    && (strcmp(res->rxq, "rxq") == 0)
> +	    && (strcmp(res->fill_thresh, "fill_thresh") == 0))
> +		ret = set_rxq_fill_thresh(res->port_num, res->rxq_num,
> +				  res->fill_thresh_num);
> +	if (ret < 0)
> +		printf("rxq_fill_thresh_cmd error: (%s)\n", strerror(-ret));
> +
> +}
> +
> +cmdline_parse_token_string_t cmd_rxq_fill_thresh_set =
> +	TOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,
> +				set, "set");
> +cmdline_parse_token_string_t cmd_rxq_fill_thresh_port =
> +	TOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,
> +				port, "port");
> +cmdline_parse_token_num_t cmd_rxq_fill_thresh_portnum =
> +	TOKEN_NUM_INITIALIZER(struct cmd_rxq_fill_thresh_result,
> +				port_num, RTE_UINT16);
> +cmdline_parse_token_string_t cmd_rxq_fill_thresh_rxq =
> +	TOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,
> +				rxq, "rxq");
> +cmdline_parse_token_num_t cmd_rxq_fill_thresh_rxqnum =
> +	TOKEN_NUM_INITIALIZER(struct cmd_rxq_fill_thresh_result,
> +				rxq_num, RTE_UINT16);
> +cmdline_parse_token_string_t cmd_rxq_fill_thresh_fill_thresh =
> +	TOKEN_STRING_INITIALIZER(struct cmd_rxq_fill_thresh_result,
> +				fill_thresh, "fill_thresh");
> +cmdline_parse_token_num_t cmd_rxq_fill_thresh_fill_threshnum =
> +	TOKEN_NUM_INITIALIZER(struct cmd_rxq_fill_thresh_result,
> +				fill_thresh_num, RTE_UINT16);
> +
> +cmdline_parse_inst_t cmd_rxq_fill_thresh = {
> +	.f = cmd_rxq_fill_thresh_parsed,
> +	.data = (void *)0,
> +	.help_str = "set port <port_id> rxq <rxq_id> fill_thresh <fill_thresh_num>: "
> +		"Set fill_thresh for rxq on port_id",
> +	.tokens = {
> +		(void *)&cmd_rxq_fill_thresh_set,
> +		(void *)&cmd_rxq_fill_thresh_port,
> +		(void *)&cmd_rxq_fill_thresh_portnum,
> +		(void *)&cmd_rxq_fill_thresh_rxq,
> +		(void *)&cmd_rxq_fill_thresh_rxqnum,
> +		(void *)&cmd_rxq_fill_thresh_fill_thresh,
> +		(void *)&cmd_rxq_fill_thresh_fill_threshnum,
> +		NULL,
> +	},
> +};
> +
>  /* ******************************************************************************** */
>  
>  /* list of instructions */
> @@ -18110,6 +18177,7 @@ struct cmd_show_port_flow_transfer_proxy_result {
>  	(cmdline_parse_inst_t *)&cmd_show_capability,
>  	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
>  	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
> +	(cmdline_parse_inst_t *)&cmd_rxq_fill_thresh,
>  	NULL,
>  };
>  
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 1b1e738..d0c519b 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -6342,3 +6342,24 @@ struct igb_ring_desc_16_bytes {
>  		printf("  %s\n", buf);
>  	}
>  }
> +
> +int
> +set_rxq_fill_thresh(portid_t port_id, uint16_t queue_idx, uint16_t fill_thresh)
> +{
> +	struct rte_eth_link link;
> +	int ret;
> +
> +	if (port_id_is_invalid(port_id, ENABLED_WARN))
> +		return -EINVAL;
> +	ret = eth_link_get_nowait_print_err(port_id, &link);
> +	if (ret < 0)
> +		return -EINVAL;
> +	if (fill_thresh > 99)
> +		return -EINVAL;
> +	ret = rte_eth_rx_fill_thresh_set(port_id, queue_idx, fill_thresh);
> +
> +	if (ret)
> +		return ret;
> +	return 0;
> +}
> +
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 767765d..1209230 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -420,6 +420,7 @@ struct fwd_engine * fwd_engines[] = {
>  	[RTE_ETH_EVENT_NEW] = "device probed",
>  	[RTE_ETH_EVENT_DESTROY] = "device released",
>  	[RTE_ETH_EVENT_FLOW_AGED] = "flow aged",
> +	[RTE_ETH_EVENT_RX_FILL_THRESH] = "rxq fill threshold reached",
>  	[RTE_ETH_EVENT_MAX] = NULL,
>  };
>  
> @@ -3616,6 +3617,10 @@ struct pmd_test_command {
>  eth_event_callback(portid_t port_id, enum rte_eth_event_type type, void *param,
>  		  void *ret_param)
>  {
> +	struct rte_eth_dev_info dev_info;
> +	uint16_t rxq_id;
> +	uint8_t fill_thresh;
> +	int ret;
>  	RTE_SET_USED(param);
>  	RTE_SET_USED(ret_param);
>  
> @@ -3647,6 +3652,19 @@ struct pmd_test_command {
>  		ports[port_id].port_status = RTE_PORT_CLOSED;
>  		printf("Port %u is closed\n", port_id);
>  		break;
> +	case RTE_ETH_EVENT_RX_FILL_THRESH:
> +		ret = rte_eth_dev_info_get(port_id, &dev_info);
> +		if (ret != 0)
> +			break;
> +		/* fill_thresh query API wraps rxq_id around; no need to check the max rxq count. */
> +		for (rxq_id = 0; ; rxq_id++) {
> +			ret = rte_eth_rx_fill_thresh_query(port_id, &rxq_id, &fill_thresh);
> +			if (ret <= 0)
> +				break;
> +			printf("Received fill_thresh event, port:%d rxq_id:%d\n",
> +			       port_id, rxq_id);
> +		}
> +		break;
>  	default:
>  		break;
>  	}
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 78a5f4e..c7a144e 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1173,6 +1173,8 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
>  void flex_item_create(portid_t port_id, uint16_t flex_id, const char *filename);
>  void flex_item_destroy(portid_t port_id, uint16_t flex_id);
>  void port_flex_item_flush(portid_t port_id);
> +int set_rxq_fill_thresh(portid_t port_id, uint16_t queue_idx,
> +			uint16_t fill_thresh);
>  
>  extern int flow_parse(const char *src, void *result, unsigned int size,
>  		      struct rte_flow_attr **attr,
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 69d9dc2..7ef7dba 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -470,6 +470,23 @@ typedef int (*eth_rx_queue_setup_t)(struct rte_eth_dev *dev,
>  				    const struct rte_eth_rxconf *rx_conf,
>  				    struct rte_mempool *mb_pool);
>  
> +/**
> + * @internal Set Rx queue fill threshold.
> + * @see rte_eth_rx_fill_thresh_set()
> + */
> +typedef int (*eth_rx_queue_fill_thresh_set_t)(struct rte_eth_dev *dev,
> +				      uint16_t rx_queue_id,
> +				      uint8_t fill_thresh);
> +
> +/**
> + * @internal Query queue fill threshold event.
> + * @see rte_eth_rx_fill_thresh_query()
> + */
> +
> +typedef int (*eth_rx_queue_fill_thresh_query_t)(struct rte_eth_dev *dev,
> +					uint16_t *rx_queue_id,
> +					uint8_t *fill_thresh);
> +
>  /** @internal Setup a transmit queue of an Ethernet device. */
>  typedef int (*eth_tx_queue_setup_t)(struct rte_eth_dev *dev,
>  				    uint16_t tx_queue_id,
> @@ -1168,6 +1185,11 @@ struct eth_dev_ops {
>  	/** Priority flow control queue configure */
>  	priority_flow_ctrl_queue_config_t priority_flow_ctrl_queue_config;
>  
> +	/** Set Rx queue fill threshold. */
> +	eth_rx_queue_fill_thresh_set_t rx_queue_fill_thresh_set;
> +	/** Query Rx queue fill threshold event. */
> +	eth_rx_queue_fill_thresh_query_t rx_queue_fill_thresh_query;
> +
>  	/** Set Unicast Table Array */
>  	eth_uc_hash_table_set_t    uc_hash_table_set;
>  	/** Set Unicast hash bitmap */
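
As an aside for other PMD maintainers following along (nothing to change
in this patch): a rough, hypothetical sketch of the driver-side wiring,
assuming a per-queue structure "mypmd_rxq" that the driver marks when the
hardware reports the threshold crossing:

    #include <ethdev_driver.h>

    struct mypmd_rxq {
        uint8_t fill_thresh;   /* configured percentage, 0 = disabled */
        uint8_t thresh_event;  /* set when HW reports the crossing */
    };

    static int
    mypmd_rx_fill_thresh_set(struct rte_eth_dev *dev, uint16_t rx_queue_id,
                             uint8_t fill_thresh)
    {
        struct mypmd_rxq *rxq = dev->data->rx_queues[rx_queue_id];

        /* A real driver would program the HW watermark here. */
        rxq->fill_thresh = fill_thresh;
        return 0;
    }

    static int
    mypmd_rx_fill_thresh_query(struct rte_eth_dev *dev, uint16_t *rx_queue_id,
                               uint8_t *fill_thresh)
    {
        uint16_t nb_rxq = dev->data->nb_rx_queues;
        uint16_t i;

        /* Scan circularly from *rx_queue_id, as the API contract asks. */
        for (i = 0; i < nb_rxq; i++) {
            uint16_t qid = (*rx_queue_id + i) % nb_rxq;
            struct mypmd_rxq *rxq = dev->data->rx_queues[qid];

            if (rxq->thresh_event) {
                rxq->thresh_event = 0;
                *rx_queue_id = qid;
                *fill_thresh = rxq->fill_thresh;
                return 1;
            }
        }
        return 0;
    }

    /* ... and in the PMD's eth_dev_ops initializer:
     *     .rx_queue_fill_thresh_set = mypmd_rx_fill_thresh_set,
     *     .rx_queue_fill_thresh_query = mypmd_rx_fill_thresh_query,
     */
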
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index a175867..69a1f75 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -4424,6 +4424,58 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx,
>  							queue_idx, tx_rate));
>  }
>  
> +int rte_eth_rx_fill_thresh_set(uint16_t port_id, uint16_t queue_id,
> +			       uint8_t fill_thresh)
> +{
> +	struct rte_eth_dev *dev;
> +	struct rte_eth_dev_info dev_info;
> +	int ret;
> +
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	dev = &rte_eth_devices[port_id];
> +
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
> +	if (queue_id >= dev_info.max_rx_queues) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Set queue fill thresh: port %u: invalid queue ID=%u.\n",
> +			port_id, queue_id);
> +		return -EINVAL;
> +	}
> +
> +	if (fill_thresh > 99)
> +		return -EINVAL;
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_fill_thresh_set, -ENOTSUP);
> +	return eth_err(port_id, (*dev->dev_ops->rx_queue_fill_thresh_set)(dev,
> +							     queue_id, fill_thresh));
> +}
> +
> +int rte_eth_rx_fill_thresh_query(uint16_t port_id, uint16_t *queue_id,
> +				 uint8_t *fill_thresh)
> +{
> +	struct rte_eth_dev_info dev_info;
> +	struct rte_eth_dev *dev;
> +	int ret;
> +
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	dev = &rte_eth_devices[port_id];
> +
> +	ret = rte_eth_dev_info_get(port_id, &dev_info);
> +	if (ret != 0)
> +		return ret;
> +
> +	if (queue_id == NULL)
> +		return -EINVAL;
> +	if (*queue_id >= dev_info.max_rx_queues)
> +		*queue_id = 0;
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_fill_thresh_query, -ENOTSUP);
> +	return eth_err(port_id, (*dev->dev_ops->rx_queue_fill_thresh_query)(dev,
> +							     queue_id, fill_thresh));
> +}
> +
>  RTE_INIT(eth_dev_init_fp_ops)
>  {
>  	uint32_t i;
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 04225bb..d44e5da 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1931,6 +1931,14 @@ struct rte_eth_rxq_info {
>  	uint8_t queue_state;        /**< one of RTE_ETH_QUEUE_STATE_*. */
>  	uint16_t nb_desc;           /**< configured number of RXDs. */
>  	uint16_t rx_buf_size;       /**< hardware receive buffer size. */
> +	/**
> +	 * Per-queue Rx fill threshold, defined as a percentage of the Rx
> +	 * queue size. If the Rx queue fullness goes above this percentage,
> +	 * the event RTE_ETH_EVENT_RX_FILL_THRESH is triggered.
> +	 * Value 0 means threshold monitoring is disabled and no event is
> +	 * triggered.
> +	 */
> +	uint8_t fill_thresh;
>  } __rte_cache_min_aligned;
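
Minor observation, no change needed: since the value is also exposed via
rte_eth_rxq_info, an application can read back what is currently
configured with the existing rte_eth_rx_queue_info_get(), provided the
PMD fills the new field in its rxq_info_get callback. A tiny sketch
(same includes as the earlier sketch, plus <stdio.h>):

    static void
    show_fill_thresh(uint16_t port_id, uint16_t rxq_id)
    {
        struct rte_eth_rxq_info qinfo;

        if (rte_eth_rx_queue_info_get(port_id, rxq_id, &qinfo) == 0)
            printf("rxq %u fill_thresh = %u%%\n", rxq_id, qinfo.fill_thresh);
    }
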
>  
>  /**
> @@ -3672,6 +3680,65 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
>   */
>  int rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on);
>  
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Set Rx queue based fill threshold.
> + *
> + * @param port_id
> + *  The port identifier of the Ethernet device.
> + * @param queue_id
> + *  The index of the receive queue.
> + * @param fill_thresh
> + *  The fill threshold percentage of the Rx queue size, which describes
> + *  the fullness of the Rx queue. If the Rx queue fullness is above it,
> + *  the device triggers the event RTE_ETH_EVENT_RX_FILL_THRESH.
> + *  [1-99] to set a new fill threshold.
> + *  0 to disable threshold monitoring.
> + *
> + * @return
> + *   - 0 if successful.
> + *   - negative if failed.
> + */
> +__rte_experimental
> +int rte_eth_rx_fill_thresh_set(uint16_t port_id, uint16_t queue_id,
> +			       uint8_t fill_thresh);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Query Rx queue based fill threshold event.
> + * The function scans all Rx queues of the port circularly until a
> + * pending fill_thresh event is found or all queues have been checked.
> + *
> + * @param port_id
> + *  The port identifier of the Ethernet device.
> + * @param queue_id
> + *  The API caller sets the starting Rx queue id in the pointer.
> + *  If queue_id is bigger than the maximum queue id of the port,
> + *  it is wrapped around to 0, so that the application can keep calling
> + *  this function to handle all pending fill_thresh events in the queues
> + *  with a simple increment between calls.
> + *  If an Rx queue has a pending fill_thresh event, the pointer is updated
> + *  with this Rx queue id; otherwise the pointer's content is left
> + *  unchanged.
> + * @param fill_thresh
> + *  The pointer to the fill threshold percentage of the Rx queue.
> + *  If an Rx queue with a pending fill_thresh event is found, the queue's
> + *  fill_thresh percentage is stored in this pointer; otherwise the
> + *  pointer's content is left unchanged.
> + *
> + * @return
> + *   - 1 if an Rx queue with a pending fill_thresh event is found.
> + *   - 0 if no Rx queue with a pending fill_thresh event is found.
> + *   - -EINVAL if queue_id is NULL.
> + */
> +__rte_experimental
> +int rte_eth_rx_fill_thresh_query(uint16_t port_id, uint16_t *queue_id,
> +				 uint8_t *fill_thresh);
> +
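
For what it's worth, a short sketch of the polling contract described
above from the application side, mirroring what the testpmd hunk does
(hypothetical callback name, untested):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    app_fill_thresh_cb(uint16_t port_id, enum rte_eth_event_type type,
                       void *cb_arg, void *ret_param)
    {
        uint16_t rxq_id = 0;
        uint8_t thresh;

        (void)type;
        (void)cb_arg;
        (void)ret_param;

        /* The query wraps rxq_id back to 0 by itself, so a bare increment
         * between calls is enough to drain every pending event. */
        while (rte_eth_rx_fill_thresh_query(port_id, &rxq_id, &thresh) > 0) {
            printf("port %u rxq %u is above %u%% full\n",
                   port_id, rxq_id, thresh);
            rxq_id++;
        }
        return 0;
    }
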
>  typedef void (*buffer_tx_error_fn)(struct rte_mbuf **unsent, uint16_t count,
>  		void *userdata);
>  
> @@ -3877,6 +3944,11 @@ enum rte_eth_event_type {
>  	RTE_ETH_EVENT_DESTROY,  /**< port is released */
>  	RTE_ETH_EVENT_IPSEC,    /**< IPsec offload related event */
>  	RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
> +	/**
> +	 *  Fill threshold value is exceeded in a queue.
> +	 *  @see rte_eth_rx_fill_thresh_set()
> +	 */
> +	RTE_ETH_EVENT_RX_FILL_THRESH,
>  	RTE_ETH_EVENT_MAX       /**< max value of this enum */
>  };
>  
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index daca785..29b1fe8 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -285,6 +285,8 @@ EXPERIMENTAL {
>  	rte_mtr_color_in_protocol_priority_get;
>  	rte_mtr_color_in_protocol_set;
>  	rte_mtr_meter_vlan_table_update;

Please add the usual '# added in 22.XX' comment above the new symbols,
as is the convention in this version map.

> +	rte_eth_rx_fill_thresh_set;
> +	rte_eth_rx_fill_thresh_query;
>  };
>  
>  INTERNAL {


-- 
Regards, Ray K