DPDK patches and discussions
From: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
To: <dev@dpdk.org>
Cc: Ori Kam <orika@oss.nvidia.com>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	"Ajit Khaparde" <ajit.khaparde@broadcom.com>,
	Somnath Kotur <somnath.kotur@broadcom.com>,
	Nithin Dabilpuram <ndabilpuram@marvell.com>,
	Kiran Kumar K <kirankumark@marvell.com>,
	Sunil Kumar Kori <skori@marvell.com>,
	Satha Rao <skoteshwar@marvell.com>,
	Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>,
	Sachin Saxena <sachin.saxena@oss.nxp.com>,
	Haiyue Wang <haiyue.wang@intel.com>,
	John Daley <johndale@cisco.com>,
	Hyong Youb Kim <hyonkim@cisco.com>, Gaetan Rivet <grive@u256.net>,
	Ziyang Xuan <xuanziyang2@huawei.com>,
	Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
	Guoyang Zhou <zhouguoyang@huawei.com>,
	"Min Hu (Connor)" <humin29@huawei.com>,
	Yisen Zhuang <yisen.zhuang@huawei.com>,
	Lijun Ou <oulijun@huawei.com>,
	Beilei Xing <beilei.xing@intel.com>,
	Jingjing Wu <jingjing.wu@intel.com>,
	Qiming Yang <qiming.yang@intel.com>,
	Qi Zhang <qi.z.zhang@intel.com>, Rosen Xu <rosen.xu@intel.com>,
	Liron Himi <lironh@marvell.com>, Jerin Jacob <jerinj@marvell.com>,
	Rasesh Mody <rmody@marvell.com>,
	Devendra Singh Rawat <dsinghrawat@marvell.com>,
	"Andrew Rybchenko" <andrew.rybchenko@oktetlabs.ru>,
	Jasvinder Singh <jasvinder.singh@intel.com>,
	Cristian Dumitrescu <cristian.dumitrescu@intel.com>,
	Keith Wiles <keith.wiles@intel.com>,
	"Jiawen Wu" <jiawenwu@trustnetic.com>,
	Jian Wang <jianwang@trustnetic.com>
Subject: [dpdk-dev] [PATCH v4 3/6] net: advertise no support for keeping flow rules
Date: Thu, 21 Oct 2021 09:35:00 +0300
Message-ID: <20211021063503.3632732-4-dkozlyuk@nvidia.com>
In-Reply-To: <20211021063503.3632732-1-dkozlyuk@nvidia.com>

When the RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP capability bit is zero,
the specified behavior is the same as it was before
this bit was introduced. Explicitly reset it in all PMDs
supporting the rte_flow API in order to attract the attention
of maintainers, who should eventually decide whether to
advertise the new capability. It is already known that
mlx4 and mlx5 will not support this capability.
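
A minimal application-side sketch (assuming only the capability bit
added in patch 1/6 and the existing rte_eth_dev_info_get() API; the
helper name is illustrative) of how the bit can be queried to decide
whether flow rules must be re-created after a port restart:

	#include <rte_ethdev.h>

	/* Return 1 if the port keeps flow rules across restart,
	 * 0 if it does not, or a negative errno on query failure.
	 */
	static int
	flow_rules_kept_across_restart(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;
		int ret;

		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;

		/* A zero bit means the behavior predating this series:
		 * assume rules are flushed on device stop.
		 */
		return (dev_info.dev_capa &
			RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP) != 0;
	}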

For RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP,
a similar change is not made,
because no PMD except mlx5 supports indirect actions.
Any PMD that starts doing so will have to consider
all relevant APIs anyway, including this capability.
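
For illustration, a PMD that later implements indirect actions but
cannot keep them across restart would advertise that with the same
single-line pattern used in the hunks below (hypothetical driver
code, not part of this patch):

	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP;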

Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
 drivers/net/bnxt/bnxt_ethdev.c          | 1 +
 drivers/net/bnxt/bnxt_reps.c            | 1 +
 drivers/net/cnxk/cnxk_ethdev_ops.c      | 1 +
 drivers/net/cxgbe/cxgbe_ethdev.c        | 2 ++
 drivers/net/dpaa2/dpaa2_ethdev.c        | 1 +
 drivers/net/e1000/em_ethdev.c           | 2 ++
 drivers/net/e1000/igb_ethdev.c          | 1 +
 drivers/net/enic/enic_ethdev.c          | 1 +
 drivers/net/failsafe/failsafe_ops.c     | 1 +
 drivers/net/hinic/hinic_pmd_ethdev.c    | 2 ++
 drivers/net/hns3/hns3_ethdev.c          | 1 +
 drivers/net/hns3/hns3_ethdev_vf.c       | 1 +
 drivers/net/i40e/i40e_ethdev.c          | 1 +
 drivers/net/i40e/i40e_vf_representor.c  | 2 ++
 drivers/net/iavf/iavf_ethdev.c          | 1 +
 drivers/net/ice/ice_dcf_ethdev.c        | 1 +
 drivers/net/igc/igc_ethdev.c            | 1 +
 drivers/net/ipn3ke/ipn3ke_representor.c | 1 +
 drivers/net/mvpp2/mrvl_ethdev.c         | 2 ++
 drivers/net/octeontx2/otx2_ethdev_ops.c | 1 +
 drivers/net/qede/qede_ethdev.c          | 1 +
 drivers/net/sfc/sfc_ethdev.c            | 1 +
 drivers/net/softnic/rte_eth_softnic.c   | 1 +
 drivers/net/tap/rte_eth_tap.c           | 1 +
 drivers/net/txgbe/txgbe_ethdev.c        | 1 +
 drivers/net/txgbe/txgbe_ethdev_vf.c     | 1 +
 26 files changed, 31 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f..dbdcdb1ec4 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1008,6 +1008,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
 	dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			     RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a..34b5df6018 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -526,6 +526,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	dev_info->max_tx_queues = max_rx_rings;
 	dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
 	dev_info->hash_key_size = 40;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	/* MTU specifics */
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df761..b598512322 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -68,6 +68,7 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	devinfo->speed_capa = dev->speed_capa;
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	return 0;
 }
 
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b297600..e654ccc854 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -131,6 +131,8 @@ int cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
 	device_info->max_vfs = adapter->params.arch.vfcount;
 	device_info->max_vmdq_pools = 0; /* XXX: For now no support for VMDQ */
 
+	device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	device_info->rx_queue_offload_capa = 0UL;
 	device_info->rx_offload_capa = CXGBE_RX_OFFLOADS;
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e7852..19f35262e5 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -254,6 +254,7 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->speed_capa = ETH_LINK_SPEED_1G |
 			ETH_LINK_SPEED_2_5G |
 			ETH_LINK_SPEED_10G;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6e..3d546c5517 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1106,6 +1106,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
 			ETH_LINK_SPEED_1G;
 
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
 	dev_info->default_txportconf.nb_queues = 1;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad..d1e61ea345 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2174,6 +2174,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_queue_offload_capa = igb_get_tx_queue_offloads_capa(dev);
 	dev_info->tx_offload_capa = igb_get_tx_port_offloads_capa(dev) |
 				    dev_info->tx_queue_offload_capa;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	switch (hw->mac.type) {
 	case e1000_82575:
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5..4e8ccfd832 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -469,6 +469,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
 	device_info->rx_offload_capa = enic->rx_offload_capa;
 	device_info->tx_offload_capa = enic->tx_offload_capa;
 	device_info->tx_queue_offload_capa = enic->tx_queue_offload_capa;
+	device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	device_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = ENIC_DEFAULT_RX_FREE_THRESH
 	};
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c..9e9c688961 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1220,6 +1220,7 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	infos->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
 		struct rte_eth_dev_info sub_info;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb67..ff287321c5 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -751,6 +751,8 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 				DEV_TX_OFFLOAD_TCP_TSO |
 				DEV_TX_OFFLOAD_MULTI_SEGS;
 
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
 	info->flow_type_rss_offloads = HINIC_RSS_OFFLOAD_ALL;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f587..4177c0db41 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2707,6 +2707,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	if (hns3_dev_get_support(hw, PTP))
 		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798..b53e9be091 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -965,6 +965,7 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d..e472cee167 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3751,6 +3751,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
 						sizeof(uint32_t);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a..4d5a4af292 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -35,6 +35,8 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	/* get dev info for the vdev */
 	dev_info->device = ethdev->device;
 
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	dev_info->max_rx_queues = ethdev->data->nb_rx_queues;
 	dev_info->max_tx_queues = ethdev->data->nb_tx_queues;
 
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722..9bb5bdf465 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -960,6 +960,7 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = vf->vf_res->rss_lut_size;
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_offload_capa =
 		DEV_RX_OFFLOAD_VLAN_STRIP |
 		DEV_RX_OFFLOAD_QINQ_STRIP |
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b8a537cb85..05a7ccf71e 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -673,6 +673,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->hash_key_size = hw->vf_res->rss_key_size;
 	dev_info->reta_size = hw->vf_res->rss_lut_size;
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->rx_offload_capa =
 		DEV_RX_OFFLOAD_VLAN_STRIP |
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b64..7d4cc408ba 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1480,6 +1480,7 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen = MAX_RX_JUMBO_FRAME_SIZE;
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
 	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f..d40947162d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -96,6 +96,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->switch_info.name = ethdev->device->name;
 	dev_info->switch_info.domain_id = rpst->switch_domain_id;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9..a6d67ea093 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -1709,6 +1709,8 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	info->speed_capa = ETH_LINK_SPEED_10M |
 			   ETH_LINK_SPEED_100M |
 			   ETH_LINK_SPEED_1G |
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba..cad5416ba2 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -583,6 +583,7 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	return 0;
 }
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc7..5bcc97d314 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1367,6 +1367,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
 	dev_info->rx_desc_lim = qede_rx_desc_lim;
 	dev_info->tx_desc_lim = qede_tx_desc_lim;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	if (IS_PF(edev))
 		dev_info->max_rx_queues = (uint16_t)RTE_MIN(
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610f..8951495841 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -186,6 +186,7 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			     RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	if (mae->status == SFC_MAE_STATUS_SUPPORTED ||
 	    mae->status == SFC_MAE_STATUS_ADMIN) {
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035..3622049afa 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -93,6 +93,7 @@ pmd_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_rx_pktlen = UINT32_MAX;
 	dev_info->max_rx_queues = UINT16_MAX;
 	dev_info->max_tx_queues = UINT16_MAX;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	return 0;
 }
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad4521..5e19bd8d4b 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1006,6 +1006,7 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	 * functions together and not in partial combinations
 	 */
 	dev_info->flow_type_rss_offloads = ~TAP_RSS_HF_MASK;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	return 0;
 }
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb686..6d64c657d9 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2603,6 +2603,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = pci_dev->max_vfs;
 	dev_info->max_vmdq_pools = ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b..0d464c5a4c 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -487,6 +487,7 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
-- 
2.25.1



Thread overview: 96+ messages
2021-10-05  0:52 [dpdk-dev] [PATCH 0/5] Flow entites behavior on port restart dkozlyuk
2021-10-05  0:52 ` [dpdk-dev] [PATCH 1/5] ethdev: add capability to keep flow rules on restart dkozlyuk
2021-10-06  6:15   ` Ori Kam
2021-10-06  6:55     ` Somnath Kotur
2021-10-06 17:15   ` Ajit Khaparde
2021-10-05  0:52 ` [dpdk-dev] [PATCH 2/5] ethdev: add capability to keep shared objects " dkozlyuk
2021-10-06  6:16   ` Ori Kam
2021-10-13  8:32   ` Dmitry Kozlyuk
2021-10-14 13:46     ` Ferruh Yigit
2021-10-14 21:45       ` Dmitry Kozlyuk
2021-10-14 21:48         ` Dmitry Kozlyuk
2021-10-15 11:46         ` Ferruh Yigit
2021-10-15 12:35           ` Dmitry Kozlyuk
2021-10-15 16:26             ` Ferruh Yigit
2021-10-16 20:32               ` Dmitry Kozlyuk
2021-10-18  8:42                 ` Ferruh Yigit
2021-10-18 11:13                   ` Dmitry Kozlyuk
2021-10-18 11:59                     ` Ferruh Yigit
2021-10-14 14:14     ` Dmitry Kozlyuk
2021-10-15  8:26       ` Andrew Rybchenko
2021-10-15  9:04         ` Dmitry Kozlyuk
2021-10-15  9:36           ` Andrew Rybchenko
2021-10-05  0:52 ` [dpdk-dev] [PATCH 3/5] net/mlx5: discover max flow priority using DevX dkozlyuk
2021-10-05  0:52 ` [dpdk-dev] [PATCH 4/5] net/mlx5: create drop queue " dkozlyuk
2021-10-05  0:52 ` [dpdk-dev] [PATCH 5/5] net/mlx5: preserve indirect actions on restart dkozlyuk
2021-10-15 16:18 ` [dpdk-dev] [PATCH v2 0/5] Flow entites behavior on port restart Dmitry Kozlyuk
2021-10-15 16:18   ` [dpdk-dev] [PATCH v2 1/5] ethdev: add capability to keep flow rules on restart Dmitry Kozlyuk
2021-10-18  8:56     ` Andrew Rybchenko
2021-10-19 12:38       ` Dmitry Kozlyuk
2021-10-18 13:06     ` Zhang, Qi Z
2021-10-18 22:51       ` Dmitry Kozlyuk
2021-10-19  1:00         ` Zhang, Qi Z
2021-10-15 16:18   ` [dpdk-dev] [PATCH v2 2/5] ethdev: add capability to keep shared objects " Dmitry Kozlyuk
2021-10-17  8:10     ` Ori Kam
2021-10-17  9:14       ` Dmitry Kozlyuk
2021-10-17  9:45         ` Ori Kam
2021-10-15 16:18   ` [dpdk-dev] [PATCH v2 3/5] net/mlx5: discover max flow priority using DevX Dmitry Kozlyuk
2021-10-15 16:18   ` [dpdk-dev] [PATCH v2 4/5] net/mlx5: create drop queue " Dmitry Kozlyuk
2021-10-15 16:18   ` [dpdk-dev] [PATCH v2 5/5] net/mlx5: preserve indirect actions on restart Dmitry Kozlyuk
2021-10-19 12:37   ` [dpdk-dev] [PATCH v3 0/6] Flow entites behavior on port restart Dmitry Kozlyuk
2021-10-19 12:37     ` [dpdk-dev] [PATCH v3 1/6] ethdev: add capability to keep flow rules on restart Dmitry Kozlyuk
2021-10-19 15:22       ` Ori Kam
2021-10-19 16:38       ` Ferruh Yigit
2021-10-19 17:13         ` Dmitry Kozlyuk
2021-10-20 10:39       ` Andrew Rybchenko
2021-10-20 11:40         ` Dmitry Kozlyuk
2021-10-20 13:40           ` Ori Kam
2021-10-19 12:37     ` [dpdk-dev] [PATCH v3 2/6] ethdev: add capability to keep shared objects " Dmitry Kozlyuk
2021-10-19 15:22       ` Ori Kam
2021-10-19 12:37     ` [dpdk-dev] [PATCH v3 3/6] net: advertise no support for keeping flow rules Dmitry Kozlyuk
2021-10-20 10:08       ` Andrew Rybchenko
2021-10-20 22:20         ` Dmitry Kozlyuk
2021-10-19 12:37     ` [dpdk-dev] [PATCH v3 4/6] net/mlx5: discover max flow priority using DevX Dmitry Kozlyuk
2021-10-19 12:37     ` [dpdk-dev] [PATCH v3 5/6] net/mlx5: create drop queue " Dmitry Kozlyuk
2021-10-19 12:37     ` [dpdk-dev] [PATCH v3 6/6] net/mlx5: preserve indirect actions on restart Dmitry Kozlyuk
2021-10-20 10:12     ` [dpdk-dev] [PATCH v3 0/6] Flow entites behavior on port restart Andrew Rybchenko
2021-10-20 13:21       ` Dmitry Kozlyuk
2021-10-21  6:34     ` [dpdk-dev] [PATCH v4 " Dmitry Kozlyuk
2021-10-21  6:34       ` [dpdk-dev] [PATCH v4 1/6] ethdev: add capability to keep flow rules on restart Dmitry Kozlyuk
2021-10-21  7:36         ` Ori Kam
2021-10-28 18:33         ` Ajit Khaparde
2021-11-01 15:02         ` Andrew Rybchenko
2021-11-01 15:56           ` Dmitry Kozlyuk
2021-10-21  6:34       ` [dpdk-dev] [PATCH v4 2/6] ethdev: add capability to keep shared objects " Dmitry Kozlyuk
2021-10-21  7:37         ` Ori Kam
2021-10-21 18:28         ` Ajit Khaparde
2021-11-01 15:04         ` Andrew Rybchenko
2021-10-21  6:35       ` Dmitry Kozlyuk [this message]
2021-10-21 18:26         ` [dpdk-dev] [PATCH v4 3/6] net: advertise no support for keeping flow rules Ajit Khaparde
2021-10-22  1:38           ` Somnath Kotur
2021-10-27  7:11         ` Hyong Youb Kim (hyonkim)
2021-11-01 15:06         ` Andrew Rybchenko
2021-11-01 16:59           ` Ferruh Yigit
2021-10-21  6:35       ` [dpdk-dev] [PATCH v4 4/6] net/mlx5: discover max flow priority using DevX Dmitry Kozlyuk
2021-10-21  6:35       ` [dpdk-dev] [PATCH v4 5/6] net/mlx5: create drop queue " Dmitry Kozlyuk
2021-10-21  6:35       ` [dpdk-dev] [PATCH v4 6/6] net/mlx5: preserve indirect actions on restart Dmitry Kozlyuk
2021-10-26 11:46       ` [dpdk-dev] [PATCH v4 0/6] Flow entites behavior on port restart Ferruh Yigit
2021-11-01 13:43         ` Ferruh Yigit
2021-11-02 13:49       ` Ferruh Yigit
2021-11-02 13:54       ` [dpdk-dev] [PATCH v5 " Dmitry Kozlyuk
2021-11-02 13:54         ` [dpdk-dev] [PATCH v5 1/6] ethdev: add capability to keep flow rules on restart Dmitry Kozlyuk
2021-11-02 13:54         ` [dpdk-dev] [PATCH v5 2/6] ethdev: add capability to keep shared objects " Dmitry Kozlyuk
2021-11-02 13:54         ` [dpdk-dev] [PATCH v5 3/6] net: advertise no support for keeping flow rules Dmitry Kozlyuk
2021-11-02 13:54         ` [dpdk-dev] [PATCH v5 4/6] net/mlx5: discover max flow priority using DevX Dmitry Kozlyuk
2021-11-02 13:54         ` [dpdk-dev] [PATCH v5 5/6] net/mlx5: create drop queue " Dmitry Kozlyuk
2021-11-02 13:54         ` [dpdk-dev] [PATCH v5 6/6] net/mlx5: preserve indirect actions on restart Dmitry Kozlyuk
2021-11-02 14:23         ` [dpdk-dev] [PATCH v5 0/6] Flow entites behavior on port restart Ferruh Yigit
2021-11-02 17:02           ` Dmitry Kozlyuk
2021-11-02 17:01         ` [dpdk-dev] [PATCH v6 " Dmitry Kozlyuk
2021-11-02 17:01           ` [dpdk-dev] [PATCH v6 1/6] ethdev: add capability to keep flow rules on restart Dmitry Kozlyuk
2021-11-02 17:01           ` [dpdk-dev] [PATCH v6 2/6] ethdev: add capability to keep shared objects " Dmitry Kozlyuk
2021-11-02 17:01           ` [dpdk-dev] [PATCH v6 3/6] net: advertise no support for keeping flow rules Dmitry Kozlyuk
2021-11-02 17:01           ` [dpdk-dev] [PATCH v6 4/6] net/mlx5: discover max flow priority using DevX Dmitry Kozlyuk
2021-11-02 17:01           ` [dpdk-dev] [PATCH v6 5/6] net/mlx5: create drop queue " Dmitry Kozlyuk
2021-11-02 17:01           ` [dpdk-dev] [PATCH v6 6/6] net/mlx5: preserve indirect actions on restart Dmitry Kozlyuk
2021-11-02 18:02           ` [dpdk-dev] [PATCH v6 0/6] Flow entites behavior on port restart Ferruh Yigit
