From: Christian Ehrhardt <christian.ehrhardt@canonical.com>
To: Huisong Li <lihuisong@huawei.com>
Cc: Min Hu <humin29@huawei.com>, dpdk stable <stable@dpdk.org>
Subject: Re: [dpdk-stable] patch 'net/hns3: fix delay for waiting to stop Rx/Tx' has been queued to stable release 19.11.10
Date: Wed, 11 Aug 2021 11:02:46 +0200 [thread overview]
Message-ID: <CAATJJ0L+2ekLS6UaBRmqCb=sHdN-xQ7Ug=LwKoL9wRop6-L8EA@mail.gmail.com> (raw)
In-Reply-To: <20210810154022.749358-31-christian.ehrhardt@canonical.com>
On Tue, Aug 10, 2021 at 5:42 PM <christian.ehrhardt@canonical.com> wrote:
>
> Hi,
>
> FYI, your patch has been queued to stable release 19.11.10
>
> Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
> It will be pushed if I get no objections before 08/12/21. So please
> shout if anyone has objections.
>
> Also note that after the patch there's a diff of the upstream commit vs the
> patch applied to the branch. This will indicate if there was any rebasing
> needed to apply to the stable branch. If there were code changes for rebasing
> (ie: not only metadata diffs), please double check that the rebase was
> correctly done.
>
> Queued patches are on a temporary branch at:
> https://github.com/cpaelzer/dpdk-stable-queue
>
> This queued commit can be viewed at:
> https://github.com/cpaelzer/dpdk-stable-queue/commit/249c35152a9bcd6d4c4b52776602750552dcf294
>
> Thanks.
>
> Christian Ehrhardt <christian.ehrhardt@canonical.com>
>
> ---
> From 249c35152a9bcd6d4c4b52776602750552dcf294 Mon Sep 17 00:00:00 2001
> From: Huisong Li <lihuisong@huawei.com>
> Date: Sun, 13 Jun 2021 10:31:52 +0800
> Subject: [PATCH] net/hns3: fix delay for waiting to stop Rx/Tx
>
> [ upstream commit 4d8cce267840556cec8483c61f8cfbf25873496d ]
>
> When the primary process executes dev_stop or is being reset, the packet
> sending and receiving functions is changed. In this moment, the primary
> process requests secondary processes to change their Rx/Tx functions, and
> delays a period of time in case of crashes when queues are still in use.
> The delay time depends on the number of queues actually used, instead of
> the maximum number of queues supported by the device.
>
> Fixes: 23d4b61fee5d ("net/hns3: support multiple process")
>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> ---
> drivers/net/hns3/hns3_ethdev.c | 1184 ++++++++++++++++++++++++++++-
> drivers/net/hns3/hns3_ethdev_vf.c | 4 +-
> 2 files changed, 1184 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index ac82e0b5ef..e1bc55682c 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -4742,7 +4742,7 @@ hns3_dev_stop(struct rte_eth_dev *dev)
> /* Disable datapath on secondary process. */
> hns3_mp_req_stop_rxtx(dev);
> /* Prevent crashes when queues are still in use. */
> - rte_delay_ms(hw->tqps_num);
> + rte_delay_ms(hw->cfg_max_queues);
>
> rte_spinlock_lock(&hw->lock);
> if (rte_atomic16_read(&hw->reset.resetting) == 0) {
> @@ -5130,10 +5130,1190 @@ hns3_get_reset_level(struct hns3_adapter *hns, uint64_t *levels)
> reset_level = HNS3_IMP_RESET;
> else if (hns3_atomic_test_bit(HNS3_GLOBAL_RESET, levels))
> reset_level = HNS3_GLOBAL_RESET;
> +<<<<<<< HEAD
I beg your pardon, this was my mistake to be missing this remaining
three-way-merge snippet when backporting.
I have fixed it now - the patch is still applied, looks correct to me
now and can be seen at
https://github.com/cpaelzer/dpdk-stable-queue/commit/779b8527cef5227ff7972522b5ac5c3cae3e8169
> else if (hns3_atomic_test_bit(HNS3_FUNC_RESET, levels))
> reset_level = HNS3_FUNC_RESET;
> else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
> reset_level = HNS3_FLR_RESET;
> +||||||| constructed merge base
> + else if (hns3_atomic_test_bit(HNS3_FUNC_RESET, levels))
> + reset_level = HNS3_FUNC_RESET;
> + else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
> + reset_level = HNS3_FLR_RESET;
> +
> + if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
> + return HNS3_NONE_RESET;
> +
> + return reset_level;
> +}
> +
> +static void
> +hns3_record_imp_error(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + uint32_t reg_val;
> +
> + reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> + if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B)) {
> + hns3_warn(hw, "Detected IMP RD poison!");
> + hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B, 0);
> + hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> + }
> +
> + if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B)) {
> + hns3_warn(hw, "Detected IMP CMDQ error!");
> + hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B, 0);
> + hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> + }
> +}
> +
> +static int
> +hns3_prepare_reset(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + uint32_t reg_val;
> + int ret;
> +
> + switch (hw->reset.level) {
> + case HNS3_FUNC_RESET:
> + ret = hns3_func_reset_cmd(hw, HNS3_PF_FUNC_ID);
> + if (ret)
> + return ret;
> +
> + /*
> + * After performaning pf reset, it is not necessary to do the
> + * mailbox handling or send any command to firmware, because
> + * any mailbox handling or command to firmware is only valid
> + * after hns3_cmd_init is called.
> + */
> + __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
> + hw->reset.stats.request_cnt++;
> + break;
> + case HNS3_IMP_RESET:
> + hns3_record_imp_error(hns);
> + reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> + hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val |
> + BIT(HNS3_VECTOR0_IMP_RESET_INT_B));
> + break;
> + default:
> + break;
> + }
> + return 0;
> +}
> +
> +static int
> +hns3_set_rst_done(struct hns3_hw *hw)
> +{
> + struct hns3_pf_rst_done_cmd *req;
> + struct hns3_cmd_desc desc;
> +
> + req = (struct hns3_pf_rst_done_cmd *)desc.data;
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_PF_RST_DONE, false);
> + req->pf_rst_done |= HNS3_PF_RESET_DONE_BIT;
> + return hns3_cmd_send(hw, &desc, 1);
> +}
> +
> +static int
> +hns3_stop_service(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + struct rte_eth_dev *eth_dev;
> +
> + eth_dev = &rte_eth_devices[hw->data->port_id];
> + hw->mac.link_status = ETH_LINK_DOWN;
> + if (hw->adapter_state == HNS3_NIC_STARTED) {
> + rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
> + hns3_update_linkstatus_and_event(hw, false);
> + }
> +
> + hns3_set_rxtx_function(eth_dev);
> + rte_wmb();
> + /* Disable datapath on secondary process. */
> + hns3_mp_req_stop_rxtx(eth_dev);
> + rte_delay_ms(hw->cfg_max_queues);
> +
> + rte_spinlock_lock(&hw->lock);
> + if (hns->hw.adapter_state == HNS3_NIC_STARTED ||
> + hw->adapter_state == HNS3_NIC_STOPPING) {
> + hns3_enable_all_queues(hw, false);
> + hns3_do_stop(hns);
> + hw->reset.mbuf_deferred_free = true;
> + } else
> + hw->reset.mbuf_deferred_free = false;
> +
> + /*
> + * It is cumbersome for hardware to pick-and-choose entries for deletion
> + * from table space. Hence, for function reset software intervention is
> + * required to delete the entries
> + */
> + if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0)
> + hns3_configure_all_mc_mac_addr(hns, true);
> + rte_spinlock_unlock(&hw->lock);
> +
> + return 0;
> +}
> +
> +static int
> +hns3_start_service(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + struct rte_eth_dev *eth_dev;
> +
> + if (hw->reset.level == HNS3_IMP_RESET ||
> + hw->reset.level == HNS3_GLOBAL_RESET)
> + hns3_set_rst_done(hw);
> + eth_dev = &rte_eth_devices[hw->data->port_id];
> + hns3_set_rxtx_function(eth_dev);
> + hns3_mp_req_start_rxtx(eth_dev);
> + if (hw->adapter_state == HNS3_NIC_STARTED) {
> + /*
> + * This API parent function already hold the hns3_hw.lock, the
> + * hns3_service_handler may report lse, in bonding application
> + * it will call driver's ops which may acquire the hns3_hw.lock
> + * again, thus lead to deadlock.
> + * We defer calls hns3_service_handler to avoid the deadlock.
> + */
> + rte_eal_alarm_set(HNS3_SERVICE_QUICK_INTERVAL,
> + hns3_service_handler, eth_dev);
> +
> + /* Enable interrupt of all rx queues before enabling queues */
> + hns3_dev_all_rx_queue_intr_enable(hw, true);
> + /*
> + * Enable state of each rxq and txq will be recovered after
> + * reset, so we need to restore them before enable all tqps;
> + */
> + hns3_restore_tqp_enable_state(hw);
> + /*
> + * When finished the initialization, enable queues to receive
> + * and transmit packets.
> + */
> + hns3_enable_all_queues(hw, true);
> + }
> +
> + return 0;
> +}
> +
> +static int
> +hns3_restore_conf(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + int ret;
> +
> + ret = hns3_configure_all_mac_addr(hns, false);
> + if (ret)
> + return ret;
> +
> + ret = hns3_configure_all_mc_mac_addr(hns, false);
> + if (ret)
> + goto err_mc_mac;
> +
> + ret = hns3_dev_promisc_restore(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_vlan_table(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_vlan_conf(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_all_fdir_filter(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_ptp(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_rx_interrupt(hw);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_gro_conf(hw);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_fec(hw);
> + if (ret)
> + goto err_promisc;
> +
> + if (hns->hw.adapter_state == HNS3_NIC_STARTED) {
> + ret = hns3_do_start(hns, false);
> + if (ret)
> + goto err_promisc;
> + hns3_info(hw, "hns3 dev restart successful!");
> + } else if (hw->adapter_state == HNS3_NIC_STOPPING)
> + hw->adapter_state = HNS3_NIC_CONFIGURED;
> + return 0;
> +
> +err_promisc:
> + hns3_configure_all_mc_mac_addr(hns, true);
> +err_mc_mac:
> + hns3_configure_all_mac_addr(hns, true);
> + return ret;
> +}
> +
> +static void
> +hns3_reset_service(void *param)
> +{
> + struct hns3_adapter *hns = (struct hns3_adapter *)param;
> + struct hns3_hw *hw = &hns->hw;
> + enum hns3_reset_level reset_level;
> + struct timeval tv_delta;
> + struct timeval tv_start;
> + struct timeval tv;
> + uint64_t msec;
> + int ret;
> +
> + /*
> + * The interrupt is not triggered within the delay time.
> + * The interrupt may have been lost. It is necessary to handle
> + * the interrupt to recover from the error.
> + */
> + if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) ==
> + SCHEDULE_DEFERRED) {
> + __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED,
> + __ATOMIC_RELAXED);
> + hns3_err(hw, "Handling interrupts in delayed tasks");
> + hns3_interrupt_handler(&rte_eth_devices[hw->data->port_id]);
> + reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> + if (reset_level == HNS3_NONE_RESET) {
> + hns3_err(hw, "No reset level is set, try IMP reset");
> + hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
> + }
> + }
> + __atomic_store_n(&hw->reset.schedule, SCHEDULE_NONE, __ATOMIC_RELAXED);
> +
> + /*
> + * Check if there is any ongoing reset in the hardware. This status can
> + * be checked from reset_pending. If there is then, we need to wait for
> + * hardware to complete reset.
> + * a. If we are able to figure out in reasonable time that hardware
> + * has fully resetted then, we can proceed with driver, client
> + * reset.
> + * b. else, we can come back later to check this status so re-sched
> + * now.
> + */
> + reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> + if (reset_level != HNS3_NONE_RESET) {
> + hns3_clock_gettime(&tv_start);
> + ret = hns3_reset_process(hns, reset_level);
> + hns3_clock_gettime(&tv);
> + timersub(&tv, &tv_start, &tv_delta);
> + msec = hns3_clock_calctime_ms(&tv_delta);
> + if (msec > HNS3_RESET_PROCESS_MS)
> + hns3_err(hw, "%d handle long time delta %" PRIu64
> + " ms time=%ld.%.6ld",
> + hw->reset.level, msec,
> + tv.tv_sec, tv.tv_usec);
> + if (ret == -EAGAIN)
> + return;
> + }
> +
> + /* Check if we got any *new* reset requests to be honored */
> + reset_level = hns3_get_reset_level(hns, &hw->reset.request);
> + if (reset_level != HNS3_NONE_RESET)
> + hns3_msix_process(hns, reset_level);
> +}
> +
> +static unsigned int
> +hns3_get_speed_capa_num(uint16_t device_id)
> +{
> + unsigned int num;
> +
> + switch (device_id) {
> + case HNS3_DEV_ID_25GE:
> + case HNS3_DEV_ID_25GE_RDMA:
> + num = 2;
> + break;
> + case HNS3_DEV_ID_100G_RDMA_MACSEC:
> + case HNS3_DEV_ID_200G_RDMA:
> + num = 1;
> + break;
> + default:
> + num = 0;
> + break;
> + }
> +
> + return num;
> +}
> +
> +static int
> +hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
> + uint16_t device_id)
> +{
> + switch (device_id) {
> + case HNS3_DEV_ID_25GE:
> + /* fallthrough */
> + case HNS3_DEV_ID_25GE_RDMA:
> + speed_fec_capa[0].speed = speed_fec_capa_tbl[1].speed;
> + speed_fec_capa[0].capa = speed_fec_capa_tbl[1].capa;
> +
> + /* In HNS3 device, the 25G NIC is compatible with 10G rate */
> + speed_fec_capa[1].speed = speed_fec_capa_tbl[0].speed;
> + speed_fec_capa[1].capa = speed_fec_capa_tbl[0].capa;
> + break;
> + case HNS3_DEV_ID_100G_RDMA_MACSEC:
> + speed_fec_capa[0].speed = speed_fec_capa_tbl[4].speed;
> + speed_fec_capa[0].capa = speed_fec_capa_tbl[4].capa;
> + break;
> + case HNS3_DEV_ID_200G_RDMA:
> + speed_fec_capa[0].speed = speed_fec_capa_tbl[5].speed;
> + speed_fec_capa[0].capa = speed_fec_capa_tbl[5].capa;
> + break;
> + default:
> + return -ENOTSUP;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +hns3_fec_get_capability(struct rte_eth_dev *dev,
> + struct rte_eth_fec_capa *speed_fec_capa,
> + unsigned int num)
> +{
> + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> + uint16_t device_id = pci_dev->id.device_id;
> + unsigned int capa_num;
> + int ret;
> +
> + capa_num = hns3_get_speed_capa_num(device_id);
> + if (capa_num == 0) {
> + hns3_err(hw, "device(0x%x) is not supported by hns3 PMD",
> + device_id);
> + return -ENOTSUP;
> + }
> +
> + if (speed_fec_capa == NULL || num < capa_num)
> + return capa_num;
> +
> + ret = hns3_get_speed_fec_capa(speed_fec_capa, device_id);
> + if (ret)
> + return -ENOTSUP;
> +
> + return capa_num;
> +}
> +
> +static int
> +get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
> +{
> + struct hns3_config_fec_cmd *req;
> + struct hns3_cmd_desc desc;
> + int ret;
> +
> + /*
> + * CMD(HNS3_OPC_CONFIG_FEC_MODE) read is not supported
> + * in device of link speed
> + * below 10 Gbps.
> + */
> + if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
> + *state = 0;
> + return 0;
> + }
> +
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, true);
> + req = (struct hns3_config_fec_cmd *)desc.data;
> + ret = hns3_cmd_send(hw, &desc, 1);
> + if (ret) {
> + hns3_err(hw, "get current fec auto state failed, ret = %d",
> + ret);
> + return ret;
> + }
> +
> + *state = req->fec_mode & (1U << HNS3_MAC_CFG_FEC_AUTO_EN_B);
> + return 0;
> +}
> +
> +static int
> +hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
> +{
> + struct hns3_sfp_info_cmd *resp;
> + uint32_t tmp_fec_capa;
> + uint8_t auto_state;
> + struct hns3_cmd_desc desc;
> + int ret;
> +
> + /*
> + * If link is down and AUTO is enabled, AUTO is returned, otherwise,
> + * configured FEC mode is returned.
> + * If link is up, current FEC mode is returned.
> + */
> + if (hw->mac.link_status == ETH_LINK_DOWN) {
> + ret = get_current_fec_auto_state(hw, &auto_state);
> + if (ret)
> + return ret;
> +
> + if (auto_state == 0x1) {
> + *fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(AUTO);
> + return 0;
> + }
> + }
> +
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_GET_SFP_INFO, true);
> + resp = (struct hns3_sfp_info_cmd *)desc.data;
> + resp->query_type = HNS3_ACTIVE_QUERY;
> +
> + ret = hns3_cmd_send(hw, &desc, 1);
> + if (ret == -EOPNOTSUPP) {
> + hns3_err(hw, "IMP do not support get FEC, ret = %d", ret);
> + return ret;
> + } else if (ret) {
> + hns3_err(hw, "get FEC failed, ret = %d", ret);
> + return ret;
> + }
> +
> + /*
> + * FEC mode order defined in hns3 hardware is inconsistend with
> + * that defined in the ethdev library. So the sequence needs
> + * to be converted.
> + */
> + switch (resp->active_fec) {
> + case HNS3_HW_FEC_MODE_NOFEC:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> + break;
> + case HNS3_HW_FEC_MODE_BASER:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
> + break;
> + case HNS3_HW_FEC_MODE_RS:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(RS);
> + break;
> + default:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> + break;
> + }
> +
> + *fec_capa = tmp_fec_capa;
> + return 0;
> +}
> +
> +static int
> +hns3_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
> +{
> + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> + return hns3_fec_get_internal(hw, fec_capa);
> +}
> +
> +static int
> +hns3_set_fec_hw(struct hns3_hw *hw, uint32_t mode)
> +{
> + struct hns3_config_fec_cmd *req;
> + struct hns3_cmd_desc desc;
> + int ret;
> +
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, false);
> +
> + req = (struct hns3_config_fec_cmd *)desc.data;
> + switch (mode) {
> + case RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC):
> + hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> + HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_OFF);
> + break;
> + case RTE_ETH_FEC_MODE_CAPA_MASK(BASER):
> + hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> + HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_BASER);
> + break;
> + case RTE_ETH_FEC_MODE_CAPA_MASK(RS):
> + hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> + HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_RS);
> + break;
> + case RTE_ETH_FEC_MODE_CAPA_MASK(AUTO):
> + hns3_set_bit(req->fec_mode, HNS3_MAC_CFG_FEC_AUTO_EN_B, 1);
> + break;
> + default:
> + return 0;
> + }
> + ret = hns3_cmd_send(hw, &desc, 1);
> + if (ret)
> + hns3_err(hw, "set fec mode failed, ret = %d", ret);
> +
> + return ret;
> +}
> +
> +static uint32_t
> +get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
> +{
> + struct hns3_mac *mac = &hw->mac;
> + uint32_t cur_capa;
> +
> + switch (mac->link_speed) {
> + case ETH_SPEED_NUM_10G:
> + cur_capa = fec_capa[1].capa;
> + break;
> + case ETH_SPEED_NUM_25G:
> + case ETH_SPEED_NUM_100G:
> + case ETH_SPEED_NUM_200G:
> + cur_capa = fec_capa[0].capa;
> + break;
> + default:
> + cur_capa = 0;
> + break;
> + }
> +
> + return cur_capa;
> +}
> +
> +static bool
> +is_fec_mode_one_bit_set(uint32_t mode)
> +{
> + int cnt = 0;
> + uint8_t i;
> +
> + for (i = 0; i < sizeof(mode); i++)
> + if (mode >> i & 0x1)
> + cnt++;
> +
> + return cnt == 1 ? true : false;
> +}
> +
> +static int
> +hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
> +{
> +#define FEC_CAPA_NUM 2
> + struct hns3_adapter *hns = dev->data->dev_private;
> + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(hns);
> + struct hns3_pf *pf = &hns->pf;
> +
> + struct rte_eth_fec_capa fec_capa[FEC_CAPA_NUM];
> + uint32_t cur_capa;
> + uint32_t num = FEC_CAPA_NUM;
> + int ret;
> +
> + ret = hns3_fec_get_capability(dev, fec_capa, num);
> + if (ret < 0)
> + return ret;
> +
> + /* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
> + if (!is_fec_mode_one_bit_set(mode)) {
> + hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
> + "FEC mode should be only one bit set", mode);
> + return -EINVAL;
> + }
> +
> + /*
> + * Check whether the configured mode is within the FEC capability.
> + * If not, the configured mode will not be supported.
> + */
> + cur_capa = get_current_speed_fec_cap(hw, fec_capa);
> + if (!(cur_capa & mode)) {
> + hns3_err(hw, "unsupported FEC mode = 0x%x", mode);
> + return -EINVAL;
> + }
> +
> + rte_spinlock_lock(&hw->lock);
> + ret = hns3_set_fec_hw(hw, mode);
> + if (ret) {
> + rte_spinlock_unlock(&hw->lock);
> + return ret;
> + }
> +
> + pf->fec_mode = mode;
> + rte_spinlock_unlock(&hw->lock);
> +
> + return 0;
> +}
> +
> +static int
> +hns3_restore_fec(struct hns3_hw *hw)
> +{
> + struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
> + struct hns3_pf *pf = &hns->pf;
> + uint32_t mode = pf->fec_mode;
> + int ret;
> +=======
> + else if (hns3_atomic_test_bit(HNS3_FUNC_RESET, levels))
> + reset_level = HNS3_FUNC_RESET;
> + else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
> + reset_level = HNS3_FLR_RESET;
> +
> + if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
> + return HNS3_NONE_RESET;
> +
> + return reset_level;
> +}
> +
> +static void
> +hns3_record_imp_error(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + uint32_t reg_val;
> +
> + reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> + if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B)) {
> + hns3_warn(hw, "Detected IMP RD poison!");
> + hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B, 0);
> + hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> + }
> +
> + if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B)) {
> + hns3_warn(hw, "Detected IMP CMDQ error!");
> + hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B, 0);
> + hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> + }
> +}
> +
> +static int
> +hns3_prepare_reset(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + uint32_t reg_val;
> + int ret;
> +
> + switch (hw->reset.level) {
> + case HNS3_FUNC_RESET:
> + ret = hns3_func_reset_cmd(hw, HNS3_PF_FUNC_ID);
> + if (ret)
> + return ret;
> +
> + /*
> + * After performaning pf reset, it is not necessary to do the
> + * mailbox handling or send any command to firmware, because
> + * any mailbox handling or command to firmware is only valid
> + * after hns3_cmd_init is called.
> + */
> + __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
> + hw->reset.stats.request_cnt++;
> + break;
> + case HNS3_IMP_RESET:
> + hns3_record_imp_error(hns);
> + reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> + hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val |
> + BIT(HNS3_VECTOR0_IMP_RESET_INT_B));
> + break;
> + default:
> + break;
> + }
> + return 0;
> +}
> +
> +static int
> +hns3_set_rst_done(struct hns3_hw *hw)
> +{
> + struct hns3_pf_rst_done_cmd *req;
> + struct hns3_cmd_desc desc;
> +
> + req = (struct hns3_pf_rst_done_cmd *)desc.data;
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_PF_RST_DONE, false);
> + req->pf_rst_done |= HNS3_PF_RESET_DONE_BIT;
> + return hns3_cmd_send(hw, &desc, 1);
> +}
> +
> +static int
> +hns3_stop_service(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + struct rte_eth_dev *eth_dev;
> +
> + eth_dev = &rte_eth_devices[hw->data->port_id];
> + hw->mac.link_status = ETH_LINK_DOWN;
> + if (hw->adapter_state == HNS3_NIC_STARTED) {
> + rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
> + hns3_update_linkstatus_and_event(hw, false);
> + }
> +
> + hns3_set_rxtx_function(eth_dev);
> + rte_wmb();
> + /* Disable datapath on secondary process. */
> + hns3_mp_req_stop_rxtx(eth_dev);
> + rte_delay_ms(hw->cfg_max_queues);
> +
> + rte_spinlock_lock(&hw->lock);
> + if (hns->hw.adapter_state == HNS3_NIC_STARTED ||
> + hw->adapter_state == HNS3_NIC_STOPPING) {
> + hns3_enable_all_queues(hw, false);
> + hns3_do_stop(hns);
> + hw->reset.mbuf_deferred_free = true;
> + } else
> + hw->reset.mbuf_deferred_free = false;
> +
> + /*
> + * It is cumbersome for hardware to pick-and-choose entries for deletion
> + * from table space. Hence, for function reset software intervention is
> + * required to delete the entries
> + */
> + if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0)
> + hns3_configure_all_mc_mac_addr(hns, true);
> + rte_spinlock_unlock(&hw->lock);
> +
> + return 0;
> +}
> +
> +static int
> +hns3_start_service(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + struct rte_eth_dev *eth_dev;
> +
> + if (hw->reset.level == HNS3_IMP_RESET ||
> + hw->reset.level == HNS3_GLOBAL_RESET)
> + hns3_set_rst_done(hw);
> + eth_dev = &rte_eth_devices[hw->data->port_id];
> + hns3_set_rxtx_function(eth_dev);
> + hns3_mp_req_start_rxtx(eth_dev);
> + if (hw->adapter_state == HNS3_NIC_STARTED) {
> + /*
> + * This API parent function already hold the hns3_hw.lock, the
> + * hns3_service_handler may report lse, in bonding application
> + * it will call driver's ops which may acquire the hns3_hw.lock
> + * again, thus lead to deadlock.
> + * We defer calls hns3_service_handler to avoid the deadlock.
> + */
> + rte_eal_alarm_set(HNS3_SERVICE_QUICK_INTERVAL,
> + hns3_service_handler, eth_dev);
> +
> + /* Enable interrupt of all rx queues before enabling queues */
> + hns3_dev_all_rx_queue_intr_enable(hw, true);
> + /*
> + * Enable state of each rxq and txq will be recovered after
> + * reset, so we need to restore them before enable all tqps;
> + */
> + hns3_restore_tqp_enable_state(hw);
> + /*
> + * When finished the initialization, enable queues to receive
> + * and transmit packets.
> + */
> + hns3_enable_all_queues(hw, true);
> + }
> +
> + return 0;
> +}
> +
> +static int
> +hns3_restore_conf(struct hns3_adapter *hns)
> +{
> + struct hns3_hw *hw = &hns->hw;
> + int ret;
> +
> + ret = hns3_configure_all_mac_addr(hns, false);
> + if (ret)
> + return ret;
> +
> + ret = hns3_configure_all_mc_mac_addr(hns, false);
> + if (ret)
> + goto err_mc_mac;
> +
> + ret = hns3_dev_promisc_restore(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_vlan_table(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_vlan_conf(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_all_fdir_filter(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_ptp(hns);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_rx_interrupt(hw);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_gro_conf(hw);
> + if (ret)
> + goto err_promisc;
> +
> + ret = hns3_restore_fec(hw);
> + if (ret)
> + goto err_promisc;
> +
> + if (hns->hw.adapter_state == HNS3_NIC_STARTED) {
> + ret = hns3_do_start(hns, false);
> + if (ret)
> + goto err_promisc;
> + hns3_info(hw, "hns3 dev restart successful!");
> + } else if (hw->adapter_state == HNS3_NIC_STOPPING)
> + hw->adapter_state = HNS3_NIC_CONFIGURED;
> + return 0;
> +
> +err_promisc:
> + hns3_configure_all_mc_mac_addr(hns, true);
> +err_mc_mac:
> + hns3_configure_all_mac_addr(hns, true);
> + return ret;
> +}
> +
> +static void
> +hns3_reset_service(void *param)
> +{
> + struct hns3_adapter *hns = (struct hns3_adapter *)param;
> + struct hns3_hw *hw = &hns->hw;
> + enum hns3_reset_level reset_level;
> + struct timeval tv_delta;
> + struct timeval tv_start;
> + struct timeval tv;
> + uint64_t msec;
> + int ret;
> +
> + /*
> + * The interrupt is not triggered within the delay time.
> + * The interrupt may have been lost. It is necessary to handle
> + * the interrupt to recover from the error.
> + */
> + if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) ==
> + SCHEDULE_DEFERRED) {
> + __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED,
> + __ATOMIC_RELAXED);
> + hns3_err(hw, "Handling interrupts in delayed tasks");
> + hns3_interrupt_handler(&rte_eth_devices[hw->data->port_id]);
> + reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> + if (reset_level == HNS3_NONE_RESET) {
> + hns3_err(hw, "No reset level is set, try IMP reset");
> + hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
> + }
> + }
> + __atomic_store_n(&hw->reset.schedule, SCHEDULE_NONE, __ATOMIC_RELAXED);
> +
> + /*
> + * Check if there is any ongoing reset in the hardware. This status can
> + * be checked from reset_pending. If there is then, we need to wait for
> + * hardware to complete reset.
> + * a. If we are able to figure out in reasonable time that hardware
> + * has fully resetted then, we can proceed with driver, client
> + * reset.
> + * b. else, we can come back later to check this status so re-sched
> + * now.
> + */
> + reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> + if (reset_level != HNS3_NONE_RESET) {
> + hns3_clock_gettime(&tv_start);
> + ret = hns3_reset_process(hns, reset_level);
> + hns3_clock_gettime(&tv);
> + timersub(&tv, &tv_start, &tv_delta);
> + msec = hns3_clock_calctime_ms(&tv_delta);
> + if (msec > HNS3_RESET_PROCESS_MS)
> + hns3_err(hw, "%d handle long time delta %" PRIu64
> + " ms time=%ld.%.6ld",
> + hw->reset.level, msec,
> + tv.tv_sec, tv.tv_usec);
> + if (ret == -EAGAIN)
> + return;
> + }
> +
> + /* Check if we got any *new* reset requests to be honored */
> + reset_level = hns3_get_reset_level(hns, &hw->reset.request);
> + if (reset_level != HNS3_NONE_RESET)
> + hns3_msix_process(hns, reset_level);
> +}
> +
> +static unsigned int
> +hns3_get_speed_capa_num(uint16_t device_id)
> +{
> + unsigned int num;
> +
> + switch (device_id) {
> + case HNS3_DEV_ID_25GE:
> + case HNS3_DEV_ID_25GE_RDMA:
> + num = 2;
> + break;
> + case HNS3_DEV_ID_100G_RDMA_MACSEC:
> + case HNS3_DEV_ID_200G_RDMA:
> + num = 1;
> + break;
> + default:
> + num = 0;
> + break;
> + }
> +
> + return num;
> +}
> +
> +static int
> +hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
> + uint16_t device_id)
> +{
> + switch (device_id) {
> + case HNS3_DEV_ID_25GE:
> + /* fallthrough */
> + case HNS3_DEV_ID_25GE_RDMA:
> + speed_fec_capa[0].speed = speed_fec_capa_tbl[1].speed;
> + speed_fec_capa[0].capa = speed_fec_capa_tbl[1].capa;
> +
> + /* In HNS3 device, the 25G NIC is compatible with 10G rate */
> + speed_fec_capa[1].speed = speed_fec_capa_tbl[0].speed;
> + speed_fec_capa[1].capa = speed_fec_capa_tbl[0].capa;
> + break;
> + case HNS3_DEV_ID_100G_RDMA_MACSEC:
> + speed_fec_capa[0].speed = speed_fec_capa_tbl[4].speed;
> + speed_fec_capa[0].capa = speed_fec_capa_tbl[4].capa;
> + break;
> + case HNS3_DEV_ID_200G_RDMA:
> + speed_fec_capa[0].speed = speed_fec_capa_tbl[5].speed;
> + speed_fec_capa[0].capa = speed_fec_capa_tbl[5].capa;
> + break;
> + default:
> + return -ENOTSUP;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +hns3_fec_get_capability(struct rte_eth_dev *dev,
> + struct rte_eth_fec_capa *speed_fec_capa,
> + unsigned int num)
> +{
> + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> + uint16_t device_id = pci_dev->id.device_id;
> + unsigned int capa_num;
> + int ret;
> +
> + capa_num = hns3_get_speed_capa_num(device_id);
> + if (capa_num == 0) {
> + hns3_err(hw, "device(0x%x) is not supported by hns3 PMD",
> + device_id);
> + return -ENOTSUP;
> + }
> +
> + if (speed_fec_capa == NULL || num < capa_num)
> + return capa_num;
> +
> + ret = hns3_get_speed_fec_capa(speed_fec_capa, device_id);
> + if (ret)
> + return -ENOTSUP;
> +
> + return capa_num;
> +}
> +
> +static int
> +get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
> +{
> + struct hns3_config_fec_cmd *req;
> + struct hns3_cmd_desc desc;
> + int ret;
> +
> + /*
> + * CMD(HNS3_OPC_CONFIG_FEC_MODE) read is not supported
> + * in device of link speed
> + * below 10 Gbps.
> + */
> + if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
> + *state = 0;
> + return 0;
> + }
> +
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, true);
> + req = (struct hns3_config_fec_cmd *)desc.data;
> + ret = hns3_cmd_send(hw, &desc, 1);
> + if (ret) {
> + hns3_err(hw, "get current fec auto state failed, ret = %d",
> + ret);
> + return ret;
> + }
> +
> + *state = req->fec_mode & (1U << HNS3_MAC_CFG_FEC_AUTO_EN_B);
> + return 0;
> +}
> +
> +static int
> +hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
> +{
> + struct hns3_sfp_info_cmd *resp;
> + uint32_t tmp_fec_capa;
> + uint8_t auto_state;
> + struct hns3_cmd_desc desc;
> + int ret;
> +
> + /*
> + * If link is down and AUTO is enabled, AUTO is returned, otherwise,
> + * configured FEC mode is returned.
> + * If link is up, current FEC mode is returned.
> + */
> + if (hw->mac.link_status == ETH_LINK_DOWN) {
> + ret = get_current_fec_auto_state(hw, &auto_state);
> + if (ret)
> + return ret;
> +
> + if (auto_state == 0x1) {
> + *fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(AUTO);
> + return 0;
> + }
> + }
> +
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_GET_SFP_INFO, true);
> + resp = (struct hns3_sfp_info_cmd *)desc.data;
> + resp->query_type = HNS3_ACTIVE_QUERY;
> +
> + ret = hns3_cmd_send(hw, &desc, 1);
> + if (ret == -EOPNOTSUPP) {
> + hns3_err(hw, "IMP do not support get FEC, ret = %d", ret);
> + return ret;
> + } else if (ret) {
> + hns3_err(hw, "get FEC failed, ret = %d", ret);
> + return ret;
> + }
> +
> + /*
> + * FEC mode order defined in hns3 hardware is inconsistend with
> + * that defined in the ethdev library. So the sequence needs
> + * to be converted.
> + */
> + switch (resp->active_fec) {
> + case HNS3_HW_FEC_MODE_NOFEC:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> + break;
> + case HNS3_HW_FEC_MODE_BASER:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
> + break;
> + case HNS3_HW_FEC_MODE_RS:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(RS);
> + break;
> + default:
> + tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> + break;
> + }
> +
> + *fec_capa = tmp_fec_capa;
> + return 0;
> +}
> +
> +static int
> +hns3_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
> +{
> + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +
> + return hns3_fec_get_internal(hw, fec_capa);
> +}
> +
> +static int
> +hns3_set_fec_hw(struct hns3_hw *hw, uint32_t mode)
> +{
> + struct hns3_config_fec_cmd *req;
> + struct hns3_cmd_desc desc;
> + int ret;
> +
> + hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, false);
> +
> + req = (struct hns3_config_fec_cmd *)desc.data;
> + switch (mode) {
> + case RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC):
> + hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> + HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_OFF);
> + break;
> + case RTE_ETH_FEC_MODE_CAPA_MASK(BASER):
> + hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> + HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_BASER);
> + break;
> + case RTE_ETH_FEC_MODE_CAPA_MASK(RS):
> + hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> + HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_RS);
> + break;
> + case RTE_ETH_FEC_MODE_CAPA_MASK(AUTO):
> + hns3_set_bit(req->fec_mode, HNS3_MAC_CFG_FEC_AUTO_EN_B, 1);
> + break;
> + default:
> + return 0;
> + }
> + ret = hns3_cmd_send(hw, &desc, 1);
> + if (ret)
> + hns3_err(hw, "set fec mode failed, ret = %d", ret);
> +
> + return ret;
> +}
> +
> +static uint32_t
> +get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
> +{
> + struct hns3_mac *mac = &hw->mac;
> + uint32_t cur_capa;
> +
> + switch (mac->link_speed) {
> + case ETH_SPEED_NUM_10G:
> + cur_capa = fec_capa[1].capa;
> + break;
> + case ETH_SPEED_NUM_25G:
> + case ETH_SPEED_NUM_100G:
> + case ETH_SPEED_NUM_200G:
> + cur_capa = fec_capa[0].capa;
> + break;
> + default:
> + cur_capa = 0;
> + break;
> + }
> +
> + return cur_capa;
> +}
> +
> +static bool
> +is_fec_mode_one_bit_set(uint32_t mode)
> +{
> + int cnt = 0;
> + uint8_t i;
> +
> + for (i = 0; i < sizeof(mode); i++)
> + if (mode >> i & 0x1)
> + cnt++;
> +
> + return cnt == 1 ? true : false;
> +}
> +
> +static int
> +hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
> +{
> +#define FEC_CAPA_NUM 2
> + struct hns3_adapter *hns = dev->data->dev_private;
> + struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(hns);
> + struct hns3_pf *pf = &hns->pf;
> +
> + struct rte_eth_fec_capa fec_capa[FEC_CAPA_NUM];
> + uint32_t cur_capa;
> + uint32_t num = FEC_CAPA_NUM;
> + int ret;
> +
> + ret = hns3_fec_get_capability(dev, fec_capa, num);
> + if (ret < 0)
> + return ret;
> +
> + /* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
> + if (!is_fec_mode_one_bit_set(mode)) {
> + hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
> + "FEC mode should be only one bit set", mode);
> + return -EINVAL;
> + }
> +
> + /*
> + * Check whether the configured mode is within the FEC capability.
> + * If not, the configured mode will not be supported.
> + */
> + cur_capa = get_current_speed_fec_cap(hw, fec_capa);
> + if (!(cur_capa & mode)) {
> + hns3_err(hw, "unsupported FEC mode = 0x%x", mode);
> + return -EINVAL;
> + }
> +
> + rte_spinlock_lock(&hw->lock);
> + ret = hns3_set_fec_hw(hw, mode);
> + if (ret) {
> + rte_spinlock_unlock(&hw->lock);
> + return ret;
> + }
> +
> + pf->fec_mode = mode;
> + rte_spinlock_unlock(&hw->lock);
> +
> + return 0;
> +}
> +
> +static int
> +hns3_restore_fec(struct hns3_hw *hw)
> +{
> + struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
> + struct hns3_pf *pf = &hns->pf;
> + uint32_t mode = pf->fec_mode;
> + int ret;
> +>>>>>>> net/hns3: fix delay for waiting to stop Rx/Tx
>
> if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
> return HNS3_NONE_RESET;
> @@ -5201,7 +6381,7 @@ hns3_stop_service(struct hns3_adapter *hns)
> rte_wmb();
> /* Disable datapath on secondary process. */
> hns3_mp_req_stop_rxtx(eth_dev);
> - rte_delay_ms(hw->tqps_num);
> + rte_delay_ms(hw->cfg_max_queues);
>
> rte_spinlock_lock(&hw->lock);
> if (hns->hw.adapter_state == HNS3_NIC_STARTED ||
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index a7b6188eea..eb3edf3464 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -1631,7 +1631,7 @@ hns3vf_dev_stop(struct rte_eth_dev *dev)
> /* Disable datapath on secondary process. */
> hns3_mp_req_stop_rxtx(dev);
> /* Prevent crashes when queues are still in use. */
> - rte_delay_ms(hw->tqps_num);
> + rte_delay_ms(hw->cfg_max_queues);
>
> rte_spinlock_lock(&hw->lock);
> if (rte_atomic16_read(&hw->reset.resetting) == 0) {
> @@ -2005,7 +2005,7 @@ hns3vf_stop_service(struct hns3_adapter *hns)
> rte_wmb();
> /* Disable datapath on secondary process. */
> hns3_mp_req_stop_rxtx(eth_dev);
> - rte_delay_ms(hw->tqps_num);
> + rte_delay_ms(hw->cfg_max_queues);
>
> rte_spinlock_lock(&hw->lock);
> if (hw->adapter_state == HNS3_NIC_STARTED ||
> --
> 2.32.0
>
> ---
> Diff of the applied patch vs upstream commit (please double-check if non-empty:
> ---
> --- - 2021-08-10 15:11:14.264171339 +0200
> +++ 0031-net-hns3-fix-delay-for-waiting-to-stop-Rx-Tx.patch 2021-08-10 15:11:12.962637696 +0200
> @@ -1 +1 @@
> -From 4d8cce267840556cec8483c61f8cfbf25873496d Mon Sep 17 00:00:00 2001
> +From 249c35152a9bcd6d4c4b52776602750552dcf294 Mon Sep 17 00:00:00 2001
> @@ -5,0 +6,2 @@
> +[ upstream commit 4d8cce267840556cec8483c61f8cfbf25873496d ]
> +
> @@ -14 +15,0 @@
> -Cc: stable@dpdk.org
> @@ -19,3 +20,3 @@
> - drivers/net/hns3/hns3_ethdev.c | 4 ++--
> - drivers/net/hns3/hns3_ethdev_vf.c | 4 ++--
> - 2 files changed, 4 insertions(+), 4 deletions(-)
> + drivers/net/hns3/hns3_ethdev.c | 1184 ++++++++++++++++++++++++++++-
> + drivers/net/hns3/hns3_ethdev_vf.c | 4 +-
> + 2 files changed, 1184 insertions(+), 4 deletions(-)
> @@ -24 +25 @@
> -index 20491305e7..dff265828e 100644
> +index ac82e0b5ef..e1bc55682c 100644
> @@ -27 +28 @@
> -@@ -5895,7 +5895,7 @@ hns3_dev_stop(struct rte_eth_dev *dev)
> +@@ -4742,7 +4742,7 @@ hns3_dev_stop(struct rte_eth_dev *dev)
> @@ -35,2 +36,1193 @@
> - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) {
> -@@ -6511,7 +6511,7 @@ hns3_stop_service(struct hns3_adapter *hns)
> + if (rte_atomic16_read(&hw->reset.resetting) == 0) {
> +@@ -5130,10 +5130,1190 @@ hns3_get_reset_level(struct hns3_adapter *hns, uint64_t *levels)
> + reset_level = HNS3_IMP_RESET;
> + else if (hns3_atomic_test_bit(HNS3_GLOBAL_RESET, levels))
> + reset_level = HNS3_GLOBAL_RESET;
> ++<<<<<<< HEAD
> + else if (hns3_atomic_test_bit(HNS3_FUNC_RESET, levels))
> + reset_level = HNS3_FUNC_RESET;
> + else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
> + reset_level = HNS3_FLR_RESET;
> ++||||||| constructed merge base
> ++ else if (hns3_atomic_test_bit(HNS3_FUNC_RESET, levels))
> ++ reset_level = HNS3_FUNC_RESET;
> ++ else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
> ++ reset_level = HNS3_FLR_RESET;
> ++
> ++ if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
> ++ return HNS3_NONE_RESET;
> ++
> ++ return reset_level;
> ++}
> ++
> ++static void
> ++hns3_record_imp_error(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ uint32_t reg_val;
> ++
> ++ reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> ++ if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B)) {
> ++ hns3_warn(hw, "Detected IMP RD poison!");
> ++ hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B, 0);
> ++ hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> ++ }
> ++
> ++ if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B)) {
> ++ hns3_warn(hw, "Detected IMP CMDQ error!");
> ++ hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B, 0);
> ++ hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> ++ }
> ++}
> ++
> ++static int
> ++hns3_prepare_reset(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ uint32_t reg_val;
> ++ int ret;
> ++
> ++ switch (hw->reset.level) {
> ++ case HNS3_FUNC_RESET:
> ++ ret = hns3_func_reset_cmd(hw, HNS3_PF_FUNC_ID);
> ++ if (ret)
> ++ return ret;
> ++
> ++ /*
> ++ * After performaning pf reset, it is not necessary to do the
> ++ * mailbox handling or send any command to firmware, because
> ++ * any mailbox handling or command to firmware is only valid
> ++ * after hns3_cmd_init is called.
> ++ */
> ++ __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
> ++ hw->reset.stats.request_cnt++;
> ++ break;
> ++ case HNS3_IMP_RESET:
> ++ hns3_record_imp_error(hns);
> ++ reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> ++ hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val |
> ++ BIT(HNS3_VECTOR0_IMP_RESET_INT_B));
> ++ break;
> ++ default:
> ++ break;
> ++ }
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_set_rst_done(struct hns3_hw *hw)
> ++{
> ++ struct hns3_pf_rst_done_cmd *req;
> ++ struct hns3_cmd_desc desc;
> ++
> ++ req = (struct hns3_pf_rst_done_cmd *)desc.data;
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_PF_RST_DONE, false);
> ++ req->pf_rst_done |= HNS3_PF_RESET_DONE_BIT;
> ++ return hns3_cmd_send(hw, &desc, 1);
> ++}
> ++
> ++static int
> ++hns3_stop_service(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ struct rte_eth_dev *eth_dev;
> ++
> ++ eth_dev = &rte_eth_devices[hw->data->port_id];
> ++ hw->mac.link_status = ETH_LINK_DOWN;
> ++ if (hw->adapter_state == HNS3_NIC_STARTED) {
> ++ rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
> ++ hns3_update_linkstatus_and_event(hw, false);
> ++ }
> ++
> ++ hns3_set_rxtx_function(eth_dev);
> ++ rte_wmb();
> ++ /* Disable datapath on secondary process. */
> ++ hns3_mp_req_stop_rxtx(eth_dev);
> ++ rte_delay_ms(hw->cfg_max_queues);
> ++
> ++ rte_spinlock_lock(&hw->lock);
> ++ if (hns->hw.adapter_state == HNS3_NIC_STARTED ||
> ++ hw->adapter_state == HNS3_NIC_STOPPING) {
> ++ hns3_enable_all_queues(hw, false);
> ++ hns3_do_stop(hns);
> ++ hw->reset.mbuf_deferred_free = true;
> ++ } else
> ++ hw->reset.mbuf_deferred_free = false;
> ++
> ++ /*
> ++ * It is cumbersome for hardware to pick-and-choose entries for deletion
> ++ * from table space. Hence, for function reset software intervention is
> ++ * required to delete the entries
> ++ */
> ++ if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0)
> ++ hns3_configure_all_mc_mac_addr(hns, true);
> ++ rte_spinlock_unlock(&hw->lock);
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_start_service(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ struct rte_eth_dev *eth_dev;
> ++
> ++ if (hw->reset.level == HNS3_IMP_RESET ||
> ++ hw->reset.level == HNS3_GLOBAL_RESET)
> ++ hns3_set_rst_done(hw);
> ++ eth_dev = &rte_eth_devices[hw->data->port_id];
> ++ hns3_set_rxtx_function(eth_dev);
> ++ hns3_mp_req_start_rxtx(eth_dev);
> ++ if (hw->adapter_state == HNS3_NIC_STARTED) {
> ++ /*
> ++ * This API parent function already hold the hns3_hw.lock, the
> ++ * hns3_service_handler may report lse, in bonding application
> ++ * it will call driver's ops which may acquire the hns3_hw.lock
> ++ * again, thus lead to deadlock.
> ++ * We defer calls hns3_service_handler to avoid the deadlock.
> ++ */
> ++ rte_eal_alarm_set(HNS3_SERVICE_QUICK_INTERVAL,
> ++ hns3_service_handler, eth_dev);
> ++
> ++ /* Enable interrupt of all rx queues before enabling queues */
> ++ hns3_dev_all_rx_queue_intr_enable(hw, true);
> ++ /*
> ++ * Enable state of each rxq and txq will be recovered after
> ++ * reset, so we need to restore them before enable all tqps;
> ++ */
> ++ hns3_restore_tqp_enable_state(hw);
> ++ /*
> ++ * When finished the initialization, enable queues to receive
> ++ * and transmit packets.
> ++ */
> ++ hns3_enable_all_queues(hw, true);
> ++ }
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_restore_conf(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ int ret;
> ++
> ++ ret = hns3_configure_all_mac_addr(hns, false);
> ++ if (ret)
> ++ return ret;
> ++
> ++ ret = hns3_configure_all_mc_mac_addr(hns, false);
> ++ if (ret)
> ++ goto err_mc_mac;
> ++
> ++ ret = hns3_dev_promisc_restore(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_vlan_table(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_vlan_conf(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_all_fdir_filter(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_ptp(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_rx_interrupt(hw);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_gro_conf(hw);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_fec(hw);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ if (hns->hw.adapter_state == HNS3_NIC_STARTED) {
> ++ ret = hns3_do_start(hns, false);
> ++ if (ret)
> ++ goto err_promisc;
> ++ hns3_info(hw, "hns3 dev restart successful!");
> ++ } else if (hw->adapter_state == HNS3_NIC_STOPPING)
> ++ hw->adapter_state = HNS3_NIC_CONFIGURED;
> ++ return 0;
> ++
> ++err_promisc:
> ++ hns3_configure_all_mc_mac_addr(hns, true);
> ++err_mc_mac:
> ++ hns3_configure_all_mac_addr(hns, true);
> ++ return ret;
> ++}
> ++
> ++static void
> ++hns3_reset_service(void *param)
> ++{
> ++ struct hns3_adapter *hns = (struct hns3_adapter *)param;
> ++ struct hns3_hw *hw = &hns->hw;
> ++ enum hns3_reset_level reset_level;
> ++ struct timeval tv_delta;
> ++ struct timeval tv_start;
> ++ struct timeval tv;
> ++ uint64_t msec;
> ++ int ret;
> ++
> ++ /*
> ++ * The interrupt is not triggered within the delay time.
> ++ * The interrupt may have been lost. It is necessary to handle
> ++ * the interrupt to recover from the error.
> ++ */
> ++ if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) ==
> ++ SCHEDULE_DEFERRED) {
> ++ __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED,
> ++ __ATOMIC_RELAXED);
> ++ hns3_err(hw, "Handling interrupts in delayed tasks");
> ++ hns3_interrupt_handler(&rte_eth_devices[hw->data->port_id]);
> ++ reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> ++ if (reset_level == HNS3_NONE_RESET) {
> ++ hns3_err(hw, "No reset level is set, try IMP reset");
> ++ hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
> ++ }
> ++ }
> ++ __atomic_store_n(&hw->reset.schedule, SCHEDULE_NONE, __ATOMIC_RELAXED);
> ++
> ++ /*
> ++ * Check if there is any ongoing reset in the hardware. This status can
> ++ * be checked from reset_pending. If there is then, we need to wait for
> ++ * hardware to complete reset.
> ++ * a. If we are able to figure out in reasonable time that hardware
> ++ * has fully resetted then, we can proceed with driver, client
> ++ * reset.
> ++ * b. else, we can come back later to check this status so re-sched
> ++ * now.
> ++ */
> ++ reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> ++ if (reset_level != HNS3_NONE_RESET) {
> ++ hns3_clock_gettime(&tv_start);
> ++ ret = hns3_reset_process(hns, reset_level);
> ++ hns3_clock_gettime(&tv);
> ++ timersub(&tv, &tv_start, &tv_delta);
> ++ msec = hns3_clock_calctime_ms(&tv_delta);
> ++ if (msec > HNS3_RESET_PROCESS_MS)
> ++ hns3_err(hw, "%d handle long time delta %" PRIu64
> ++ " ms time=%ld.%.6ld",
> ++ hw->reset.level, msec,
> ++ tv.tv_sec, tv.tv_usec);
> ++ if (ret == -EAGAIN)
> ++ return;
> ++ }
> ++
> ++ /* Check if we got any *new* reset requests to be honored */
> ++ reset_level = hns3_get_reset_level(hns, &hw->reset.request);
> ++ if (reset_level != HNS3_NONE_RESET)
> ++ hns3_msix_process(hns, reset_level);
> ++}
> ++
> ++static unsigned int
> ++hns3_get_speed_capa_num(uint16_t device_id)
> ++{
> ++ unsigned int num;
> ++
> ++ switch (device_id) {
> ++ case HNS3_DEV_ID_25GE:
> ++ case HNS3_DEV_ID_25GE_RDMA:
> ++ num = 2;
> ++ break;
> ++ case HNS3_DEV_ID_100G_RDMA_MACSEC:
> ++ case HNS3_DEV_ID_200G_RDMA:
> ++ num = 1;
> ++ break;
> ++ default:
> ++ num = 0;
> ++ break;
> ++ }
> ++
> ++ return num;
> ++}
> ++
> ++static int
> ++hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
> ++ uint16_t device_id)
> ++{
> ++ switch (device_id) {
> ++ case HNS3_DEV_ID_25GE:
> ++ /* fallthrough */
> ++ case HNS3_DEV_ID_25GE_RDMA:
> ++ speed_fec_capa[0].speed = speed_fec_capa_tbl[1].speed;
> ++ speed_fec_capa[0].capa = speed_fec_capa_tbl[1].capa;
> ++
> ++ /* In HNS3 device, the 25G NIC is compatible with 10G rate */
> ++ speed_fec_capa[1].speed = speed_fec_capa_tbl[0].speed;
> ++ speed_fec_capa[1].capa = speed_fec_capa_tbl[0].capa;
> ++ break;
> ++ case HNS3_DEV_ID_100G_RDMA_MACSEC:
> ++ speed_fec_capa[0].speed = speed_fec_capa_tbl[4].speed;
> ++ speed_fec_capa[0].capa = speed_fec_capa_tbl[4].capa;
> ++ break;
> ++ case HNS3_DEV_ID_200G_RDMA:
> ++ speed_fec_capa[0].speed = speed_fec_capa_tbl[5].speed;
> ++ speed_fec_capa[0].capa = speed_fec_capa_tbl[5].capa;
> ++ break;
> ++ default:
> ++ return -ENOTSUP;
> ++ }
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_fec_get_capability(struct rte_eth_dev *dev,
> ++ struct rte_eth_fec_capa *speed_fec_capa,
> ++ unsigned int num)
> ++{
> ++ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> ++ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> ++ uint16_t device_id = pci_dev->id.device_id;
> ++ unsigned int capa_num;
> ++ int ret;
> ++
> ++ capa_num = hns3_get_speed_capa_num(device_id);
> ++ if (capa_num == 0) {
> ++ hns3_err(hw, "device(0x%x) is not supported by hns3 PMD",
> ++ device_id);
> ++ return -ENOTSUP;
> ++ }
> ++
> ++ if (speed_fec_capa == NULL || num < capa_num)
> ++ return capa_num;
> ++
> ++ ret = hns3_get_speed_fec_capa(speed_fec_capa, device_id);
> ++ if (ret)
> ++ return -ENOTSUP;
> ++
> ++ return capa_num;
> ++}
> ++
> ++static int
> ++get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
> ++{
> ++ struct hns3_config_fec_cmd *req;
> ++ struct hns3_cmd_desc desc;
> ++ int ret;
> ++
> ++ /*
> ++ * CMD(HNS3_OPC_CONFIG_FEC_MODE) read is not supported
> ++ * in device of link speed
> ++ * below 10 Gbps.
> ++ */
> ++ if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
> ++ *state = 0;
> ++ return 0;
> ++ }
> ++
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, true);
> ++ req = (struct hns3_config_fec_cmd *)desc.data;
> ++ ret = hns3_cmd_send(hw, &desc, 1);
> ++ if (ret) {
> ++ hns3_err(hw, "get current fec auto state failed, ret = %d",
> ++ ret);
> ++ return ret;
> ++ }
> ++
> ++ *state = req->fec_mode & (1U << HNS3_MAC_CFG_FEC_AUTO_EN_B);
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
> ++{
> ++ struct hns3_sfp_info_cmd *resp;
> ++ uint32_t tmp_fec_capa;
> ++ uint8_t auto_state;
> ++ struct hns3_cmd_desc desc;
> ++ int ret;
> ++
> ++ /*
> ++ * If link is down and AUTO is enabled, AUTO is returned, otherwise,
> ++ * configured FEC mode is returned.
> ++ * If link is up, current FEC mode is returned.
> ++ */
> ++ if (hw->mac.link_status == ETH_LINK_DOWN) {
> ++ ret = get_current_fec_auto_state(hw, &auto_state);
> ++ if (ret)
> ++ return ret;
> ++
> ++ if (auto_state == 0x1) {
> ++ *fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(AUTO);
> ++ return 0;
> ++ }
> ++ }
> ++
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_GET_SFP_INFO, true);
> ++ resp = (struct hns3_sfp_info_cmd *)desc.data;
> ++ resp->query_type = HNS3_ACTIVE_QUERY;
> ++
> ++ ret = hns3_cmd_send(hw, &desc, 1);
> ++ if (ret == -EOPNOTSUPP) {
> ++ hns3_err(hw, "IMP do not support get FEC, ret = %d", ret);
> ++ return ret;
> ++ } else if (ret) {
> ++ hns3_err(hw, "get FEC failed, ret = %d", ret);
> ++ return ret;
> ++ }
> ++
> ++ /*
> ++ * FEC mode order defined in hns3 hardware is inconsistend with
> ++ * that defined in the ethdev library. So the sequence needs
> ++ * to be converted.
> ++ */
> ++ switch (resp->active_fec) {
> ++ case HNS3_HW_FEC_MODE_NOFEC:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> ++ break;
> ++ case HNS3_HW_FEC_MODE_BASER:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
> ++ break;
> ++ case HNS3_HW_FEC_MODE_RS:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(RS);
> ++ break;
> ++ default:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> ++ break;
> ++ }
> ++
> ++ *fec_capa = tmp_fec_capa;
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
> ++{
> ++ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> ++
> ++ return hns3_fec_get_internal(hw, fec_capa);
> ++}
> ++
> ++static int
> ++hns3_set_fec_hw(struct hns3_hw *hw, uint32_t mode)
> ++{
> ++ struct hns3_config_fec_cmd *req;
> ++ struct hns3_cmd_desc desc;
> ++ int ret;
> ++
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, false);
> ++
> ++ req = (struct hns3_config_fec_cmd *)desc.data;
> ++ switch (mode) {
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC):
> ++ hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> ++ HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_OFF);
> ++ break;
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(BASER):
> ++ hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> ++ HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_BASER);
> ++ break;
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(RS):
> ++ hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> ++ HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_RS);
> ++ break;
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(AUTO):
> ++ hns3_set_bit(req->fec_mode, HNS3_MAC_CFG_FEC_AUTO_EN_B, 1);
> ++ break;
> ++ default:
> ++ return 0;
> ++ }
> ++ ret = hns3_cmd_send(hw, &desc, 1);
> ++ if (ret)
> ++ hns3_err(hw, "set fec mode failed, ret = %d", ret);
> ++
> ++ return ret;
> ++}
> ++
> ++static uint32_t
> ++get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
> ++{
> ++ struct hns3_mac *mac = &hw->mac;
> ++ uint32_t cur_capa;
> ++
> ++ switch (mac->link_speed) {
> ++ case ETH_SPEED_NUM_10G:
> ++ cur_capa = fec_capa[1].capa;
> ++ break;
> ++ case ETH_SPEED_NUM_25G:
> ++ case ETH_SPEED_NUM_100G:
> ++ case ETH_SPEED_NUM_200G:
> ++ cur_capa = fec_capa[0].capa;
> ++ break;
> ++ default:
> ++ cur_capa = 0;
> ++ break;
> ++ }
> ++
> ++ return cur_capa;
> ++}
> ++
> ++static bool
> ++is_fec_mode_one_bit_set(uint32_t mode)
> ++{
> ++ int cnt = 0;
> ++ uint8_t i;
> ++
> ++ for (i = 0; i < sizeof(mode); i++)
> ++ if (mode >> i & 0x1)
> ++ cnt++;
> ++
> ++ return cnt == 1 ? true : false;
> ++}
> ++
> ++static int
> ++hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
> ++{
> ++#define FEC_CAPA_NUM 2
> ++ struct hns3_adapter *hns = dev->data->dev_private;
> ++ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(hns);
> ++ struct hns3_pf *pf = &hns->pf;
> ++
> ++ struct rte_eth_fec_capa fec_capa[FEC_CAPA_NUM];
> ++ uint32_t cur_capa;
> ++ uint32_t num = FEC_CAPA_NUM;
> ++ int ret;
> ++
> ++ ret = hns3_fec_get_capability(dev, fec_capa, num);
> ++ if (ret < 0)
> ++ return ret;
> ++
> ++ /* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
> ++ if (!is_fec_mode_one_bit_set(mode)) {
> ++ hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
> ++ "FEC mode should be only one bit set", mode);
> ++ return -EINVAL;
> ++ }
> ++
> ++ /*
> ++ * Check whether the configured mode is within the FEC capability.
> ++ * If not, the configured mode will not be supported.
> ++ */
> ++ cur_capa = get_current_speed_fec_cap(hw, fec_capa);
> ++ if (!(cur_capa & mode)) {
> ++ hns3_err(hw, "unsupported FEC mode = 0x%x", mode);
> ++ return -EINVAL;
> ++ }
> ++
> ++ rte_spinlock_lock(&hw->lock);
> ++ ret = hns3_set_fec_hw(hw, mode);
> ++ if (ret) {
> ++ rte_spinlock_unlock(&hw->lock);
> ++ return ret;
> ++ }
> ++
> ++ pf->fec_mode = mode;
> ++ rte_spinlock_unlock(&hw->lock);
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_restore_fec(struct hns3_hw *hw)
> ++{
> ++ struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
> ++ struct hns3_pf *pf = &hns->pf;
> ++ uint32_t mode = pf->fec_mode;
> ++ int ret;
> ++=======
> ++ else if (hns3_atomic_test_bit(HNS3_FUNC_RESET, levels))
> ++ reset_level = HNS3_FUNC_RESET;
> ++ else if (hns3_atomic_test_bit(HNS3_FLR_RESET, levels))
> ++ reset_level = HNS3_FLR_RESET;
> ++
> ++ if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
> ++ return HNS3_NONE_RESET;
> ++
> ++ return reset_level;
> ++}
> ++
> ++static void
> ++hns3_record_imp_error(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ uint32_t reg_val;
> ++
> ++ reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> ++ if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B)) {
> ++ hns3_warn(hw, "Detected IMP RD poison!");
> ++ hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_RD_POISON_B, 0);
> ++ hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> ++ }
> ++
> ++ if (hns3_get_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B)) {
> ++ hns3_warn(hw, "Detected IMP CMDQ error!");
> ++ hns3_set_bit(reg_val, HNS3_VECTOR0_IMP_CMDQ_ERR_B, 0);
> ++ hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val);
> ++ }
> ++}
> ++
> ++static int
> ++hns3_prepare_reset(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ uint32_t reg_val;
> ++ int ret;
> ++
> ++ switch (hw->reset.level) {
> ++ case HNS3_FUNC_RESET:
> ++ ret = hns3_func_reset_cmd(hw, HNS3_PF_FUNC_ID);
> ++ if (ret)
> ++ return ret;
> ++
> ++ /*
> ++ * After performaning pf reset, it is not necessary to do the
> ++ * mailbox handling or send any command to firmware, because
> ++ * any mailbox handling or command to firmware is only valid
> ++ * after hns3_cmd_init is called.
> ++ */
> ++ __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
> ++ hw->reset.stats.request_cnt++;
> ++ break;
> ++ case HNS3_IMP_RESET:
> ++ hns3_record_imp_error(hns);
> ++ reg_val = hns3_read_dev(hw, HNS3_VECTOR0_OTER_EN_REG);
> ++ hns3_write_dev(hw, HNS3_VECTOR0_OTER_EN_REG, reg_val |
> ++ BIT(HNS3_VECTOR0_IMP_RESET_INT_B));
> ++ break;
> ++ default:
> ++ break;
> ++ }
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_set_rst_done(struct hns3_hw *hw)
> ++{
> ++ struct hns3_pf_rst_done_cmd *req;
> ++ struct hns3_cmd_desc desc;
> ++
> ++ req = (struct hns3_pf_rst_done_cmd *)desc.data;
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_PF_RST_DONE, false);
> ++ req->pf_rst_done |= HNS3_PF_RESET_DONE_BIT;
> ++ return hns3_cmd_send(hw, &desc, 1);
> ++}
> ++
> ++static int
> ++hns3_stop_service(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ struct rte_eth_dev *eth_dev;
> ++
> ++ eth_dev = &rte_eth_devices[hw->data->port_id];
> ++ hw->mac.link_status = ETH_LINK_DOWN;
> ++ if (hw->adapter_state == HNS3_NIC_STARTED) {
> ++ rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
> ++ hns3_update_linkstatus_and_event(hw, false);
> ++ }
> ++
> ++ hns3_set_rxtx_function(eth_dev);
> ++ rte_wmb();
> ++ /* Disable datapath on secondary process. */
> ++ hns3_mp_req_stop_rxtx(eth_dev);
> ++ rte_delay_ms(hw->cfg_max_queues);
> ++
> ++ rte_spinlock_lock(&hw->lock);
> ++ if (hns->hw.adapter_state == HNS3_NIC_STARTED ||
> ++ hw->adapter_state == HNS3_NIC_STOPPING) {
> ++ hns3_enable_all_queues(hw, false);
> ++ hns3_do_stop(hns);
> ++ hw->reset.mbuf_deferred_free = true;
> ++ } else
> ++ hw->reset.mbuf_deferred_free = false;
> ++
> ++ /*
> ++ * It is cumbersome for hardware to pick-and-choose entries for deletion
> ++ * from table space. Hence, for function reset software intervention is
> ++ * required to delete the entries
> ++ */
> ++ if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0)
> ++ hns3_configure_all_mc_mac_addr(hns, true);
> ++ rte_spinlock_unlock(&hw->lock);
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_start_service(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ struct rte_eth_dev *eth_dev;
> ++
> ++ if (hw->reset.level == HNS3_IMP_RESET ||
> ++ hw->reset.level == HNS3_GLOBAL_RESET)
> ++ hns3_set_rst_done(hw);
> ++ eth_dev = &rte_eth_devices[hw->data->port_id];
> ++ hns3_set_rxtx_function(eth_dev);
> ++ hns3_mp_req_start_rxtx(eth_dev);
> ++ if (hw->adapter_state == HNS3_NIC_STARTED) {
> ++ /*
> ++ * This API parent function already hold the hns3_hw.lock, the
> ++ * hns3_service_handler may report lse, in bonding application
> ++ * it will call driver's ops which may acquire the hns3_hw.lock
> ++ * again, thus lead to deadlock.
> ++ * We defer calls hns3_service_handler to avoid the deadlock.
> ++ */
> ++ rte_eal_alarm_set(HNS3_SERVICE_QUICK_INTERVAL,
> ++ hns3_service_handler, eth_dev);
> ++
> ++ /* Enable interrupt of all rx queues before enabling queues */
> ++ hns3_dev_all_rx_queue_intr_enable(hw, true);
> ++ /*
> ++ * Enable state of each rxq and txq will be recovered after
> ++ * reset, so we need to restore them before enable all tqps;
> ++ */
> ++ hns3_restore_tqp_enable_state(hw);
> ++ /*
> ++ * When finished the initialization, enable queues to receive
> ++ * and transmit packets.
> ++ */
> ++ hns3_enable_all_queues(hw, true);
> ++ }
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_restore_conf(struct hns3_adapter *hns)
> ++{
> ++ struct hns3_hw *hw = &hns->hw;
> ++ int ret;
> ++
> ++ ret = hns3_configure_all_mac_addr(hns, false);
> ++ if (ret)
> ++ return ret;
> ++
> ++ ret = hns3_configure_all_mc_mac_addr(hns, false);
> ++ if (ret)
> ++ goto err_mc_mac;
> ++
> ++ ret = hns3_dev_promisc_restore(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_vlan_table(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_vlan_conf(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_all_fdir_filter(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_ptp(hns);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_rx_interrupt(hw);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_gro_conf(hw);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ ret = hns3_restore_fec(hw);
> ++ if (ret)
> ++ goto err_promisc;
> ++
> ++ if (hns->hw.adapter_state == HNS3_NIC_STARTED) {
> ++ ret = hns3_do_start(hns, false);
> ++ if (ret)
> ++ goto err_promisc;
> ++ hns3_info(hw, "hns3 dev restart successful!");
> ++ } else if (hw->adapter_state == HNS3_NIC_STOPPING)
> ++ hw->adapter_state = HNS3_NIC_CONFIGURED;
> ++ return 0;
> ++
> ++err_promisc:
> ++ hns3_configure_all_mc_mac_addr(hns, true);
> ++err_mc_mac:
> ++ hns3_configure_all_mac_addr(hns, true);
> ++ return ret;
> ++}
> ++
> ++static void
> ++hns3_reset_service(void *param)
> ++{
> ++ struct hns3_adapter *hns = (struct hns3_adapter *)param;
> ++ struct hns3_hw *hw = &hns->hw;
> ++ enum hns3_reset_level reset_level;
> ++ struct timeval tv_delta;
> ++ struct timeval tv_start;
> ++ struct timeval tv;
> ++ uint64_t msec;
> ++ int ret;
> ++
> ++ /*
> ++ * The interrupt is not triggered within the delay time.
> ++ * The interrupt may have been lost. It is necessary to handle
> ++ * the interrupt to recover from the error.
> ++ */
> ++ if (__atomic_load_n(&hw->reset.schedule, __ATOMIC_RELAXED) ==
> ++ SCHEDULE_DEFERRED) {
> ++ __atomic_store_n(&hw->reset.schedule, SCHEDULE_REQUESTED,
> ++ __ATOMIC_RELAXED);
> ++ hns3_err(hw, "Handling interrupts in delayed tasks");
> ++ hns3_interrupt_handler(&rte_eth_devices[hw->data->port_id]);
> ++ reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> ++ if (reset_level == HNS3_NONE_RESET) {
> ++ hns3_err(hw, "No reset level is set, try IMP reset");
> ++ hns3_atomic_set_bit(HNS3_IMP_RESET, &hw->reset.pending);
> ++ }
> ++ }
> ++ __atomic_store_n(&hw->reset.schedule, SCHEDULE_NONE, __ATOMIC_RELAXED);
> ++
> ++ /*
> ++ * Check if there is any ongoing reset in the hardware. This status can
> ++ * be checked from reset_pending. If there is then, we need to wait for
> ++ * hardware to complete reset.
> ++ * a. If we are able to figure out in reasonable time that hardware
> ++ * has fully resetted then, we can proceed with driver, client
> ++ * reset.
> ++ * b. else, we can come back later to check this status so re-sched
> ++ * now.
> ++ */
> ++ reset_level = hns3_get_reset_level(hns, &hw->reset.pending);
> ++ if (reset_level != HNS3_NONE_RESET) {
> ++ hns3_clock_gettime(&tv_start);
> ++ ret = hns3_reset_process(hns, reset_level);
> ++ hns3_clock_gettime(&tv);
> ++ timersub(&tv, &tv_start, &tv_delta);
> ++ msec = hns3_clock_calctime_ms(&tv_delta);
> ++ if (msec > HNS3_RESET_PROCESS_MS)
> ++ hns3_err(hw, "%d handle long time delta %" PRIu64
> ++ " ms time=%ld.%.6ld",
> ++ hw->reset.level, msec,
> ++ tv.tv_sec, tv.tv_usec);
> ++ if (ret == -EAGAIN)
> ++ return;
> ++ }
> ++
> ++ /* Check if we got any *new* reset requests to be honored */
> ++ reset_level = hns3_get_reset_level(hns, &hw->reset.request);
> ++ if (reset_level != HNS3_NONE_RESET)
> ++ hns3_msix_process(hns, reset_level);
> ++}
> ++
> ++static unsigned int
> ++hns3_get_speed_capa_num(uint16_t device_id)
> ++{
> ++ unsigned int num;
> ++
> ++ switch (device_id) {
> ++ case HNS3_DEV_ID_25GE:
> ++ case HNS3_DEV_ID_25GE_RDMA:
> ++ num = 2;
> ++ break;
> ++ case HNS3_DEV_ID_100G_RDMA_MACSEC:
> ++ case HNS3_DEV_ID_200G_RDMA:
> ++ num = 1;
> ++ break;
> ++ default:
> ++ num = 0;
> ++ break;
> ++ }
> ++
> ++ return num;
> ++}
> ++
> ++static int
> ++hns3_get_speed_fec_capa(struct rte_eth_fec_capa *speed_fec_capa,
> ++ uint16_t device_id)
> ++{
> ++ switch (device_id) {
> ++ case HNS3_DEV_ID_25GE:
> ++ /* fallthrough */
> ++ case HNS3_DEV_ID_25GE_RDMA:
> ++ speed_fec_capa[0].speed = speed_fec_capa_tbl[1].speed;
> ++ speed_fec_capa[0].capa = speed_fec_capa_tbl[1].capa;
> ++
> ++ /* In HNS3 device, the 25G NIC is compatible with 10G rate */
> ++ speed_fec_capa[1].speed = speed_fec_capa_tbl[0].speed;
> ++ speed_fec_capa[1].capa = speed_fec_capa_tbl[0].capa;
> ++ break;
> ++ case HNS3_DEV_ID_100G_RDMA_MACSEC:
> ++ speed_fec_capa[0].speed = speed_fec_capa_tbl[4].speed;
> ++ speed_fec_capa[0].capa = speed_fec_capa_tbl[4].capa;
> ++ break;
> ++ case HNS3_DEV_ID_200G_RDMA:
> ++ speed_fec_capa[0].speed = speed_fec_capa_tbl[5].speed;
> ++ speed_fec_capa[0].capa = speed_fec_capa_tbl[5].capa;
> ++ break;
> ++ default:
> ++ return -ENOTSUP;
> ++ }
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_fec_get_capability(struct rte_eth_dev *dev,
> ++ struct rte_eth_fec_capa *speed_fec_capa,
> ++ unsigned int num)
> ++{
> ++ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> ++ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> ++ uint16_t device_id = pci_dev->id.device_id;
> ++ unsigned int capa_num;
> ++ int ret;
> ++
> ++ capa_num = hns3_get_speed_capa_num(device_id);
> ++ if (capa_num == 0) {
> ++ hns3_err(hw, "device(0x%x) is not supported by hns3 PMD",
> ++ device_id);
> ++ return -ENOTSUP;
> ++ }
> ++
> ++ if (speed_fec_capa == NULL || num < capa_num)
> ++ return capa_num;
> ++
> ++ ret = hns3_get_speed_fec_capa(speed_fec_capa, device_id);
> ++ if (ret)
> ++ return -ENOTSUP;
> ++
> ++ return capa_num;
> ++}
> ++
> ++static int
> ++get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
> ++{
> ++ struct hns3_config_fec_cmd *req;
> ++ struct hns3_cmd_desc desc;
> ++ int ret;
> ++
> ++ /*
> ++ * CMD(HNS3_OPC_CONFIG_FEC_MODE) read is not supported
> ++ * in device of link speed
> ++ * below 10 Gbps.
> ++ */
> ++ if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
> ++ *state = 0;
> ++ return 0;
> ++ }
> ++
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, true);
> ++ req = (struct hns3_config_fec_cmd *)desc.data;
> ++ ret = hns3_cmd_send(hw, &desc, 1);
> ++ if (ret) {
> ++ hns3_err(hw, "get current fec auto state failed, ret = %d",
> ++ ret);
> ++ return ret;
> ++ }
> ++
> ++ *state = req->fec_mode & (1U << HNS3_MAC_CFG_FEC_AUTO_EN_B);
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
> ++{
> ++ struct hns3_sfp_info_cmd *resp;
> ++ uint32_t tmp_fec_capa;
> ++ uint8_t auto_state;
> ++ struct hns3_cmd_desc desc;
> ++ int ret;
> ++
> ++ /*
> ++ * If link is down and AUTO is enabled, AUTO is returned, otherwise,
> ++ * configured FEC mode is returned.
> ++ * If link is up, current FEC mode is returned.
> ++ */
> ++ if (hw->mac.link_status == ETH_LINK_DOWN) {
> ++ ret = get_current_fec_auto_state(hw, &auto_state);
> ++ if (ret)
> ++ return ret;
> ++
> ++ if (auto_state == 0x1) {
> ++ *fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(AUTO);
> ++ return 0;
> ++ }
> ++ }
> ++
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_GET_SFP_INFO, true);
> ++ resp = (struct hns3_sfp_info_cmd *)desc.data;
> ++ resp->query_type = HNS3_ACTIVE_QUERY;
> ++
> ++ ret = hns3_cmd_send(hw, &desc, 1);
> ++ if (ret == -EOPNOTSUPP) {
> ++ hns3_err(hw, "IMP do not support get FEC, ret = %d", ret);
> ++ return ret;
> ++ } else if (ret) {
> ++ hns3_err(hw, "get FEC failed, ret = %d", ret);
> ++ return ret;
> ++ }
> ++
> ++ /*
> ++ * FEC mode order defined in hns3 hardware is inconsistend with
> ++ * that defined in the ethdev library. So the sequence needs
> ++ * to be converted.
> ++ */
> ++ switch (resp->active_fec) {
> ++ case HNS3_HW_FEC_MODE_NOFEC:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> ++ break;
> ++ case HNS3_HW_FEC_MODE_BASER:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
> ++ break;
> ++ case HNS3_HW_FEC_MODE_RS:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(RS);
> ++ break;
> ++ default:
> ++ tmp_fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC);
> ++ break;
> ++ }
> ++
> ++ *fec_capa = tmp_fec_capa;
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
> ++{
> ++ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> ++
> ++ return hns3_fec_get_internal(hw, fec_capa);
> ++}
> ++
> ++static int
> ++hns3_set_fec_hw(struct hns3_hw *hw, uint32_t mode)
> ++{
> ++ struct hns3_config_fec_cmd *req;
> ++ struct hns3_cmd_desc desc;
> ++ int ret;
> ++
> ++ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_CONFIG_FEC_MODE, false);
> ++
> ++ req = (struct hns3_config_fec_cmd *)desc.data;
> ++ switch (mode) {
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC):
> ++ hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> ++ HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_OFF);
> ++ break;
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(BASER):
> ++ hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> ++ HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_BASER);
> ++ break;
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(RS):
> ++ hns3_set_field(req->fec_mode, HNS3_MAC_CFG_FEC_MODE_M,
> ++ HNS3_MAC_CFG_FEC_MODE_S, HNS3_MAC_FEC_RS);
> ++ break;
> ++ case RTE_ETH_FEC_MODE_CAPA_MASK(AUTO):
> ++ hns3_set_bit(req->fec_mode, HNS3_MAC_CFG_FEC_AUTO_EN_B, 1);
> ++ break;
> ++ default:
> ++ return 0;
> ++ }
> ++ ret = hns3_cmd_send(hw, &desc, 1);
> ++ if (ret)
> ++ hns3_err(hw, "set fec mode failed, ret = %d", ret);
> ++
> ++ return ret;
> ++}
> ++
> ++static uint32_t
> ++get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
> ++{
> ++ struct hns3_mac *mac = &hw->mac;
> ++ uint32_t cur_capa;
> ++
> ++ switch (mac->link_speed) {
> ++ case ETH_SPEED_NUM_10G:
> ++ cur_capa = fec_capa[1].capa;
> ++ break;
> ++ case ETH_SPEED_NUM_25G:
> ++ case ETH_SPEED_NUM_100G:
> ++ case ETH_SPEED_NUM_200G:
> ++ cur_capa = fec_capa[0].capa;
> ++ break;
> ++ default:
> ++ cur_capa = 0;
> ++ break;
> ++ }
> ++
> ++ return cur_capa;
> ++}
> ++
> ++static bool
> ++is_fec_mode_one_bit_set(uint32_t mode)
> ++{
> ++ int cnt = 0;
> ++ uint8_t i;
> ++
> ++ for (i = 0; i < sizeof(mode); i++)
> ++ if (mode >> i & 0x1)
> ++ cnt++;
> ++
> ++ return cnt == 1 ? true : false;
> ++}
> ++
> ++static int
> ++hns3_fec_set(struct rte_eth_dev *dev, uint32_t mode)
> ++{
> ++#define FEC_CAPA_NUM 2
> ++ struct hns3_adapter *hns = dev->data->dev_private;
> ++ struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(hns);
> ++ struct hns3_pf *pf = &hns->pf;
> ++
> ++ struct rte_eth_fec_capa fec_capa[FEC_CAPA_NUM];
> ++ uint32_t cur_capa;
> ++ uint32_t num = FEC_CAPA_NUM;
> ++ int ret;
> ++
> ++ ret = hns3_fec_get_capability(dev, fec_capa, num);
> ++ if (ret < 0)
> ++ return ret;
> ++
> ++ /* HNS3 PMD driver only support one bit set mode, e.g. 0x1, 0x4 */
> ++ if (!is_fec_mode_one_bit_set(mode)) {
> ++ hns3_err(hw, "FEC mode(0x%x) not supported in HNS3 PMD, "
> ++ "FEC mode should be only one bit set", mode);
> ++ return -EINVAL;
> ++ }
> ++
> ++ /*
> ++ * Check whether the configured mode is within the FEC capability.
> ++ * If not, the configured mode will not be supported.
> ++ */
> ++ cur_capa = get_current_speed_fec_cap(hw, fec_capa);
> ++ if (!(cur_capa & mode)) {
> ++ hns3_err(hw, "unsupported FEC mode = 0x%x", mode);
> ++ return -EINVAL;
> ++ }
> ++
> ++ rte_spinlock_lock(&hw->lock);
> ++ ret = hns3_set_fec_hw(hw, mode);
> ++ if (ret) {
> ++ rte_spinlock_unlock(&hw->lock);
> ++ return ret;
> ++ }
> ++
> ++ pf->fec_mode = mode;
> ++ rte_spinlock_unlock(&hw->lock);
> ++
> ++ return 0;
> ++}
> ++
> ++static int
> ++hns3_restore_fec(struct hns3_hw *hw)
> ++{
> ++ struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
> ++ struct hns3_pf *pf = &hns->pf;
> ++ uint32_t mode = pf->fec_mode;
> ++ int ret;
> ++>>>>>>> net/hns3: fix delay for waiting to stop Rx/Tx
> +
> + if (hw->reset.level != HNS3_NONE_RESET && reset_level < hw->reset.level)
> + return HNS3_NONE_RESET;
> +@@ -5201,7 +6381,7 @@ hns3_stop_service(struct hns3_adapter *hns)
> @@ -46 +1238 @@
> -index 41dd8ee129..7a5c162964 100644
> +index a7b6188eea..eb3edf3464 100644
> @@ -49 +1241 @@
> -@@ -2107,7 +2107,7 @@ hns3vf_dev_stop(struct rte_eth_dev *dev)
> +@@ -1631,7 +1631,7 @@ hns3vf_dev_stop(struct rte_eth_dev *dev)
> @@ -57,2 +1249,2 @@
> - if (__atomic_load_n(&hw->reset.resetting, __ATOMIC_RELAXED) == 0) {
> -@@ -2558,7 +2558,7 @@ hns3vf_stop_service(struct hns3_adapter *hns)
> + if (rte_atomic16_read(&hw->reset.resetting) == 0) {
> +@@ -2005,7 +2005,7 @@ hns3vf_stop_service(struct hns3_adapter *hns)
--
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd
next prev parent reply other threads:[~2021-08-11 9:03 UTC|newest]
Thread overview: 114+ messages / expand[flat|nested] mbox.gz Atom feed top
2021-08-10 15:38 [dpdk-stable] patch 'bitmap: fix buffer overrun in bitmap init' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: check access to possible null pointer' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: fix error messages in VNIC prepare' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: set flow error when free filter not available' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: remove unnecessary code' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: fix error handling in VNIC prepare' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: set flow error after tunnel redirection free' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: use common function to free VNIC resource' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: fix check for PTP support in FW' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/bnxt: improve probing log message' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'net/iavf: fix RSS key access out of bound' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'doc: fix default burst size in testpmd' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'devtools: fix file listing in maintainers check' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'vhost/crypto: check request pointer before dereference' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'kni: fix mbuf allocation for kernel side use' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'kni: fix crash on userspace VA for segmented packets' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'flow_classify: fix leaking rules on delete' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'rib: fix max depth IPv6 lookup' " christian.ehrhardt
2021-08-10 15:38 ` [dpdk-stable] patch 'tests/eal: fix memory leak' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'common/mlx5: fix Netlink port name padding in probing' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'app/test: fix IPv6 header initialization' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'test/mbuf: fix virtual address conversion' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/octeontx2: fix flow creation limit on CN98xx' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/octeontx2: use runtime LSO format indices' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/ice/base: fix first profile mask' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'vhost: fix missing memory table NUMA realloc' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'vhost: fix missing guest pages " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'mempool/octeontx2: fix shift calculation' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'drivers/net: fix memzone allocations for DMA memory' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/hns3: increase VF reset retry maximum' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/hns3: fix delay for waiting to stop Rx/Tx' " christian.ehrhardt
2021-08-11 9:02 ` Christian Ehrhardt [this message]
2021-08-10 15:39 ` [dpdk-stable] patch 'net/hns3: fix VLAN strip log' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/pfe: remove unnecessary null check' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'ethdev: fix doc of flow action' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'app/testpmd: change port link speed without stopping all' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'crypto/qat: fix Arm build with special memcpy' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'app/crypto-perf: fix out-of-place mempool allocation' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'test/crypto: fix mbuf reset after null check' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'test/crypto: fix typo in AES case' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'test/crypto: fix typo in ESN " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'crypto/mvsam: fix AES-GCM session parameters' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'crypto/mvsam: fix capabilities' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'crypto/mvsam: fix session data reset' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'crypto/mvsam: fix options parsing' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'ipc: stop mp control thread on cleanup' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'power: fix namespace for internal struct' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: cleanup code' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: fix typo in log message' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: fix Tx descriptor status implementation' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: fix scalar Tx completion handling' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: fix Rx interrupt setting' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'doc: add limitation for ConnectX-4 with L2 in mlx5 guide' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mlx5: fix match MPLS over GRE with key' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'common/mlx5: fix Netlink receive message buffer size' " christian.ehrhardt
2021-08-11 9:26 ` Christian Ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mlx5: remove unsupported flow item MPLS over IP' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/ice: fix memzone leak when firmware is missing' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/i40e: fix descriptor scan on Arm' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/ixgbe: fix flow entry access after freeing' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/octeontx/base: fix debug build with clang' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'test/power: fix CPU frequency when turbo enabled' " christian.ehrhardt
2021-08-11 9:59 ` Christian Ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/virtio: fix aarch32 build' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bonding: fix error message on flow verify' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bonding: check flow setting' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/softnic: fix connection memory leak' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mlx5: remove redundant operations in NEON Rx' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mlx5: fix typo in vectorized Rx comments' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mvpp2: fix port speed overflow' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mvpp2: fix configured state dependency' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: fix nested lock during bonding' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: clear cached statistics' " christian.ehrhardt
2021-08-11 8:54 ` Christian Ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/ice/base: revert change of first profile mask' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'distributor: fix 128-bit write alignment' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'test/power: fix CPU frequency check for intel_pstate' " christian.ehrhardt
2021-08-11 9:51 ` Christian Ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'test/crypto: fix mempool size for session-less' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mlx5: fix overflow in mempool argument' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mlx5: fix Rx/Tx queue checks' " christian.ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/mlx5: reject inner ethernet matching in GTP' " christian.ehrhardt
2021-08-11 9:17 ` Christian Ehrhardt
2021-08-11 10:25 ` Lior Margalit
2021-08-11 11:23 ` Christian Ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/bnxt: fix null dereference in interrupt handler' " christian.ehrhardt
2021-08-11 8:50 ` Christian Ehrhardt
2021-08-10 15:39 ` [dpdk-stable] patch 'net/softnic: fix memory leak in arguments parsing' " christian.ehrhardt
2021-08-11 10:19 ` Christian Ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/hns3: fix filter parsing comment' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/hns3: fix Tx prepare after stop' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/hinic: increase protection of the VLAN' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/hinic/base: fix LRO' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'bus/dpaa: fix freeing in FMAN interface destructor' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/sfc: fix MAC stats lock in xstats query by ID' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/sfc: fix reading adapter state without locking' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/sfc: fix xstats query by ID according to ethdev' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/sfc: fix xstats query by unsorted list of IDs' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/sfc: fix MAC stats update for stopped device' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'app/testpmd: fix help string for port reset' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'app/testpmd: fix MAC address after " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/virtio: report maximum MTU in device info' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'vhost: fix crash on reconnect' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/virtio: fix interrupt handle leak' " christian.ehrhardt
2021-08-11 6:52 ` Christian Ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/iavf: fix Tx threshold check' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/softnic: fix null dereference in arguments parsing' " christian.ehrhardt
2021-08-11 10:21 ` Christian Ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'net/softnic: fix memory leak as profile is freed' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'app/testpmd: fix Tx checksum calculation for tunnel' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'cryptodev: fix freeing after device release' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'crypto/octeontx: " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'doc: announce common prefix for ethdev' " christian.ehrhardt
2021-08-10 15:40 ` [dpdk-stable] patch 'app/testpmd: fix IPv4 checksum' " christian.ehrhardt
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to='CAATJJ0L+2ekLS6UaBRmqCb=sHdN-xQ7Ug=LwKoL9wRop6-L8EA@mail.gmail.com' \
--to=christian.ehrhardt@canonical.com \
--cc=humin29@huawei.com \
--cc=lihuisong@huawei.com \
--cc=stable@dpdk.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).