From mboxrd@z Thu Jan 1 00:00:00 1970
From: Junfeng Guo <junfeng.guo@intel.com>
To: andrew.rybchenko@oktetlabs.ru, qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Junfeng Guo, Xiaoyun Li
Subject: [PATCH v14 05/18] net/idpf: add support for device start and stop
Date: Thu, 27 Oct 2022 15:47:16 +0800
Message-Id: <20221027074729.1494529-6-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221027074729.1494529-1-junfeng.guo@intel.com>
References: <20221027054505.1369248-2-junfeng.guo@intel.com> <20221027074729.1494529-1-junfeng.guo@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Add the dev_start, dev_stop and link_update dev ops.

Signed-off-by: Beilei Xing
Signed-off-by: Xiaoyun Li
Signed-off-by: Junfeng Guo
---
 drivers/net/idpf/idpf_ethdev.c | 55 ++++++++++++++++++++++++++++++++++
 drivers/net/idpf/idpf_rxtx.c   | 20 +++++++++++++
 2 files changed, 75 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 0585153f69..3430d00e92 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -30,17 +30,38 @@ static const char * const idpf_valid_args[] = {
 };
 
 static int idpf_dev_configure(struct rte_eth_dev *dev);
+static int idpf_dev_start(struct rte_eth_dev *dev);
+static int idpf_dev_stop(struct rte_eth_dev *dev);
 static int idpf_dev_close(struct rte_eth_dev *dev);
 static int idpf_dev_info_get(struct rte_eth_dev *dev,
 			     struct rte_eth_dev_info *dev_info);
 static void idpf_adapter_rel(struct idpf_adapter *adapter);
 
+static int
+idpf_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  RTE_ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
 static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_configure		= idpf_dev_configure,
+	.dev_start		= idpf_dev_start,
+	.dev_stop		= idpf_dev_stop,
 	.dev_close		= idpf_dev_close,
 	.rx_queue_setup		= idpf_rx_queue_setup,
 	.tx_queue_setup		= idpf_tx_queue_setup,
 	.dev_infos_get		= idpf_dev_info_get,
+	.link_update		= idpf_dev_link_update,
 };
 
 static int
@@ -284,6 +305,40 @@ idpf_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+idpf_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (dev->data->mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -1;
+	}
+
+	vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	/* TODO: start queues */
+
+	if (idpf_vc_ena_dis_vport(vport, true) != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+idpf_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	idpf_vc_ena_dis_vport(vport, false);
+
+	/* TODO: stop queues */
+
+	return 0;
+}
+
 static int
 idpf_dev_close(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 25dd5d85d5..3528d2f2c7 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -334,6 +334,11 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+		return -EINVAL;
+	}
+
 	/* Setup Rx description queue */
 	rxq = rte_zmalloc_socket("idpf rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -465,6 +470,11 @@ idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	if (rx_conf->rx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+		return -EINVAL;
+	}
+
 	/* Setup Rx description queue */
 	rxq = rte_zmalloc_socket("idpf rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -569,6 +579,11 @@ idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+		return -EINVAL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("idpf split txq",
 				 sizeof(struct idpf_tx_queue),
@@ -691,6 +706,11 @@ idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	if (tx_conf->tx_deferred_start) {
+		PMD_INIT_LOG(ERR, "Queue start is not supported currently.");
+		return -EINVAL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("idpf txq",
 				 sizeof(struct idpf_tx_queue),
-- 
2.34.1