Date: Fri, 28 Oct 2022 18:35:13 +0300
Subject: Re: [PATCH v14 02/18] net/idpf: add support for device initialization
To: Junfeng Guo, qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Xiaoyun Li, Xiao Wang, Wenjun Wu
References: <20221027054505.1369248-2-junfeng.guo@intel.com>
 <20221027074729.1494529-1-junfeng.guo@intel.com>
 <20221027074729.1494529-3-junfeng.guo@intel.com>
From: Andrew Rybchenko
In-Reply-To: <20221027074729.1494529-3-junfeng.guo@intel.com>

On 10/27/22 10:47, Junfeng Guo wrote:
> Support device init and add the following dev ops:
>  - dev_configure
>  - dev_close
>  - dev_infos_get
>
> Signed-off-by: Beilei Xing
> Signed-off-by: Xiaoyun Li
> Signed-off-by: Xiao Wang
> Signed-off-by: Wenjun Wu
> Signed-off-by: Junfeng Guo

[snip]

> +static int idpf_dev_configure(struct rte_eth_dev *dev);
> +static int idpf_dev_close(struct rte_eth_dev *dev);
> +static int idpf_dev_info_get(struct rte_eth_dev *dev,
> +			     struct rte_eth_dev_info *dev_info);
> +static void idpf_adapter_rel(struct idpf_adapter *adapter);
> +
> +static const struct eth_dev_ops idpf_eth_dev_ops = {
> +	.dev_configure		= idpf_dev_configure,
> +	.dev_close		= idpf_dev_close,
> +	.dev_infos_get		= idpf_dev_info_get,
> +};

Typically it is better to avoid forward static declarations and simply
define the ops structure after the callbacks.
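I.e. something along these lines (only a structural sketch; the stub bodies
just illustrate the ordering, the real logic stays as in your patch):

	static int
	idpf_dev_configure(struct rte_eth_dev *dev)
	{
		/* configure logic from the patch goes here */
		return 0;
	}

	static int
	idpf_dev_close(struct rte_eth_dev *dev)
	{
		/* close logic from the patch goes here */
		return 0;
	}

	static int
	idpf_dev_info_get(struct rte_eth_dev *dev,
			  struct rte_eth_dev_info *dev_info)
	{
		/* info_get logic from the patch goes here */
		return 0;
	}

	/* Defined after the callbacks, so no forward declarations are needed. */
	static const struct eth_dev_ops idpf_eth_dev_ops = {
		.dev_configure	= idpf_dev_configure,
		.dev_close	= idpf_dev_close,
		.dev_infos_get	= idpf_dev_info_get,
	};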
> +
> +static int
> +idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +
> +	dev_info->max_rx_queues = adapter->caps->max_rx_q;
> +	dev_info->max_tx_queues = adapter->caps->max_tx_q;
> +	dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
> +	dev_info->max_rx_pktlen = IDPF_MAX_FRAME_SIZE;
> +
> +	dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
> +	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> +
> +	dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;

I guess it makes sense if and only if you support the API to add/remove
unicast MAC addresses.

> +
> +	return 0;
> +

[snip]

> +static int
> +idpf_init_vport(struct rte_eth_dev *dev)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +	uint16_t idx = adapter->cur_vport_idx;
> +	struct virtchnl2_create_vport *vport_info =
> +		(struct virtchnl2_create_vport *)adapter->vport_recv_info[idx];
> +	int i, type, ret;
> +
> +	vport->vport_id = vport_info->vport_id;
> +	vport->txq_model = vport_info->txq_model;
> +	vport->rxq_model = vport_info->rxq_model;
> +	vport->num_tx_q = vport_info->num_tx_q;
> +	vport->num_tx_complq = vport_info->num_tx_complq;
> +	vport->num_rx_q = vport_info->num_rx_q;
> +	vport->num_rx_bufq = vport_info->num_rx_bufq;
> +	vport->max_mtu = vport_info->max_mtu;
> +	rte_memcpy(vport->default_mac_addr,
> +		   vport_info->default_mac_addr, ETH_ALEN);
> +	vport->sw_idx = idx;
> +
> +	for (i = 0; i < vport_info->chunks.num_chunks; i++) {
> +		type = vport_info->chunks.chunks[i].type;
> +		switch (type) {
> +		case VIRTCHNL2_QUEUE_TYPE_TX:
> +			vport->chunks_info.tx_start_qid =
> +				vport_info->chunks.chunks[i].start_queue_id;
> +			vport->chunks_info.tx_qtail_start =
> +				vport_info->chunks.chunks[i].qtail_reg_start;
> +			vport->chunks_info.tx_qtail_spacing =
> +				vport_info->chunks.chunks[i].qtail_reg_spacing;
> +			break;
> +		case VIRTCHNL2_QUEUE_TYPE_RX:
> +			vport->chunks_info.rx_start_qid =
> +				vport_info->chunks.chunks[i].start_queue_id;
> +			vport->chunks_info.rx_qtail_start =
> +				vport_info->chunks.chunks[i].qtail_reg_start;
> +			vport->chunks_info.rx_qtail_spacing =
> +				vport_info->chunks.chunks[i].qtail_reg_spacing;
> +			break;
> +		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
> +			vport->chunks_info.tx_compl_start_qid =
> +				vport_info->chunks.chunks[i].start_queue_id;
> +			vport->chunks_info.tx_compl_qtail_start =
> +				vport_info->chunks.chunks[i].qtail_reg_start;
> +			vport->chunks_info.tx_compl_qtail_spacing =
> +				vport_info->chunks.chunks[i].qtail_reg_spacing;
> +			break;
> +		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
> +			vport->chunks_info.rx_buf_start_qid =
> +				vport_info->chunks.chunks[i].start_queue_id;
> +			vport->chunks_info.rx_buf_qtail_start =
> +				vport_info->chunks.chunks[i].qtail_reg_start;
> +			vport->chunks_info.rx_buf_qtail_spacing =
> +				vport_info->chunks.chunks[i].qtail_reg_spacing;
> +			break;
> +		default:
> +			PMD_INIT_LOG(ERR, "Unsupported queue type");
> +			break;
> +		}
> +	}
> +
> +	ret = idpf_parse_devarg_id(dev->data->name);
> +	if (ret < 0) {
> +		PMD_INIT_LOG(ERR, "Failed to parse devarg id.");
> +		return -1;

A negative errno must be returned here, since it is finally used as the
rte_eth_dev_create() return value, which is expected to be a negative errno.
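For example (just a sketch; it assumes idpf_parse_devarg_id() is also changed
to return a negative errno on failure, otherwise a fixed value such as
-EINVAL would do):

	ret = idpf_parse_devarg_id(dev->data->name);
	if (ret < 0) {
		PMD_INIT_LOG(ERR, "Failed to parse devarg id.");
		/* Propagate a negative errno instead of a bare -1. */
		return ret;
	}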
> +	}
> +	vport->devarg_id = ret;
> +
> +	vport->dev_data = dev->data;
> +
> +	adapter->vports[idx] = vport;
> +
> +	return 0;
> +}
> +
> +static int
> +idpf_dev_configure(struct rte_eth_dev *dev)
> +{
> +	struct rte_eth_conf *conf = &dev->data->dev_conf;
> +
> +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> +		return -1;

The return value is used as the rte_eth_dev_configure() return value, which
should be a negative errno, not -1. Please double-check all other similar
cases.

> +	}
> +
> +	if ((dev->data->nb_rx_queues == 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) ||
> +	    (dev->data->nb_rx_queues > 1 && conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)) {

Right now (just after this patch) you don't support RSS, since you don't
handle the corresponding configuration items. So nothing except RX_NONE is
supported. RX_RSS should be added when you really support it (later).

> +		PMD_INIT_LOG(ERR, "Multi-queue packet distribution mode %d is not supported",
> +			     conf->rxmode.mq_mode);
> +		return -1;
> +	}

[snip]

> +struct idpf_adapter *
> +idpf_find_adapter(struct rte_pci_device *pci_dev)
> +{
> +	struct idpf_adapter *adapter;
> +
> +	rte_spinlock_lock(&idpf_adapter_lock);
> +	TAILQ_FOREACH(adapter, &idpf_adapter_list, next) {
> +		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
> +			rte_spinlock_unlock(&idpf_adapter_lock);
> +			return adapter;

A pointer to an element of the list protected by the spin lock is returned
here.

> +		}
> +	}
> +	rte_spinlock_unlock(&idpf_adapter_lock);
> +
> +	return NULL;
> +}

[snip]

> +static int
> +idpf_pci_remove(struct rte_pci_device *pci_dev)
> +{
> +	struct idpf_adapter *adapter = idpf_find_adapter(pci_dev);

My question about locking still stands. I'm not sure that I understand the
purpose of the locking here, or whether it can be omitted. Anyway, returning
a pointer to a list element when the list is protected by a lock looks
suspicious.
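One way to avoid that (just an untested sketch, the
idpf_find_and_remove_adapter() name is only for illustration) would be to
unlink the adapter inside the same critical section that finds it, so the
caller gets a pointer that is no longer reachable from the shared list:

	static struct idpf_adapter *
	idpf_find_and_remove_adapter(struct rte_pci_device *pci_dev)
	{
		struct idpf_adapter *adapter;

		rte_spinlock_lock(&idpf_adapter_lock);
		TAILQ_FOREACH(adapter, &idpf_adapter_list, next) {
			if (strncmp(adapter->name, pci_dev->device.name,
				    PCI_PRI_STR_SIZE) == 0) {
				/* Unlink while still holding the lock. */
				TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
				break;
			}
		}
		rte_spinlock_unlock(&idpf_adapter_lock);

		/* NULL if no matching adapter was found. */
		return adapter;
	}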
> +	uint16_t port_id;
> +
> +	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF through rte_device */
> +	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
> +		rte_eth_dev_close(port_id);
> +	}
> +
> +	rte_spinlock_lock(&idpf_adapter_lock);
> +	TAILQ_REMOVE(&idpf_adapter_list, adapter, next);
> +	rte_spinlock_unlock(&idpf_adapter_lock);
> +	idpf_adapter_rel(adapter);
> +	rte_free(adapter);
> +
> +	return 0;
> +}

[snip]

> +int
> +idpf_vc_get_caps(struct idpf_adapter *adapter)
> +{
> +	struct virtchnl2_get_capabilities caps_msg;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
> +	caps_msg.csum_caps =
> +		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
> +		VIRTCHNL2_CAP_TX_CSUM_GENERIC |
> +		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
> +		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> +
> +	caps_msg.seg_caps =
> +		VIRTCHNL2_CAP_SEG_IPV4_TCP |
> +		VIRTCHNL2_CAP_SEG_IPV4_UDP |
> +		VIRTCHNL2_CAP_SEG_IPV4_SCTP |
> +		VIRTCHNL2_CAP_SEG_IPV6_TCP |
> +		VIRTCHNL2_CAP_SEG_IPV6_UDP |
> +		VIRTCHNL2_CAP_SEG_IPV6_SCTP |
> +		VIRTCHNL2_CAP_SEG_GENERIC;
> +
> +	caps_msg.rss_caps =
> +		VIRTCHNL2_CAP_RSS_IPV4_TCP |
> +		VIRTCHNL2_CAP_RSS_IPV4_UDP |
> +		VIRTCHNL2_CAP_RSS_IPV4_SCTP |
> +		VIRTCHNL2_CAP_RSS_IPV4_OTHER |
> +		VIRTCHNL2_CAP_RSS_IPV6_TCP |
> +		VIRTCHNL2_CAP_RSS_IPV6_UDP |
> +		VIRTCHNL2_CAP_RSS_IPV6_SCTP |
> +		VIRTCHNL2_CAP_RSS_IPV6_OTHER |
> +		VIRTCHNL2_CAP_RSS_IPV4_AH |
> +		VIRTCHNL2_CAP_RSS_IPV4_ESP |
> +		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP |
> +		VIRTCHNL2_CAP_RSS_IPV6_AH |
> +		VIRTCHNL2_CAP_RSS_IPV6_ESP |
> +		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> +
> +	caps_msg.other_caps =
> +		VIRTCHNL2_CAP_SPLITQ_QSCHED |
> +		VIRTCHNL2_CAP_CRC |
> +		VIRTCHNL2_CAP_WB_ON_ITR |
> +		VIRTCHNL2_CAP_PROMISC |
> +		VIRTCHNL2_CAP_LINK_SPEED |
> +		VIRTCHNL2_CAP_VLAN;
> +

My question asked in v11 still stands since I have not received an answer
yet. Basically it looks like the corresponding caps should be added when the
corresponding offload support is added later. If not, I'd like to understand
why.

> +	args.ops = VIRTCHNL2_OP_GET_CAPS;
> +	args.in_args = (uint8_t *)&caps_msg;
> +	args.in_args_size = sizeof(caps_msg);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_execute_vc_cmd(adapter, &args);
> +	if (err != 0) {
> +		PMD_DRV_LOG(ERR,
> +			    "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
> +		return err;
> +	}
> +
> +	rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
> +
> +	return 0;
> +}

[snip]

> +int
> +idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_vport vc_vport;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	vc_vport.vport_id = vport->vport_id;
> +	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
> +			    VIRTCHNL2_OP_DISABLE_VPORT;
> +	args.in_args = (u8 *)&vc_vport;

uint8_t should be used here, as you do above in idpf_vc_destroy_vport() and
in many other cases.
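I.e. simply (one-line sketch, same cast as elsewhere in the file):

	args.in_args = (uint8_t *)&vc_vport;

[snip]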