From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Li, Xiaoyun"
To: "Wu, Jingjing", "Wiles, Keith", "Maslekar, Omkar", "Liang, Cunming"
CC: "dev@dpdk.org"
Thread-Topic: [PATCH v4 1/4] raw/ntb: setup ntb queue
Date: Mon, 23 Sep 2019 03:28:47 +0000
References: <20190906075402.114177-1-xiaoyun.li@intel.com>
 <20190909032730.29718-1-xiaoyun.li@intel.com>
 <20190909032730.29718-2-xiaoyun.li@intel.com>
 <9BB6961774997848B5B42BEC655768F81151CAF1@SHSMSX103.ccr.corp.intel.com>
In-Reply-To:
 <9BB6961774997848B5B42BEC655768F81151CAF1@SHSMSX103.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/4] raw/ntb: setup ntb queue
List-Id: DPDK patches and discussions

Hi

> -----Original Message-----
> From: Wu, Jingjing
> Sent: Monday, September 23, 2019 10:50
> To: Li, Xiaoyun; Wiles, Keith; Maslekar, Omkar; Liang, Cunming
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v4 1/4] raw/ntb: setup ntb queue
>
> <...>
> > +static void
> > +ntb_rxq_release(struct ntb_rx_queue *rxq)
> > +{
> > +	if (!rxq) {
> > +		NTB_LOG(ERR, "Pointer to rxq is NULL");
> > +		return;
> > +	}
> > +
> > +	ntb_rxq_release_mbufs(rxq);
> > +
> > +	rte_free(rxq->sw_ring);
> > +	rte_free(rxq);
> It's better to free rxq outside of this function, as the param "rxq"
> cannot be set to NULL inside this function.

OK.

> > +}
> > +
> > +static int
> > +ntb_rxq_setup(struct rte_rawdev *dev,
> > +	      uint16_t qp_id,
> > +	      rte_rawdev_obj_t queue_conf)
> > +{
> > +	struct ntb_queue_conf *rxq_conf = queue_conf;
> > +	struct ntb_hw *hw = dev->dev_private;
> > +	struct ntb_rx_queue *rxq;
> > +
> > +	/* Allocate the rx queue data structure */
> > +	rxq = rte_zmalloc_socket("ntb rx queue",
> > +				 sizeof(struct ntb_rx_queue),
> > +				 RTE_CACHE_LINE_SIZE,
> > +				 dev->socket_id);
> > +	if (!rxq) {
> > +		NTB_LOG(ERR, "Failed to allocate memory for "
> > +			"rx queue data structure.");
> > +		return -ENOMEM;
> Need to free rxq here.

It only allocates memory here, and this error path means the allocation
failed, so rxq == NULL. There is nothing to free.
> <...>
>
> > +static void
> > +ntb_txq_release(struct ntb_tx_queue *txq)
> > +{
> > +	if (!txq) {
> > +		NTB_LOG(ERR, "Pointer to txq is NULL");
> > +		return;
> > +	}
> > +
> > +	ntb_txq_release_mbufs(txq);
> > +
> > +	rte_free(txq->sw_ring);
> > +	rte_free(txq);
> The same as above "ntb_rxq_release".

OK.

> <...>
>
> > +static int
> > +ntb_queue_setup(struct rte_rawdev *dev,
> > +		uint16_t queue_id,
> > +		rte_rawdev_obj_t queue_conf)
> > +{
> > +	struct ntb_hw *hw = dev->dev_private;
> > +	int ret;
> > +
> > +	if (queue_id > hw->queue_pairs)
> Should be ">=" ?

Yes.

> > +		return -EINVAL;
> > +
> > +	ret = ntb_txq_setup(dev, queue_id, queue_conf);
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	ret = ntb_rxq_setup(dev, queue_id, queue_conf);
> > +
> > +	return ret;
> > +}
> > +
> >  static int
> > -ntb_queue_release(struct rte_rawdev *dev __rte_unused,
> > -		  uint16_t queue_id __rte_unused)
> > +ntb_queue_release(struct rte_rawdev *dev, uint16_t queue_id)
> >  {
> > +	struct ntb_hw *hw = dev->dev_private;
> > +	struct ntb_tx_queue *txq;
> > +	struct ntb_rx_queue *rxq;
> > +
> > +	if (queue_id > hw->queue_pairs)
> Should be ">=" ?
>
> > +		return -EINVAL;
> > +
> > +	txq = hw->tx_queues[queue_id];
> > +	rxq = hw->rx_queues[queue_id];
> > +	ntb_txq_release(txq);
> > +	ntb_rxq_release(rxq);
> > +
> >  	return 0;
> >  }
> >
> > @@ -234,6 +470,77 @@ ntb_queue_count(struct rte_rawdev *dev)
> >  	return hw->queue_pairs;
> >  }
> >
> > +static int
> > +ntb_queue_init(struct rte_rawdev *dev, uint16_t qp_id)
> > +{
> > +	struct ntb_hw *hw = dev->dev_private;
> > +	struct ntb_rx_queue *rxq = hw->rx_queues[qp_id];
> > +	struct ntb_tx_queue *txq = hw->tx_queues[qp_id];
> > +	volatile struct ntb_header *local_hdr;
> > +	struct ntb_header *remote_hdr;
> > +	uint16_t q_size = hw->queue_size;
> > +	uint32_t hdr_offset;
> > +	void *bar_addr;
> > +	uint16_t i;
> > +
> > +	if (hw->ntb_ops->get_peer_mw_addr == NULL) {
> > +		NTB_LOG(ERR, "Failed to get mapped peer addr.");
> Would it be better to log it as "XX ops is not supported", to keep it
> consistent with the others?

OK.

> > +		return -EINVAL;
> > +	}
> > +
> > +	/* Put queue info into the start of shared memory. */
> > +	hdr_offset = hw->hdr_size_per_queue * qp_id;
> > +	local_hdr = (volatile struct ntb_header *)
> > +		    ((size_t)hw->mz[0]->addr + hdr_offset);
> > +	bar_addr = (*hw->ntb_ops->get_peer_mw_addr)(dev, 0);
> > +	if (bar_addr == NULL)
> > +		return -EINVAL;
> > +	remote_hdr = (struct ntb_header *)
> > +		     ((size_t)bar_addr + hdr_offset);
> > +
> > +	/* rxq init. */
> > +	rxq->rx_desc_ring = (struct ntb_desc *)
> > +			    (&remote_hdr->desc_ring);
> > +	rxq->rx_used_ring = (volatile struct ntb_used *)
> > +			    (&local_hdr->desc_ring[q_size]);
> > +	rxq->avail_cnt = &remote_hdr->avail_cnt;
> > +	rxq->used_cnt = &local_hdr->used_cnt;
> > +
> > +	for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
> > +		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mpool);
> > +		if (unlikely(!mbuf)) {
> > +			NTB_LOG(ERR, "Failed to allocate mbuf for RX");
> Need to release the mbufs allocated here, or in "ntb_dev_start".

OK.
>
> <...>
>
> > +	hw->hdr_size_per_queue = RTE_ALIGN(sizeof(struct ntb_header) +
> > +			hw->queue_size * sizeof(struct ntb_desc) +
> > +			hw->queue_size * sizeof(struct ntb_used),
> > +			RTE_CACHE_LINE_SIZE);
> hw->hdr_size_per_queue is internal information, why put the assignment in
> ntb_dev_info_get?

Because the calculation needs the app to pass params (queue size and queue
number), and the app needs the result to populate the mempool and then
configure the driver. There is no other place where the calculation can be
done.

> > +	info->ntb_hdr_size = hw->hdr_size_per_queue * hw->queue_pairs;
> >  }
> >
> >  static int
> > -ntb_dev_configure(const struct rte_rawdev *dev __rte_unused,
> > -		  rte_rawdev_obj_t config __rte_unused)
> > +ntb_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t
> > +config)
> >  {
> > +	struct ntb_dev_config *conf = config;
> > +	struct ntb_hw *hw = dev->dev_private;
> > +	int ret;
> > +
> > +	hw->queue_pairs = conf->num_queues;
> > +	hw->queue_size = conf->queue_size;
> > +	hw->used_mw_num = conf->mz_num;
> > +	hw->mz = conf->mz_list;
> > +	hw->rx_queues = rte_zmalloc("ntb_rx_queues",
> > +			sizeof(struct ntb_rx_queue *) * hw->queue_pairs, 0);
> > +	hw->tx_queues = rte_zmalloc("ntb_tx_queues",
> > +			sizeof(struct ntb_tx_queue *) * hw->queue_pairs, 0);
> > +
> > +	/* Start handshake with the peer. */
> > +	ret = ntb_handshake_work(dev);
> > +	if (ret < 0)
> Free?

OK.

> > +		return ret;
> > +
> >  	return 0;
> >  }
> >
> > @@ -337,21 +637,52 @@ static int
> >  ntb_dev_start(struct rte_rawdev *dev)
> >  {
> >  	struct ntb_hw *hw = dev->dev_private;
> > -	int ret, i;
> > +	uint32_t peer_base_l, peer_val;
> > +	uint64_t peer_base_h;
> > +	uint32_t i;
> > +	int ret;
> >
> > -	/* TODO: init queues and start queues. */
> > +	if (!hw->link_status || !hw->peer_dev_up)
> > +		return -EINVAL;
> >
> > -	/* Map memory of bar_size to remote. */
> > -	hw->mz = rte_zmalloc("struct rte_memzone *",
> > -			hw->mw_cnt * sizeof(struct rte_memzone *), 0);
> > -	for (i = 0; i < hw->mw_cnt; i++) {
> > -		ret = ntb_set_mw(dev, i, hw->mw_size[i]);
> > +	for (i = 0; i < hw->queue_pairs; i++) {
> > +		ret = ntb_queue_init(dev, i);
> >  		if (ret) {
> > -			NTB_LOG(ERR, "Fail to set mw.");
> > +			NTB_LOG(ERR, "Failed to init queue.");
> Free when error.

OK.

> <...>
>
> > +struct ntb_used {
> > +	uint16_t len;      /* buffer length */
> > +#define NTB_FLAG_EOP    1  /* end of packet */
> Better to
> > +	uint16_t flags;    /* flags */
> > +};
> > +
> > +struct ntb_rx_entry {
> > +	struct rte_mbuf *mbuf;
> > +};
> > +
> > +struct ntb_rx_queue {
> > +	struct ntb_desc *rx_desc_ring;
> > +	volatile struct ntb_used *rx_used_ring;
> > +	uint16_t *avail_cnt;
> > +	volatile uint16_t *used_cnt;
> > +	uint16_t last_avail;
> > +	uint16_t last_used;
> > +	uint16_t nb_rx_desc;
> > +
> > +	uint16_t rx_free_thresh;
> > +
> > +	struct rte_mempool *mpool; /**< mempool for mbuf allocation */
> General comment: comments in an internal header don't need to be wrapped
> in "/**< */"; keeping one consistent style within a file would be nice.

OK.