From: "Wu, Jingjing"
To: "Li, Xiaoyun", "Wiles, Keith", "Maslekar, Omkar", "Liang, Cunming"
CC: "dev@dpdk.org"
Date: Mon, 23 Sep 2019 02:50:07 +0000
Message-ID: <9BB6961774997848B5B42BEC655768F81151CAF1@SHSMSX103.ccr.corp.intel.com>
References: <20190906075402.114177-1-xiaoyun.li@intel.com>
 <20190909032730.29718-1-xiaoyun.li@intel.com>
 <20190909032730.29718-2-xiaoyun.li@intel.com>
In-Reply-To: <20190909032730.29718-2-xiaoyun.li@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/4] raw/ntb: setup ntb queue

<...>

> +static void
> +ntb_rxq_release(struct ntb_rx_queue *rxq)
> +{
> +	if (!rxq) {
> +		NTB_LOG(ERR, "Pointer to rxq is NULL");
> +		return;
> +	}
> +
> +	ntb_rxq_release_mbufs(rxq);
> +
> +	rte_free(rxq->sw_ring);
> +	rte_free(rxq);

It's better to free rxq outside of this function, because the caller's "rxq" pointer cannot be set to NULL from inside this function.

> +}
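Something along these lines is what I mean -- only a rough sketch, not compiled; it reuses the helpers and the hw->rx_queues[] array from this patch, and the caller shown is just an illustration:

	static void
	ntb_rxq_release(struct ntb_rx_queue *rxq)
	{
		if (!rxq) {
			NTB_LOG(ERR, "Pointer to rxq is NULL");
			return;
		}

		/* Only release what the queue owns internally. */
		ntb_rxq_release_mbufs(rxq);
		rte_free(rxq->sw_ring);
		/* Do not free rxq itself here; the caller owns the pointer. */
	}

	/* Caller side, e.g. in ntb_queue_release(), where the pointer is stored: */
		ntb_rxq_release(hw->rx_queues[queue_id]);
		rte_free(hw->rx_queues[queue_id]);
		hw->rx_queues[queue_id] = NULL;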
NULL"); > + return; > + } > + > + ntb_rxq_release_mbufs(rxq); > + > + rte_free(rxq->sw_ring); > + rte_free(rxq); It' better to free rxq out of this function, as the point of param "rxq" ca= nnot be set to NULL in this func. > +} > + > +static int > +ntb_rxq_setup(struct rte_rawdev *dev, > + uint16_t qp_id, > + rte_rawdev_obj_t queue_conf) > +{ > + struct ntb_queue_conf *rxq_conf =3D queue_conf; > + struct ntb_hw *hw =3D dev->dev_private; > + struct ntb_rx_queue *rxq; > + > + /* Allocate the rx queue data structure */ > + rxq =3D rte_zmalloc_socket("ntb rx queue", > + sizeof(struct ntb_rx_queue), > + RTE_CACHE_LINE_SIZE, > + dev->socket_id); > + if (!rxq) { > + NTB_LOG(ERR, "Failed to allocate memory for " > + "rx queue data structure."); > + return -ENOMEM; Need to free rxq here. <...> > +static void > +ntb_txq_release(struct ntb_tx_queue *txq) > { > + if (!txq) { > + NTB_LOG(ERR, "Pointer to txq is NULL"); > + return; > + } > + > + ntb_txq_release_mbufs(txq); > + > + rte_free(txq->sw_ring); > + rte_free(txq); The same as above "ntb_rxq_release". <...> > +static int > +ntb_queue_setup(struct rte_rawdev *dev, > + uint16_t queue_id, > + rte_rawdev_obj_t queue_conf) > +{ > + struct ntb_hw *hw =3D dev->dev_private; > + int ret; > + > + if (queue_id > hw->queue_pairs) Should be ">=3D" ? > + return -EINVAL; > + > + ret =3D ntb_txq_setup(dev, queue_id, queue_conf); > + if (ret < 0) > + return ret; > + > + ret =3D ntb_rxq_setup(dev, queue_id, queue_conf); > + > + return ret; > +} > + > static int > -ntb_queue_release(struct rte_rawdev *dev __rte_unused, > - uint16_t queue_id __rte_unused) > +ntb_queue_release(struct rte_rawdev *dev, uint16_t queue_id) > { > + struct ntb_hw *hw =3D dev->dev_private; > + struct ntb_tx_queue *txq; > + struct ntb_rx_queue *rxq; > + > + if (queue_id > hw->queue_pairs) Should be ">=3D" ? > + return -EINVAL; > + > + txq =3D hw->tx_queues[queue_id]; > + rxq =3D hw->rx_queues[queue_id]; > + ntb_txq_release(txq); > + ntb_rxq_release(rxq); > + > return 0; > } >=20 > @@ -234,6 +470,77 @@ ntb_queue_count(struct rte_rawdev *dev) > return hw->queue_pairs; > } >=20 > +static int > +ntb_queue_init(struct rte_rawdev *dev, uint16_t qp_id) > +{ > + struct ntb_hw *hw =3D dev->dev_private; > + struct ntb_rx_queue *rxq =3D hw->rx_queues[qp_id]; > + struct ntb_tx_queue *txq =3D hw->tx_queues[qp_id]; > + volatile struct ntb_header *local_hdr; > + struct ntb_header *remote_hdr; > + uint16_t q_size =3D hw->queue_size; > + uint32_t hdr_offset; > + void *bar_addr; > + uint16_t i; > + > + if (hw->ntb_ops->get_peer_mw_addr =3D=3D NULL) { > + NTB_LOG(ERR, "Failed to get mapped peer addr."); Would it be better to log as "XX ops is not supported" to keep consistent a= s others? > + return -EINVAL; > + } > + > + /* Put queue info into the start of shared memory. */ > + hdr_offset =3D hw->hdr_size_per_queue * qp_id; > + local_hdr =3D (volatile struct ntb_header *) > + ((size_t)hw->mz[0]->addr + hdr_offset); > + bar_addr =3D (*hw->ntb_ops->get_peer_mw_addr)(dev, 0); > + if (bar_addr =3D=3D NULL) > + return -EINVAL; > + remote_hdr =3D (struct ntb_header *) > + ((size_t)bar_addr + hdr_offset); > + > + /* rxq init. 
<...>

> +	hw->hdr_size_per_queue = RTE_ALIGN(sizeof(struct ntb_header) +
> +				hw->queue_size * sizeof(struct ntb_desc) +
> +				hw->queue_size * sizeof(struct ntb_used),
> +				RTE_CACHE_LINE_SIZE);

hw->hdr_size_per_queue is internal information; why put the assignment in ntb_dev_info_get?

> +	info->ntb_hdr_size = hw->hdr_size_per_queue * hw->queue_pairs;
>  }
>
>  static int
> -ntb_dev_configure(const struct rte_rawdev *dev __rte_unused,
> -		  rte_rawdev_obj_t config __rte_unused)
> +ntb_dev_configure(const struct rte_rawdev *dev, rte_rawdev_obj_t config)
>  {
> +	struct ntb_dev_config *conf = config;
> +	struct ntb_hw *hw = dev->dev_private;
> +	int ret;
> +
> +	hw->queue_pairs = conf->num_queues;
> +	hw->queue_size = conf->queue_size;
> +	hw->used_mw_num = conf->mz_num;
> +	hw->mz = conf->mz_list;
> +	hw->rx_queues = rte_zmalloc("ntb_rx_queues",
> +			sizeof(struct ntb_rx_queue *) * hw->queue_pairs, 0);
> +	hw->tx_queues = rte_zmalloc("ntb_tx_queues",
> +			sizeof(struct ntb_tx_queue *) * hw->queue_pairs, 0);
> +
> +	/* Start handshake with the peer. */
> +	ret = ntb_handshake_work(dev);
> +	if (ret < 0)

Free the just-allocated queue arrays on error? (A sketch is at the end of this mail.)

> +		return ret;
> +
>  	return 0;
>  }
>
> @@ -337,21 +637,52 @@ static int
>  ntb_dev_start(struct rte_rawdev *dev)
>  {
>  	struct ntb_hw *hw = dev->dev_private;
> -	int ret, i;
> +	uint32_t peer_base_l, peer_val;
> +	uint64_t peer_base_h;
> +	uint32_t i;
> +	int ret;
>
> -	/* TODO: init queues and start queues. */
> +	if (!hw->link_status || !hw->peer_dev_up)
> +		return -EINVAL;
>
> -	/* Map memory of bar_size to remote. */
> -	hw->mz = rte_zmalloc("struct rte_memzone *",
> -			     hw->mw_cnt * sizeof(struct rte_memzone *), 0);
> -	for (i = 0; i < hw->mw_cnt; i++) {
> -		ret = ntb_set_mw(dev, i, hw->mw_size[i]);
> +	for (i = 0; i < hw->queue_pairs; i++) {
> +		ret = ntb_queue_init(dev, i);
>  		if (ret) {
> -			NTB_LOG(ERR, "Fail to set mw.");
> +			NTB_LOG(ERR, "Failed to init queue.");

Free the resources already set up when this fails. (See the sketch at the end of this mail.)

<...>

> +struct ntb_used {
> +	uint16_t len;      /* buffer length */
> +#define NTB_FLAG_EOP    1 /* end of packet */

Better to

> +	uint16_t flags;    /* flags */
> +};
> +
> +struct ntb_rx_entry {
> +	struct rte_mbuf *mbuf;
> +};
> +
> +struct ntb_rx_queue {
> +	struct ntb_desc *rx_desc_ring;
> +	volatile struct ntb_used *rx_used_ring;
> +	uint16_t *avail_cnt;
> +	volatile uint16_t *used_cnt;
> +	uint16_t last_avail;
> +	uint16_t last_used;
> +	uint16_t nb_rx_desc;
> +
> +	uint16_t rx_free_thresh;
> +
> +	struct rte_mempool *mpool; /**< mempool for mbuf allocation */

General comment: comments in an internal header don't need to be wrapped in "/**< */"; keeping one comment style consistent within the file would be nice.
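Coming back to the two "free on error" comments above (ntb_dev_configure() and ntb_dev_start()), this is the kind of cleanup I have in mind -- a sketch only, not compiled, and the exact granularity is up to you:

	/* In ntb_dev_configure(): undo the allocations if the handshake fails. */
		ret = ntb_handshake_work(dev);
		if (ret < 0) {
			rte_free(hw->rx_queues);
			rte_free(hw->tx_queues);
			hw->rx_queues = NULL;
			hw->tx_queues = NULL;
			return ret;
		}

	/* In ntb_dev_start(): release what earlier iterations already set up. */
		for (i = 0; i < hw->queue_pairs; i++) {
			ret = ntb_queue_init(dev, i);
			if (ret) {
				NTB_LOG(ERR, "Failed to init queue.");
				while (i-- > 0)
					ntb_rxq_release_mbufs(hw->rx_queues[i]);
				return ret;
			}
		}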