From: Simei Su <simei.su@intel.com>
To: qi.z.zhang@intel.com, qiming.yang@intel.com
Cc: dev@dpdk.org, wenjun1.wu@intel.com, Simei Su <simei.su@intel.com>
Subject: [PATCH v2 2/3] net/iavf: enable Rx timestamp on Flex Descriptor
Date: Sun, 24 Apr 2022 15:08:44 +0800
Message-Id: <20220424070845.87096-3-simei.su@intel.com>
In-Reply-To: <20220424070845.87096-1-simei.su@intel.com>
References: <20220408021307.272746-1-simei.su@intel.com>
 <20220424070845.87096-1-simei.su@intel.com>
List-Id: DPDK patches and discussions <dev@dpdk.org>

Dump the Rx timestamp value into a dynamic mbuf field from the Flex Rx
descriptor. The feature is turned on by the "enable-rx-timestamp" dev
config option and is currently supported only on the scalar Rx path.
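For context, a minimal application-side sketch of how the new offload is
consumed is shown below (illustration only, not part of this patch;
port/queue setup is omitted and the port/queue numbers are placeholders).
The port requests RTE_ETH_RX_OFFLOAD_TIMESTAMP only when the PMD advertises
it, registers the same dynamic mbuf field/flag pair that iavf_init_rxq()
registers, and reads the value after rte_eth_rx_burst():

/*
 * Illustration only (not part of the patch): application-side use of the
 * Rx timestamp offload added here. Queue setup and error checks omitted.
 */
#include <stdio.h>
#include <inttypes.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int ts_dynfield_offset = -1;
static uint64_t ts_dynflag;

static void
rx_timestamp_example(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = { 0 };
	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	/* Request the offload only if the PMD reports it (iavf advertises it
	 * when the PF grants VIRTCHNL_VF_CAP_PTP). */
	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
	rte_eth_dev_configure(port_id, 1, 1, &port_conf);

	/* Same registration the driver performs in iavf_init_rxq(). */
	rte_mbuf_dyn_rx_timestamp_register(&ts_dynfield_offset, &ts_dynflag);

	/* ... rx/tx queue setup and rte_eth_dev_start() omitted ... */

	nb = rte_eth_rx_burst(port_id, 0, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		if (pkts[i]->ol_flags & ts_dynflag) {
			uint64_t ts = *RTE_MBUF_DYNFIELD(pkts[i],
					ts_dynfield_offset,
					rte_mbuf_timestamp_t *);
			printf("pkt %u Rx timestamp %" PRIu64 " ns\n", i, ts);
		}
		rte_pktmbuf_free(pkts[i]);
	}
}

Design note: the hw_register_set flag added to the Rx queue makes each burst
read the PHC time at most once via iavf_get_phc_time(), and
iavf_tstamp_convert_32b_64b() then extends the 32-bit descriptor timestamp
against that cached value, so no per-packet PF round trip is needed.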
Signed-off-by: Simei Su <simei.su@intel.com>
---
 doc/guides/nics/features/iavf.ini       |  1 +
 doc/guides/rel_notes/release_22_07.rst  |  1 +
 drivers/net/iavf/iavf.h                 |  5 ++
 drivers/net/iavf/iavf_ethdev.c          | 26 +++++++++++
 drivers/net/iavf/iavf_rxtx.c            | 58 +++++++++++++++++++++++
 drivers/net/iavf/iavf_rxtx.h            | 22 +++++++++
 drivers/net/iavf/iavf_rxtx_vec_common.h |  3 ++
 drivers/net/iavf/iavf_vchnl.c           | 83 ++++++++++++++++++++++++++++-----
 8 files changed, 188 insertions(+), 11 deletions(-)

diff --git a/doc/guides/nics/features/iavf.ini b/doc/guides/nics/features/iavf.ini
index 01f5142..5a0d9d8 100644
--- a/doc/guides/nics/features/iavf.ini
+++ b/doc/guides/nics/features/iavf.ini
@@ -24,6 +24,7 @@ CRC offload          = Y
 VLAN offload         = Y
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Packet type parsing  = Y
 Rx descriptor status = Y
 Tx descriptor status = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index f1b4057..567f23d 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -59,6 +59,7 @@ New Features
   * Added Tx QoS queue rate limitation support.
   * Added quanta size configuration support.
+  * Added ``DEV_RX_OFFLOAD_TIMESTAMP`` support.
 
 
 Removed Items
 -------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index c0a4a47..3255c93 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -268,6 +268,8 @@ struct iavf_info {
 	struct iavf_tm_conf tm_conf;
 
 	struct rte_eth_dev *eth_dev;
+
+	uint32_t ptp_caps;
 };
 
 #define IAVF_MAX_PKT_TYPE 1024
@@ -312,6 +314,7 @@ struct iavf_adapter {
 	bool stopped;
 	uint16_t fdir_ref_cnt;
 	struct iavf_devargs devargs;
+	uint64_t phc_time;
 };
 
 /* IAVF_DEV_PRIVATE_TO */
@@ -476,4 +479,6 @@ int iavf_ipsec_crypto_request(struct iavf_adapter *adapter,
 		uint8_t *msg, size_t msg_len,
 		uint8_t *resp_msg, size_t resp_msg_len);
 extern const struct rte_tm_ops iavf_tm_ops;
+int iavf_get_ptp_cap(struct iavf_adapter *adapter);
+int iavf_get_phc_time(struct iavf_adapter *adapter);
 #endif /* _IAVF_ETHDEV_H_ */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 7d093bd..89e4240 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -36,6 +36,9 @@
 #define IAVF_PROTO_XTR_ARG         "proto_xtr"
 #define IAVF_QUANTA_SIZE_ARG       "quanta_size"
 
+uint64_t iavf_timestamp_dynflag;
+int iavf_timestamp_dynfield_offset = -1;
+
 static const char * const iavf_valid_args[] = {
 	IAVF_PROTO_XTR_ARG,
 	IAVF_QUANTA_SIZE_ARG,
@@ -687,6 +690,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	struct rte_eth_dev_data *dev_data = dev->data;
 	uint16_t buf_size, max_pkt_len;
 	uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
+	enum iavf_status err;
 
 	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
 
@@ -705,6 +709,18 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 		return -EINVAL;
 	}
 
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+		/* Register mbuf field and flag for Rx timestamp */
+		err = rte_mbuf_dyn_rx_timestamp_register(
+				&iavf_timestamp_dynfield_offset,
+				&iavf_timestamp_dynflag);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "Cannot register mbuf field/flag for timestamp");
+			return -EINVAL;
+		}
+	}
+
 	rxq->max_pkt_len = max_pkt_len;
 	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size) {
@@ -947,6 +963,13 @@ iavf_dev_start(struct rte_eth_dev *dev)
 		return -1;
 	}
 
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP) {
+		if (iavf_get_ptp_cap(adapter)) {
+			PMD_INIT_LOG(ERR, "Failed to get ptp capability");
+			return -1;
+		}
+	}
+
 	if (iavf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
 	}
@@ -1092,6 +1115,9 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP)
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+
 	if (iavf_ipsec_crypto_supported(adapter)) {
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 	}
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index c21f818..2d3bafd 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -1422,6 +1422,11 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
 	uint64_t dma_addr;
 	uint64_t pkt_flags;
 	const uint32_t *ptype_tbl;
+	struct iavf_adapter *ad = rxq->vsi->adapter;
+	uint64_t ts_ns;
+
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		rxq->hw_register_set = 1;
 
 	nb_rx = 0;
 	nb_hold = 0;
@@ -1491,6 +1496,21 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
 				&rxq->stats.ipsec_crypto);
 		rxd_to_pkt_fields_ops[rxq->rxdid](rxq, rxm, &rxd);
 		pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
+
+		if (iavf_timestamp_dynflag > 0) {
+			if (rxq->hw_register_set)
+				iavf_get_phc_time(ad);
+
+			rxq->hw_register_set = 0;
+			ts_ns = iavf_tstamp_convert_32b_64b(ad->phc_time,
+				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
+
+			*RTE_MBUF_DYNFIELD(rxm,
+				iavf_timestamp_dynfield_offset,
+				rte_mbuf_timestamp_t *) = ts_ns;
+			rxm->ol_flags |= iavf_timestamp_dynflag;
+		}
+
 		rxm->ol_flags |= pkt_flags;
 
 		rx_pkts[nb_rx++] = rxm;
@@ -1519,11 +1539,16 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint16_t rx_stat_err0;
 	uint64_t dma_addr;
 	uint64_t pkt_flags;
+	struct iavf_adapter *ad = rxq->vsi->adapter;
+	uint64_t ts_ns;
 
 	volatile union iavf_rx_desc *rx_ring = rxq->rx_ring;
 	volatile union iavf_rx_flex_desc *rxdp;
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
 
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		rxq->hw_register_set = 1;
+
 	while (nb_rx < nb_pkts) {
 		rxdp = (volatile union iavf_rx_flex_desc *)&rx_ring[rx_id];
 		rx_stat_err0 = rte_le_to_cpu_16(rxdp->wb.status_error0);
@@ -1636,6 +1661,20 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxd_to_pkt_fields_ops[rxq->rxdid](rxq, first_seg, &rxd);
 		pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
 
+		if (iavf_timestamp_dynflag > 0) {
+			if (rxq->hw_register_set)
+				iavf_get_phc_time(ad);
+
+			rxq->hw_register_set = 0;
+			ts_ns = iavf_tstamp_convert_32b_64b(ad->phc_time,
+				rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high));
+
+			*RTE_MBUF_DYNFIELD(first_seg,
+				iavf_timestamp_dynfield_offset,
+				rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= iavf_timestamp_dynflag;
+		}
+
 		first_seg->ol_flags |= pkt_flags;
 
 		/* Prefetch data of first segment, if configured to do so.
 		 */
@@ -1831,6 +1870,8 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq,
 	int32_t nb_staged = 0;
 	uint64_t pkt_flags;
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct iavf_adapter *ad = rxq->vsi->adapter;
+	uint64_t ts_ns;
 
 	rxdp = (volatile union iavf_rx_flex_desc *)&rxq->rx_ring[rxq->rx_tail];
 	rxep = &rxq->sw_ring[rxq->rx_tail];
@@ -1841,6 +1882,9 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq,
 	if (!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S)))
 		return 0;
 
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		rxq->hw_register_set = 1;
+
 	/* Scan LOOK_AHEAD descriptors at a time to determine which
 	 * descriptors reference packets that are ready to be received.
 	 */
@@ -1897,6 +1941,20 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq,
 			stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
 			pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
 
+			if (iavf_timestamp_dynflag > 0) {
+				if (rxq->hw_register_set)
+					iavf_get_phc_time(ad);
+
+				rxq->hw_register_set = 0;
+				ts_ns = iavf_tstamp_convert_32b_64b(ad->phc_time,
+					rte_le_to_cpu_32(rxdp[j].wb.flex_ts.ts_high));
+
+				*RTE_MBUF_DYNFIELD(mb,
+					iavf_timestamp_dynfield_offset,
+					rte_mbuf_timestamp_t *) = ts_ns;
+				mb->ol_flags |= iavf_timestamp_dynflag;
+			}
+
 			mb->ol_flags |= pkt_flags;
 
 			/* Put up to nb_pkts directly into buffers */
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index bf8aebb..37453c4 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -72,6 +72,9 @@
 #define IAVF_TX_OFFLOAD_NOTSUP_MASK \
 		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
 
+extern uint64_t iavf_timestamp_dynflag;
+extern int iavf_timestamp_dynfield_offset;
+
 /**
  * Rx Flex Descriptors
  * These descriptors are used instead of the legacy version descriptors
@@ -219,6 +222,7 @@ struct iavf_rx_queue {
 		/* flexible descriptor metadata extraction offload flag */
 	struct iavf_rx_queue_stats stats;
 	uint64_t offloads;
+	uint32_t hw_register_set;
 };
 
 struct iavf_tx_entry {
@@ -778,6 +782,24 @@ void iavf_fdir_rx_proc_enable(struct iavf_adapter *ad, bool on)
 	}
 }
 
+static inline
+uint64_t iavf_tstamp_convert_32b_64b(uint64_t time, uint32_t in_timestamp)
+{
+	const uint64_t mask = 0xFFFFFFFF;
+	uint32_t delta;
+	uint64_t ns;
+
+	delta = (in_timestamp - (uint32_t)(time & mask));
+	if (delta > (mask / 2)) {
+		delta = ((uint32_t)(time & mask) - in_timestamp);
+		ns = time - delta;
+	} else {
+		ns = time + delta;
+	}
+
+	return ns;
+}
+
 #ifdef RTE_LIBRTE_IAVF_DEBUG_DUMP_DESC
 #define IAVF_DUMP_RX_DESC(rxq, desc, rx_id) \
 	iavf_dump_rx_descriptor(rxq, desc, rx_id)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 1fd37b7..a59cb2c 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -231,6 +231,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
 	if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
 		return -1;
 
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		return -1;
+
 	if (rxq->offloads & IAVF_RX_VECTOR_OFFLOAD)
 		return IAVF_VECTOR_OFFLOAD_PATH;
 
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index f9452d1..b654433 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -502,7 +502,8 @@ iavf_get_vf_resource(struct iavf_adapter *adapter)
 		VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
 		VIRTCHNL_VF_LARGE_NUM_QPAIRS |
 		VIRTCHNL_VF_OFFLOAD_QOS |
-		VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO;
+		VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO |
+		VIRTCHNL_VF_CAP_PTP;
 
 	args.in_args = (uint8_t *)&caps;
 	args.in_args_size = sizeof(caps);
@@ -1047,16 +1048,21 @@ iavf_configure_queues(struct iavf_adapter *adapter,
 		vc_qp->rxq.crc_disable = rxq[i]->crc_len != 0 ? 1 : 0;
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 		if (vf->vf_res->vf_cap_flags &
-		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
-		    vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
-			vc_qp->rxq.rxdid = rxq[i]->rxdid;
-			PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
-				    vc_qp->rxq.rxdid, i);
-		} else {
-			PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
-				    "request default RXDID[%d] in Queue[%d]",
-				    rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
-			vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+			if (vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+				vc_qp->rxq.rxdid = rxq[i]->rxdid;
+				PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+					    vc_qp->rxq.rxdid, i);
+			} else {
+				PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+					    "request default RXDID[%d] in Queue[%d]",
+					    rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
+				vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+			}
+
+			if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_PTP &&
+			    vf->ptp_caps & VIRTCHNL_1588_PTP_CAP_RX_TSTAMP)
+				vc_qp->rxq.flags |= VIRTCHNL_PTP_RX_TSTAMP;
 		}
 #else
 		if (vf->vf_res->vf_cap_flags &
@@ -1859,3 +1865,58 @@ iavf_set_vf_quanta_size(struct iavf_adapter *adapter, u16 start_queue_id, u16 num_queues)
 
 	return 0;
 }
+
+int
+iavf_get_ptp_cap(struct iavf_adapter *adapter)
+{
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_ptp_caps ptp_caps;
+	struct iavf_cmd_info args;
+	int err;
+
+	ptp_caps.caps = VIRTCHNL_1588_PTP_CAP_RX_TSTAMP |
+			VIRTCHNL_1588_PTP_CAP_READ_PHC;
+
+	args.ops = VIRTCHNL_OP_1588_PTP_GET_CAPS;
+	args.in_args = (uint8_t *)&ptp_caps;
+	args.in_args_size = sizeof(ptp_caps);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = IAVF_AQ_BUF_SZ;
+
+	err = iavf_execute_vf_cmd(adapter, &args, 0);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_1588_PTP_GET_CAPS");
+		return err;
+	}
+
+	vf->ptp_caps = ((struct virtchnl_ptp_caps *)args.out_buffer)->caps;
+
+	return 0;
+}
+
+int
+iavf_get_phc_time(struct iavf_adapter *adapter)
+{
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
+	struct virtchnl_phc_time phc_time;
+	struct iavf_cmd_info args;
+	int err;
+
+	args.ops = VIRTCHNL_OP_1588_PTP_GET_TIME;
+	args.in_args = (uint8_t *)&phc_time;
+	args.in_args_size = sizeof(phc_time);
+	args.out_buffer = vf->aq_resp;
+	args.out_size = IAVF_AQ_BUF_SZ;
+
+	err = iavf_execute_vf_cmd(adapter, &args, 0);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of VIRTCHNL_OP_1588_PTP_GET_TIME");
+		return err;
+	}
+
+	adapter->phc_time = ((struct virtchnl_phc_time *)args.out_buffer)->time;
+
+	return 0;
+}
-- 
2.9.5