From: luca.boccassi@gmail.com
To: dev@dpdk.org
Date: Sat, 29 Feb 2020 16:37:06 +0000
Message-Id: <20200229163706.12741-1-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH] Fix various typos found by Lintian

From: Luca Boccassi

Cc: stable@dpdk.org

Signed-off-by: Luca Boccassi
---
Debian's linter is getting more and more annoy^^smart and now parses
binaries for typos too - CC stable to get it off my back in the next
release

 app/test-pmd/cmdline.c                              | 4 ++--
 app/test/test_mbuf.c                                | 2 +-
 doc/guides/nics/mlx5.rst                            | 2 +-
 doc/guides/vdpadevs/mlx5.rst                        | 2 +-
 drivers/common/octeontx2/hw/otx2_npc.h              | 4 ++--
 drivers/compress/octeontx/otx_zip_pmd.c             | 2 +-
 drivers/event/dpaa2/dpaa2_eventdev.c                | 2 +-
 drivers/net/bnxt/bnxt_ethdev.c                      | 2 +-
 drivers/net/bonding/rte_eth_bond_pmd.c              | 4 ++--
 drivers/net/cxgbe/cxgbe_flow.c                      | 2 +-
 drivers/net/dpaa2/dpaa2_flow.c                      | 4 ++--
 drivers/net/dpaa2/dpaa2_mux.c                       | 2 +-
 drivers/net/hinic/base/hinic_pmd_mbox.c             | 2 +-
 drivers/net/i40e/base/i40e_nvm.c                    | 2 +-
 drivers/net/i40e/i40e_ethdev.c                      | 2 +-
 drivers/net/mlx5/mlx5_rxtx.c                        | 2 +-
 drivers/net/pfe/pfe_ethdev.c                        | 2 +-
 drivers/net/qede/base/ecore_iov_api.h               | 2 +-
 drivers/net/qede/qede_ethdev.c                      | 2 +-
 drivers/net/sfc/sfc_ef10_tx.c                       | 2 +-
 examples/vmdq/main.c                                | 2 +-
 examples/vmdq_dcb/main.c                            | 2 +-
 lib/librte_eal/common/include/arch/x86/rte_atomic.h | 2 +-
 lib/librte_eal/linux/eal/eal.c                      | 2 +-
 lib/librte_ipsec/sa.h                               | 2 +-
 lib/librte_mbuf/rte_mbuf_core.h                     | 2 +-
 26 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a037a55c6a..7c50337953 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -95,7 +95,7 @@ static void cmd_help_brief_parsed(__attribute__((unused)) void *parsed_result,
 			"    help ports : Configuring ports.\n"
 			"    help registers : Reading and setting port registers.\n"
 			"    help filters : Filters configuration help.\n"
-			"    help traffic_management : Traffic Management commmands.\n"
+			"    help traffic_management : Traffic Management commands.\n"
 			"    help devices : Device related cmds.\n"
 			"    help all : All of the above sections.\n\n"
 		);
@@ -5131,7 +5131,7 @@ cmd_gso_size_parsed(void *parsed_result,
 
 	if (test_done == 0) {
 		printf("Before setting GSO segsz, please first"
-		       " stop fowarding\n");
+		       " stop forwarding\n");
 		return;
 	}
 
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 8200b4f71e..71bdab6917 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -1157,7 +1157,7 @@ test_refcnt_mbuf(void)
 		tref += refcnt_lcore[slave];
 
 	if (tref != refcnt_lcore[master])
-		rte_panic("refernced mbufs: %u, freed mbufs: %u\n",
+		rte_panic("referenced mbufs: %u, freed mbufs: %u\n",
 			tref, refcnt_lcore[master]);
 
 	rte_mempool_dump(stdout, refcnt_pool);
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index afd11cd830..1a90e42a9c 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -35,7 +35,7 @@ operations such as querying/updating the MTU and flow control parameters.
 
 For security reasons and robustness, this driver only deals with virtual
 memory addresses. The way resources allocations are handled by the kernel,
-combined with hardware specifications that allow to handle virtual memory
+combined with hardware specifications that allow one to handle virtual memory
 addresses directly, ensure that DPDK applications cannot access random
 physical memory (or memory that does not belong to the current process).
 
diff --git a/doc/guides/vdpadevs/mlx5.rst b/doc/guides/vdpadevs/mlx5.rst
index dd377afda5..41c9731e90 100644
--- a/doc/guides/vdpadevs/mlx5.rst
+++ b/doc/guides/vdpadevs/mlx5.rst
@@ -25,7 +25,7 @@ Design
 
 For security reasons and robustness, this driver only deals with virtual
 memory addresses. The way resources allocations are handled by the kernel,
-combined with hardware specifications that allow to handle virtual memory
+combined with hardware specifications that allow one to handle virtual memory
 addresses directly, ensure that DPDK applications cannot access random
 physical memory (or memory that does not belong to the current process).
 
diff --git a/drivers/common/octeontx2/hw/otx2_npc.h b/drivers/common/octeontx2/hw/otx2_npc.h
index 3dfc137a30..600084ff31 100644
--- a/drivers/common/octeontx2/hw/otx2_npc.h
+++ b/drivers/common/octeontx2/hw/otx2_npc.h
@@ -213,7 +213,7 @@ enum npc_kpu_lc_ltype {
 	NPC_LT_LC_FCOE,
 };
 
-/* Don't modify Ltypes upto SCTP, otherwise it will
+/* Don't modify Ltypes up to SCTP, otherwise it will
  * effect flow tag calculation and thus RSS.
  */
 enum npc_kpu_ld_ltype {
@@ -260,7 +260,7 @@ enum npc_kpu_lg_ltype {
 	NPC_LT_LG_TU_ETHER_IN_NSH,
 };
 
-/* Don't modify Ltypes upto SCTP, otherwise it will
+/* Don't modify Ltypes up to SCTP, otherwise it will
  * effect flow tag calculation and thus RSS.
  */
 enum npc_kpu_lh_ltype {
diff --git a/drivers/compress/octeontx/otx_zip_pmd.c b/drivers/compress/octeontx/otx_zip_pmd.c
index 9e00c86630..bff8ef035e 100644
--- a/drivers/compress/octeontx/otx_zip_pmd.c
+++ b/drivers/compress/octeontx/otx_zip_pmd.c
@@ -406,7 +406,7 @@ zip_pmd_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 
 	qp->name = name;
 
-	/* Create completion queue upto max_inflight_ops */
+	/* Create completion queue up to max_inflight_ops */
 	qp->processed_pkts = zip_pmd_qp_create_processed_pkts_ring(qp,
 			max_inflight_ops, socket_id);
 	if (qp->processed_pkts == NULL)
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 1833d659d8..208fce444d 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -391,7 +391,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
 	dev_info->max_event_priority_levels =
 		DPAA2_EVENT_MAX_EVENT_PRIORITY_LEVELS;
 	dev_info->max_event_ports = rte_fslmc_get_device_count(DPAA2_IO);
-	/* we only support dpio upto number of cores*/
+	/* we only support dpio up to number of cores*/
 	if (dev_info->max_event_ports > rte_lcore_count())
 		dev_info->max_event_ports = rte_lcore_count();
 	dev_info->max_event_port_dequeue_depth =
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 18aa313fd8..1b2edd04c0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1519,7 +1519,7 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 	}
 	if (hash_types) {
 		PMD_DRV_LOG(ERR,
-			"Unknwon RSS config from firmware (%08x), RSS disabled",
+			"Unknown RSS config from firmware (%08x), RSS disabled",
 			vnic->hash_type);
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 707a0f3cdd..750e17dced 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -198,7 +198,7 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
 	if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
 			slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
 		RTE_BOND_LOG(ERR,
-			"%s: Slave %d capabilities doesn't allow to allocate additional queues",
+			"%s: Slave %d capabilities doesn't allow one to allocate additional queues",
 			__func__, slave_port);
 		return -1;
 	}
@@ -3230,7 +3230,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->candidate_max_rx_pktlen = 0;
 	internals->max_rx_pktlen = 0;
 
-	/* Initially allow to choose any offload type */
+	/* Initially allow one to choose any offload type */
 	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
 
 	memset(&internals->default_rxconf, 0,
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 9070f4960d..2fb77b4abb 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -230,7 +230,7 @@ ch_rte_parsetype_port(const void *dmask, const struct rte_flow_item *item,
 	if (val->index > 0x7)
 		return rte_flow_error_set(e, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "port index upto 0x7 is supported");
+					  "port index up to 0x7 is supported");
 
 	CXGBE_FILL_FS(val->index, mask->index, iport);
 
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 2212650320..8aa65db305 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1850,13 +1850,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 	key_iova = (size_t)rte_malloc(NULL, 256, 64);
 	if (!key_iova) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configration\n");
+			"Memory allocation failure for rule configuration\n");
 		goto mem_failure;
 	}
 	mask_iova = (size_t)rte_malloc(NULL, 256, 64);
 	if (!mask_iova) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configration\n");
+			"Memory allocation failure for rule configuration\n");
 		goto mem_failure;
 	}
 
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 1910cc4184..af90adb828 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -84,7 +84,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			(2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configration\n");
+			"Memory allocation failure for rule configuration\n");
 		goto creation_error;
 	}
 	key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
diff --git a/drivers/net/hinic/base/hinic_pmd_mbox.c b/drivers/net/hinic/base/hinic_pmd_mbox.c
index 3d3c1bc4ab..e82a07dbce 100644
--- a/drivers/net/hinic/base/hinic_pmd_mbox.c
+++ b/drivers/net/hinic/base/hinic_pmd_mbox.c
@@ -872,7 +872,7 @@ static int hinic_func_to_func_init(struct hinic_hwdev *hwdev)
 
 	err = alloc_mbox_info(func_to_func->mbox_resp);
 	if (err) {
-		PMD_DRV_LOG(ERR, "Allocating memory for mailbox responsing failed");
+		PMD_DRV_LOG(ERR, "Allocating memory for mailbox responding failed");
 		goto alloc_mbox_for_resp_err;
 	}
 
diff --git a/drivers/net/i40e/base/i40e_nvm.c b/drivers/net/i40e/base/i40e_nvm.c
index fc24cc2ce0..93c7ea6bf1 100644
--- a/drivers/net/i40e/base/i40e_nvm.c
+++ b/drivers/net/i40e/base/i40e_nvm.c
@@ -465,7 +465,7 @@ STATIC enum i40e_status_code i40e_read_nvm_buffer_aq(struct i40e_hw *hw, u16 off
 
 	do {
 		/* Calculate number of bytes we should read in this step.
-		 * FVL AQ do not allow to read more than one page at a time or
+		 * FVL AQ do not allow one to read more than one page at a time or
 		 * to cross page boundaries.
 		 */
 		if (offset % I40E_SR_SECTOR_SIZE_IN_WORDS)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9fbda1c34c..5a10cd961f 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2051,7 +2051,7 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
 	for (i = 0; i < vsi->nb_used_qps; i++) {
 		if (nb_msix <= 1) {
 			if (!rte_intr_allow_others(intr_handle))
-				/* allow to share MISC_VEC_ID */
+				/* allow one to share MISC_VEC_ID */
 				msix_vect = I40E_MISC_VEC_ID;
 
 			/* no enough msix_vect, map all to one */
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 5ac63da803..ad34d31f37 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -4829,7 +4829,7 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *restrict txq,
 	/*
 	 * Calculate the number of available resources - elts and WQEs.
 	 * There are two possible different scenarios:
-	 * - no data inlining into WQEs, one WQEBB may contains upto
+	 * - no data inlining into WQEs, one WQEBB may contains up to
 	 *   four packets, in this case elts become scarce resource
 	 * - data inlining into WQEs, one packet may require multiple
 	 *   WQEBBs, the WQEs become the limiting factor.
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 9403478198..7facc47b62 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -13,7 +13,7 @@
 #include "pfe_logs.h"
 #include "pfe_mod.h"
 
-#define PFE_MAX_MACS 1 /*we can support upto 4 MACs per IF*/
+#define PFE_MAX_MACS 1 /*we can support up to 4 MACs per IF*/
 #define PFE_VDEV_GEM_ID_ARG	"intf"
 
 struct pfe_vdev_init_params {
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 5450018121..b8cd98ee8c 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -71,7 +71,7 @@ struct ecore_vf_acquire_sw_info {
 
 	/* We have several close releases that all use ~same FW with different
 	 * versions [making it incompatible as the versioning scheme is still
-	 * tied directly to FW version], allow to override the checking. Only
+	 * tied directly to FW version], allow one to override the checking. Only
 	 * those versions would actually support this feature [so it would not
 	 * break forward compatibility with newer HV drivers that are no longer
 	 * suited].
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 74dfe895ad..1542073a27 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1100,7 +1100,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	qede_reset_queue_stats(qdev, true);
 
 	/* Newer SR-IOV PF driver expects RX/TX queues to be started before
-	 * enabling RSS. Hence RSS configuration is deferred upto this point.
+	 * enabling RSS. Hence RSS configuration is deferred up to this point.
 	 * Also, we would like to retain similar behavior in PF case, so we
 	 * don't do PF/VF specific check here.
 	 */
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 43e3447805..014824499b 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -406,7 +406,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
 		if (txq->completed != pkt_start)
 			return ENOSPC;
 		/*
-		 * Do not allow to send packet if the maximum DMA
+		 * Do not allow one to send packet if the maximum DMA
 		 * descriptor space is not sufficient to hold TSO
 		 * descriptors, header descriptor and at least 1
 		 * segment descriptor.
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 0111109203..ffc83d2bd1 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -178,7 +178,7 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 	max_nb_pools = (uint32_t)dev_info.max_vmdq_pools;
 
 	/*
-	 * We allow to process part of VMDQ pools specified by num_pools in
+	 * We allow one to process part of VMDQ pools specified by num_pools in
 	 * command line.
 	 */
 	if (num_pools > max_nb_pools) {
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index f417b2fd9b..a70bca72f9 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -212,7 +212,7 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 	max_nb_pools = (uint32_t)dev_info.max_vmdq_pools;
 
 	/*
-	 * We allow to process part of VMDQ pools specified by num_pools in
+	 * We allow one to process part of VMDQ pools specified by num_pools in
 	 * command line.
 	 */
 	if (num_pools > max_nb_pools) {
diff --git a/lib/librte_eal/common/include/arch/x86/rte_atomic.h b/lib/librte_eal/common/include/arch/x86/rte_atomic.h
index 148398f50a..b9dcd30aba 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_atomic.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_atomic.h
@@ -55,7 +55,7 @@ extern "C" {
  *
  * As pointed by Java guys, that makes possible to use lock-prefixed
  * instructions to get the same effect as mfence and on most modern HW
- * that gives a better perfomance then using mfence:
+ * that gives a better performance then using mfence:
  * https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
 * Basic idea is to use lock prefixed add with some dummy memory location
 * as the destination. From their experiments 128B(2 cache lines) below
diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c
index 9530ee55f8..e6d4cc7178 100644
--- a/lib/librte_eal/linux/eal/eal.c
+++ b/lib/librte_eal/linux/eal/eal.c
@@ -1077,7 +1077,7 @@ rte_eal_init(int argc, char **argv)
 #if defined(RTE_LIBRTE_KNI) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
 		} else if (rte_eal_check_module("rte_kni") == 1) {
 			iova_mode = RTE_IOVA_PA;
-			RTE_LOG(DEBUG, EAL, "KNI is loaded, selecting IOVA as PA mode for better KNI perfomance.\n");
+			RTE_LOG(DEBUG, EAL, "KNI is loaded, selecting IOVA as PA mode for better KNI performance.\n");
 #endif
 		} else if (is_iommu_enabled()) {
 			/* we have an IOMMU, pick IOVA as VA mode */
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index d22451b38a..29cfe7279a 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -115,7 +115,7 @@ struct rte_ipsec_sa {
 	 * sqn and replay window
 	 * In case of SA handled by multiple threads *sqn* cacheline
 	 * could be shared by multiple cores.
-	 * To minimise perfomance impact, we try to locate in a separate
+	 * To minimise performance impact, we try to locate in a separate
 	 * place from other frequently accesed data.
 	 */
 	union {
diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h
index b9a59c879c..2dddd8b833 100644
--- a/lib/librte_mbuf/rte_mbuf_core.h
+++ b/lib/librte_mbuf/rte_mbuf_core.h
@@ -591,7 +591,7 @@ struct rte_mbuf {
 
 	/** Valid if PKT_RX_TIMESTAMP is set. The unit and time reference
 	 * are not normalized but are always the same for a given port.
-	 * Some devices allow to query rte_eth_read_clock that will return the
+	 * Some devices allow one to query rte_eth_read_clock that will return the
 	 * current device timestamp.
 	 */
 	uint64_t timestamp;
-- 
2.20.1