From: Josh Soref <jsoref@gmail.com>
To: dev@dpdk.org
Cc: Josh Soref <jsoref@gmail.com>
Subject: [PATCH 1/1] fix spelling in code
Date: Wed, 12 Jan 2022 02:28:08 -0500
Message-ID: <20220112072808.59713-2-jsoref@gmail.com>
In-Reply-To: <20220112072808.59713-1-jsoref@gmail.com>
Fix additional spelling errors in comments, log messages and
identifiers, found with an automated spell-checking tool.
The tool comes from https://github.com/jsoref
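For illustration only (the exact invocation is not part of this patch,
and the real tool handles many patterns at once), each fix amounts to a
mechanical substitution of a known misspelling, equivalent to:

```shell
# Hypothetical sketch of one substitution the tool performs;
# e.g. correcting "parnter" -> "partner" as in test_link_bonding_mode4.c.
printf 'lacp_parnter_state\n' | sed 's/parnter/partner/g'
```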
Signed-off-by: Josh Soref <jsoref@gmail.com>
---
app/test-crypto-perf/cperf_test_vectors.h | 4 +-
app/test/test_acl.c | 6 +-
app/test/test_link_bonding.c | 4 +-
app/test/test_link_bonding_mode4.c | 30 ++++----
app/test/test_table.h | 2 +-
doc/guides/nics/octeontx2.rst | 2 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 4 +-
drivers/bus/fslmc/fslmc_vfio.h | 2 +-
drivers/common/cnxk/roc_cpt.c | 10 +--
drivers/common/cnxk/roc_cpt_priv.h | 2 +-
drivers/common/cnxk/roc_mbox.h | 4 +-
drivers/common/cnxk/roc_tim.c | 2 +-
drivers/common/dpaax/caamflib/desc/ipsec.h | 4 +-
.../common/dpaax/caamflib/rta/operation_cmd.h | 6 +-
drivers/common/mlx5/mlx5_devx_cmds.c | 24 +++---
drivers/common/mlx5/mlx5_devx_cmds.h | 10 +--
drivers/common/mlx5/mlx5_prm.h | 4 +-
drivers/common/octeontx2/otx2_mbox.h | 4 +-
.../sfc_efx/base/ef10_signed_image_layout.h | 2 +-
drivers/common/sfc_efx/base/efx_port.c | 2 +-
drivers/common/sfc_efx/base/efx_regs.h | 2 +-
drivers/common/sfc_efx/base/efx_types.h | 2 +-
drivers/compress/octeontx/otx_zip.h | 2 +-
drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 4 +-
drivers/compress/qat/qat_comp.c | 12 +--
drivers/compress/qat/qat_comp_pmd.c | 6 +-
drivers/compress/qat/qat_comp_pmd.h | 2 +-
drivers/crypto/caam_jr/caam_jr.c | 4 +-
drivers/crypto/ccp/ccp_crypto.h | 2 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 4 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 30 ++++----
drivers/crypto/dpaa_sec/dpaa_sec.c | 2 +-
drivers/crypto/qat/qat_sym_session.h | 4 -
drivers/crypto/virtio/virtio_cryptodev.c | 6 +-
drivers/crypto/virtio/virtqueue.c | 2 +-
drivers/crypto/virtio/virtqueue.h | 2 +-
drivers/dma/ioat/ioat_dmadev.c | 2 +-
drivers/dma/ioat/ioat_hw_defs.h | 2 +-
drivers/event/octeontx2/otx2_tim_evdev.c | 4 +-
drivers/net/ark/ark_ethdev.c | 4 +-
drivers/net/ark/ark_rqp.c | 4 +-
drivers/net/ark/ark_rqp.h | 4 +-
drivers/net/bnx2x/bnx2x.c | 18 ++---
drivers/net/bnx2x/bnx2x.h | 6 +-
drivers/net/bnx2x/bnx2x_stats.c | 8 +-
drivers/net/bnx2x/bnx2x_stats.h | 4 +-
drivers/net/bnx2x/ecore_hsi.h | 38 +++++-----
drivers/net/bnx2x/ecore_init.h | 2 +-
drivers/net/bnx2x/ecore_reg.h | 12 +--
drivers/net/bnx2x/ecore_sp.c | 36 ++++-----
drivers/net/bnx2x/ecore_sp.h | 2 +-
drivers/net/bnx2x/elink.c | 32 ++++----
drivers/net/bnx2x/elink.h | 2 +-
drivers/net/bonding/eth_bond_8023ad_private.h | 2 +-
drivers/net/cxgbe/base/adapter.h | 2 +-
drivers/net/cxgbe/base/t4_chip_type.h | 2 +-
drivers/net/cxgbe/base/t4_hw.c | 8 +-
drivers/net/dpaa/fmlib/fm_port_ext.h | 4 +-
drivers/net/dpaa2/base/dpaa2_hw_dpni_annot.h | 2 +-
drivers/net/dpaa2/dpaa2_flow.c | 4 +-
drivers/net/dpaa2/dpaa2_mux.c | 4 +-
drivers/net/dpaa2/mc/dpdmux.c | 8 +-
drivers/net/dpaa2/mc/dpni.c | 2 +-
drivers/net/dpaa2/mc/fsl_dpdmux.h | 4 +-
drivers/net/e1000/base/e1000_82575.c | 2 +-
drivers/net/e1000/base/e1000_phy.c | 2 +-
drivers/net/enic/base/vnic_devcmd.h | 2 +-
drivers/net/enic/enic_flow.c | 48 ++++++------
drivers/net/fm10k/base/fm10k_mbx.c | 2 +-
drivers/net/fm10k/base/fm10k_pf.c | 2 +-
drivers/net/fm10k/base/fm10k_vf.c | 4 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 4 +-
drivers/net/hinic/hinic_pmd_flow.c | 4 +-
drivers/net/hinic/hinic_pmd_rx.c | 30 ++++----
drivers/net/hns3/hns3_dcb.c | 76 +++++++++----------
drivers/net/hns3/hns3_dcb.h | 24 +++---
drivers/net/hns3/hns3_ethdev.c | 18 ++---
drivers/net/hns3/hns3_fdir.c | 4 +-
drivers/net/hns3/hns3_tm.c | 4 +-
drivers/net/i40e/base/i40e_adminq_cmd.h | 2 +-
drivers/net/i40e/base/i40e_common.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 8 +-
drivers/net/iavf/iavf_hash.c | 6 +-
drivers/net/iavf/iavf_ipsec_crypto.c | 4 +-
drivers/net/iavf/iavf_ipsec_crypto.h | 4 +-
drivers/net/ice/base/ice_adminq_cmd.h | 2 +-
drivers/net/ice/ice_switch_filter.c | 26 +++----
drivers/net/igc/base/igc_defines.h | 2 +-
drivers/net/ipn3ke/ipn3ke_tm.c | 4 +-
drivers/net/ixgbe/base/ixgbe_82598.c | 2 +-
drivers/net/ixgbe/ixgbe_bypass.c | 18 ++---
drivers/net/ixgbe/ixgbe_bypass_defines.h | 2 +-
drivers/net/mlx5/mlx5.c | 4 +-
drivers/net/mlx5/mlx5.h | 4 +-
drivers/net/mlx5/mlx5_flow.c | 14 ++--
drivers/net/mlx5/mlx5_flow.h | 2 +-
drivers/net/mlx5/mlx5_flow_aso.c | 8 +-
drivers/net/mlx5/mlx5_flow_dv.c | 44 +++++------
drivers/net/mlx5/mlx5_flow_meter.c | 6 +-
drivers/net/mlx5/mlx5_rx.c | 2 +-
drivers/net/mlx5/mlx5_tx.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 8 +-
drivers/net/pfe/pfe_ethdev.c | 4 +-
drivers/net/qede/base/ecore_chain.h | 4 +-
drivers/net/qede/base/ecore_cxt.c | 8 +-
drivers/net/qede/base/ecore_dev.c | 2 +-
drivers/net/qede/base/ecore_dev_api.h | 2 +-
drivers/net/qede/base/ecore_hsi_eth.h | 4 +-
drivers/net/qede/base/ecore_hw_defs.h | 4 +-
drivers/net/qede/base/ecore_init_fw_funcs.c | 6 +-
drivers/net/qede/base/ecore_init_fw_funcs.h | 4 +-
drivers/net/qede/base/ecore_int.c | 2 +-
drivers/net/qede/base/ecore_iov_api.h | 2 +-
drivers/net/qede/base/ecore_l2.c | 2 +-
drivers/net/qede/base/ecore_mcp.h | 2 +-
drivers/net/qede/base/ecore_mcp_api.h | 2 +-
drivers/net/qede/base/ecore_spq.c | 2 +-
drivers/net/qede/base/ecore_spq.h | 2 +-
drivers/net/qede/base/ecore_sriov.c | 4 +-
drivers/net/qede/base/ecore_sriov.h | 4 +-
drivers/net/qede/base/ecore_vf.c | 2 +-
drivers/net/qede/base/ecore_vfpf_if.h | 2 +-
drivers/net/qede/base/mcp_public.h | 4 +-
drivers/net/qede/qede_debug.c | 10 +--
drivers/net/sfc/sfc_mae.c | 50 ++++++------
drivers/net/sfc/sfc_tso.h | 10 +--
drivers/net/txgbe/txgbe_ethdev.c | 8 +-
drivers/net/virtio/virtio_ethdev.c | 2 +-
drivers/net/virtio/virtqueue.c | 2 +-
drivers/net/virtio/virtqueue.h | 2 +-
drivers/net/vmxnet3/base/upt1_defs.h | 2 +-
drivers/raw/ifpga/base/ifpga_defines.h | 6 +-
drivers/raw/ifpga/base/ifpga_feature_dev.c | 2 +-
drivers/raw/ifpga/base/ifpga_fme_pr.c | 2 +-
drivers/raw/ifpga/base/opae_hw_api.h | 4 +-
drivers/raw/ioat/ioat_rawdev.c | 2 +-
drivers/raw/ioat/ioat_spec.h | 2 +-
drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +-
drivers/vdpa/mlx5/mlx5_vdpa.h | 2 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 4 +-
examples/ipsec-secgw/ipsec_process.c | 2 +-
examples/vhost/virtio_net.c | 50 ++++++------
lib/bpf/bpf_validate.c | 10 +--
lib/cryptodev/rte_cryptodev.h | 2 +-
lib/eal/common/eal_common_trace_ctf.c | 8 +-
lib/fib/trie_avx512.c | 2 +-
lib/graph/graph_populate.c | 4 +-
lib/graph/graph_stats.c | 4 +-
lib/hash/rte_crc_arm64.h | 2 +-
lib/hash/rte_thash.c | 2 +-
lib/ip_frag/ip_frag_internal.c | 2 +-
lib/ipsec/ipsec_sad.c | 10 +--
lib/vhost/vhost_user.h | 2 +-
lib/vhost/virtio_net.c | 10 +--
154 files changed, 563 insertions(+), 567 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_vectors.h b/app/test-crypto-perf/cperf_test_vectors.h
index 70f2839c..4390c570 100644
--- a/app/test-crypto-perf/cperf_test_vectors.h
+++ b/app/test-crypto-perf/cperf_test_vectors.h
@@ -2,8 +2,8 @@
* Copyright(c) 2016-2017 Intel Corporation
*/
-#ifndef _CPERF_TEST_VECTRORS_
-#define _CPERF_TEST_VECTRORS_
+#ifndef _CPERF_TEST_VECTORS_
+#define _CPERF_TEST_VECTORS_
#include "cperf_options.h"
diff --git a/app/test/test_acl.c b/app/test/test_acl.c
index 5b323479..1ac3512e 100644
--- a/app/test/test_acl.c
+++ b/app/test/test_acl.c
@@ -368,7 +368,7 @@ test_classify_run(struct rte_acl_ctx *acx, struct ipv4_7tuple test_data[],
}
static int
-test_classify_buid(struct rte_acl_ctx *acx,
+test_classify_build(struct rte_acl_ctx *acx,
const struct rte_acl_ipv4vlan_rule *rules, uint32_t num)
{
int ret;
@@ -417,7 +417,7 @@ test_classify(void)
else
rte_acl_reset_rules(acx);
- ret = test_classify_buid(acx, acl_test_rules,
+ ret = test_classify_build(acx, acl_test_rules,
RTE_DIM(acl_test_rules));
if (ret != 0) {
printf("Line %i, iter: %d: "
@@ -552,7 +552,7 @@ test_build_ports_range(void)
for (i = 0; i != RTE_DIM(test_rules); i++) {
rte_acl_reset(acx);
- ret = test_classify_buid(acx, test_rules, i + 1);
+ ret = test_classify_build(acx, test_rules, i + 1);
if (ret != 0) {
printf("Line %i, iter: %d: "
"Adding rules to ACL context failed!\n",
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index dc6fc46b..80ea1cdc 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -2026,7 +2026,7 @@ uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_slave_link_status_change(void)
{
struct rte_ether_addr *mac_addr =
(struct rte_ether_addr *)polling_slave_mac;
@@ -5118,7 +5118,7 @@ static struct unit_test_suite link_bonding_test_suite = {
TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
TEST_CASE(test_roundrobin_verify_mac_assignment),
TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
- TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+ TEST_CASE(test_roundrobin_verify_polling_slave_link_status_change),
TEST_CASE(test_activebackup_tx_burst),
TEST_CASE(test_activebackup_rx_burst),
TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 351129de..aea76e70 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -58,11 +58,11 @@ static const struct rte_ether_addr slave_mac_default = {
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
};
-static const struct rte_ether_addr parnter_mac_default = {
+static const struct rte_ether_addr partner_mac_default = {
{ 0x22, 0xBB, 0xFF, 0xBB, 0x00, 0x00 }
};
-static const struct rte_ether_addr parnter_system = {
+static const struct rte_ether_addr partner_system = {
{ 0x33, 0xFF, 0xBB, 0xFF, 0x00, 0x00 }
};
@@ -76,7 +76,7 @@ struct slave_conf {
uint16_t port_id;
uint8_t bonded : 1;
- uint8_t lacp_parnter_state;
+ uint8_t lacp_partner_state;
};
struct ether_vlan_hdr {
@@ -258,7 +258,7 @@ add_slave(struct slave_conf *slave, uint8_t start)
TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
"Slave MAC address is not as expected");
- RTE_VERIFY(slave->lacp_parnter_state == 0);
+ RTE_VERIFY(slave->lacp_partner_state == 0);
return 0;
}
@@ -288,7 +288,7 @@ remove_slave(struct slave_conf *slave)
test_params.bonded_port_id);
slave->bonded = 0;
- slave->lacp_parnter_state = 0;
+ slave->lacp_partner_state = 0;
return 0;
}
@@ -501,20 +501,20 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
slow_hdr = rte_pktmbuf_mtod(pkt, struct slow_protocol_frame *);
/* Change source address to partner address */
- rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
+ rte_ether_addr_copy(&partner_mac_default, &slow_hdr->eth_hdr.src_addr);
slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
slave->port_id;
lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
/* Save last received state */
- slave->lacp_parnter_state = lacp->actor.state;
+ slave->lacp_partner_state = lacp->actor.state;
/* Change it into LACP replay by matching parameters. */
memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
sizeof(struct port_params));
lacp->partner.state = lacp->actor.state;
- rte_ether_addr_copy(&parnter_system, &lacp->actor.port_params.system);
+ rte_ether_addr_copy(&partner_system, &lacp->actor.port_params.system);
lacp->actor.state = STATE_LACP_ACTIVE |
STATE_SYNCHRONIZATION |
STATE_AGGREGATION |
@@ -580,7 +580,7 @@ bond_handshake_done(struct slave_conf *slave)
const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
- return slave->lacp_parnter_state == expected_state;
+ return slave->lacp_partner_state == expected_state;
}
static unsigned
@@ -1134,7 +1134,7 @@ test_mode4_tx_burst(void)
if (slave_down_id == slave->port_id) {
TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
- "slave %u enexpectedly transmitted %u packets",
+ "slave %u unexpectedly transmitted %u packets",
normal_cnt + slow_cnt, slave->port_id);
} else {
TEST_ASSERT_EQUAL(slow_cnt, 0,
@@ -1165,7 +1165,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
&marker_hdr->eth_hdr.dst_addr);
/* Init source address */
- rte_ether_addr_copy(&parnter_mac_default,
+ rte_ether_addr_copy(&partner_mac_default,
&marker_hdr->eth_hdr.src_addr);
marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
slave->port_id;
@@ -1353,7 +1353,7 @@ test_mode4_expired(void)
/* After test only expected slave should be in EXPIRED state */
FOR_EACH_SLAVE(i, slave) {
if (slave == exp_slave)
- TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
+ TEST_ASSERT(slave->lacp_partner_state & STATE_EXPIRED,
"Slave %u should be in expired.", slave->port_id);
else
TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
@@ -1392,7 +1392,7 @@ test_mode4_ext_ctrl(void)
},
};
- rte_ether_addr_copy(&parnter_system, &src_mac);
+ rte_ether_addr_copy(&partner_system, &src_mac);
rte_ether_addr_copy(&slow_protocol_mac_addr, &dst_mac);
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
@@ -1446,7 +1446,7 @@ test_mode4_ext_lacp(void)
},
};
- rte_ether_addr_copy(&parnter_system, &src_mac);
+ rte_ether_addr_copy(&partner_system, &src_mac);
rte_ether_addr_copy(&slow_protocol_mac_addr, &dst_mac);
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
@@ -1535,7 +1535,7 @@ check_environment(void)
if (port->bonded != 0)
env_state |= 0x04;
- if (port->lacp_parnter_state != 0)
+ if (port->lacp_partner_state != 0)
env_state |= 0x08;
if (env_state != 0)
diff --git a/app/test/test_table.h b/app/test/test_table.h
index 209bdbff..003088f2 100644
--- a/app/test/test_table.h
+++ b/app/test/test_table.h
@@ -25,7 +25,7 @@
#define MAX_BULK 32
#define N 65536
#define TIME_S 5
-#define TEST_RING_FULL_EMTPY_ITER 8
+#define TEST_RING_FULL_EMPTY_ITER 8
#define N_PORTS 2
#define N_PKTS 2
#define N_PKTS_EXT 6
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 4ce067f2..b7569e08 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -244,7 +244,7 @@ configure the following features:
Both DWRR and Static Priority(SP) hierarchical scheduling is supported.
-Every parent can have atmost 10 SP Children and unlimited DWRR children.
+Every parent can have at most 10 SP Children and unlimited DWRR children.
Both PF & VF supports traffic management API with PF supporting 6 levels
and VF supporting 5 levels of topology.
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 6a405c98..200d441d 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -18,7 +18,7 @@
#include <rte_dpaa_logs.h>
#include <netcfg.h>
-/* This data structure contaings all configurations information
+/* This data structure contains all configurations information
* related to usages of DPA devices.
*/
static struct netcfg_info *netcfg;
@@ -112,7 +112,7 @@ netcfg_acquire(void)
netcfg = rte_calloc(NULL, 1, size, 0);
if (unlikely(netcfg == NULL)) {
- DPAA_BUS_LOG(ERR, "Unable to allocat mem for netcfg");
+ DPAA_BUS_LOG(ERR, "Unable to allocate mem for netcfg");
goto error;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h
index 133606a9..2394445b 100644
--- a/drivers/bus/fslmc/fslmc_vfio.h
+++ b/drivers/bus/fslmc/fslmc_vfio.h
@@ -56,7 +56,7 @@ int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
int fslmc_vfio_setup_group(void);
int fslmc_vfio_process_group(void);
char *fslmc_get_container(void);
-int fslmc_get_container_group(int *gropuid);
+int fslmc_get_container_group(int *groupid);
int rte_fslmc_vfio_dmamap(void);
int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size);
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 8f8e6d38..aac0fd6a 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -375,7 +375,7 @@ cpt_available_lfs_get(struct dev *dev, uint16_t *nb_lf)
}
int
-cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr,
+cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmask, uint8_t blkaddr,
bool inl_dev_sso)
{
struct cpt_lf_alloc_req_msg *req;
@@ -390,7 +390,7 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr,
req->sso_pf_func = nix_inl_dev_pffunc_get();
else
req->sso_pf_func = idev_sso_pffunc_get();
- req->eng_grpmsk = eng_grpmsk;
+ req->eng_grpmask = eng_grpmask;
req->blkaddr = blkaddr;
return mbox_process(mbox);
@@ -481,7 +481,7 @@ roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
struct cpt *cpt = roc_cpt_to_cpt_priv(roc_cpt);
uint8_t blkaddr[ROC_CPT_MAX_BLKS];
struct msix_offset_rsp *rsp;
- uint8_t eng_grpmsk;
+ uint8_t eng_grpmask;
int blknum = 0;
int rc, i;
@@ -508,11 +508,11 @@ roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf)
for (i = 0; i < nb_lf; i++)
cpt->lf_blkaddr[i] = blkaddr[blknum];
- eng_grpmsk = (1 << roc_cpt->eng_grp[CPT_ENG_TYPE_AE]) |
+ eng_grpmask = (1 << roc_cpt->eng_grp[CPT_ENG_TYPE_AE]) |
(1 << roc_cpt->eng_grp[CPT_ENG_TYPE_SE]) |
(1 << roc_cpt->eng_grp[CPT_ENG_TYPE_IE]);
- rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false);
+ rc = cpt_lfs_alloc(&cpt->dev, eng_grpmask, blkaddr[blknum], false);
if (rc)
goto lfs_detach;
diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h
index 61dec9a1..4bc888b2 100644
--- a/drivers/common/cnxk/roc_cpt_priv.h
+++ b/drivers/common/cnxk/roc_cpt_priv.h
@@ -21,7 +21,7 @@ roc_cpt_to_cpt_priv(struct roc_cpt *roc_cpt)
int cpt_lfs_attach(struct dev *dev, uint8_t blkaddr, bool modify,
uint16_t nb_lf);
int cpt_lfs_detach(struct dev *dev);
-int cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blk,
+int cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmask, uint8_t blk,
bool inl_dev_sso);
int cpt_lfs_free(struct dev *dev);
int cpt_lf_init(struct roc_cpt_lf *lf);
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index b63fe108..ae576d1b 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -1328,7 +1328,7 @@ struct cpt_lf_alloc_req_msg {
struct mbox_msghdr hdr;
uint16_t __io nix_pf_func;
uint16_t __io sso_pf_func;
- uint16_t __io eng_grpmsk;
+ uint16_t __io eng_grpmask;
uint8_t __io blkaddr;
};
@@ -1739,7 +1739,7 @@ enum tim_af_status {
TIM_AF_INVALID_BSIZE = -813,
TIM_AF_INVALID_ENABLE_PERIODIC = -814,
TIM_AF_INVALID_ENABLE_DONTFREE = -815,
- TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816,
+ TIM_AF_ENA_DONTFREE_NSET_PERIODIC = -816,
TIM_AF_RING_ALREADY_DISABLED = -817,
};
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 534b697b..ca58e19a 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -73,7 +73,7 @@ tim_err_desc(int rc)
case TIM_AF_INVALID_ENABLE_DONTFREE:
plt_err("Invalid Don't free value.");
break;
- case TIM_AF_ENA_DONTFRE_NSET_PERIODIC:
+ case TIM_AF_ENA_DONTFREE_NSET_PERIODIC:
plt_err("Don't free bit not set when periodic is enabled.");
break;
case TIM_AF_RING_ALREADY_DISABLED:
diff --git a/drivers/common/dpaax/caamflib/desc/ipsec.h b/drivers/common/dpaax/caamflib/desc/ipsec.h
index 668d2164..499f4f93 100644
--- a/drivers/common/dpaax/caamflib/desc/ipsec.h
+++ b/drivers/common/dpaax/caamflib/desc/ipsec.h
@@ -1437,7 +1437,7 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
CAAM_CMD_SZ)
/**
- * cnstr_shdsc_authenc - authenc-like descriptor
+ * cnstr_shdsc_authentic - authentic-like descriptor
* @descbuf: pointer to buffer used for descriptor construction
* @ps: if 36/40bit addressing is desired, this parameter must be true
* @swap: if true, perform descriptor byte swapping on a 4-byte boundary
@@ -1502,7 +1502,7 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps,
* Return: size of descriptor written in words or negative number on error
*/
static inline int
-cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap,
+cnstr_shdsc_authentic(uint32_t *descbuf, bool ps, bool swap,
enum rta_share_type share,
struct alginfo *cipherdata,
struct alginfo *authdata,
diff --git a/drivers/common/dpaax/caamflib/rta/operation_cmd.h b/drivers/common/dpaax/caamflib/rta/operation_cmd.h
index 3d339cb0..e456ad3c 100644
--- a/drivers/common/dpaax/caamflib/rta/operation_cmd.h
+++ b/drivers/common/dpaax/caamflib/rta/operation_cmd.h
@@ -199,7 +199,7 @@ __rta_alg_aai_zuca(uint16_t aai)
}
struct alg_aai_map {
- uint32_t chipher_algo;
+ uint32_t cipher_algo;
int (*aai_func)(uint16_t);
uint32_t class;
};
@@ -242,7 +242,7 @@ rta_operation(struct program *program, uint32_t cipher_algo,
int ret;
for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
- if (alg_table[i].chipher_algo == cipher_algo) {
+ if (alg_table[i].cipher_algo == cipher_algo) {
if ((aai == OP_ALG_AAI_XCBC_MAC) ||
(aai == OP_ALG_AAI_CBC_XCBCMAC))
opcode |= cipher_algo | OP_TYPE_CLASS2_ALG;
@@ -340,7 +340,7 @@ rta_operation2(struct program *program, uint32_t cipher_algo,
int ret;
for (i = 0; i < alg_table_sz[rta_sec_era]; i++) {
- if (alg_table[i].chipher_algo == cipher_algo) {
+ if (alg_table[i].cipher_algo == cipher_algo) {
if ((aai == OP_ALG_AAI_XCBC_MAC) ||
(aai == OP_ALG_AAI_CBC_XCBCMAC) ||
(aai == OP_ALG_AAI_CMAC))
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 7cd3d4fa..0167c0a1 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -2290,12 +2290,12 @@ mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
{
uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
- struct mlx5_devx_obj *couners_obj = mlx5_malloc(MLX5_MEM_ZERO,
- sizeof(*couners_obj), 0,
+ struct mlx5_devx_obj *counters_obj = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(*counters_obj), 0,
SOCKET_ID_ANY);
void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
- if (!couners_obj) {
+ if (!counters_obj) {
DRV_LOG(ERR, "Failed to allocate virtio queue counters data.");
rte_errno = ENOMEM;
return NULL;
@@ -2304,22 +2304,22 @@ mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
- couners_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+ counters_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
sizeof(out));
- if (!couners_obj->obj) {
+ if (!counters_obj->obj) {
rte_errno = errno;
DRV_LOG(ERR, "Failed to create virtio queue counters Obj using"
" DevX.");
- mlx5_free(couners_obj);
+ mlx5_free(counters_obj);
return NULL;
}
- couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
- return couners_obj;
+ counters_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+ return counters_obj;
}
int
-mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
- struct mlx5_devx_virtio_q_couners_attr *attr)
+mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *counters_obj,
+ struct mlx5_devx_virtio_q_counters_attr *attr)
{
uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
uint32_t out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {0};
@@ -2332,8 +2332,8 @@ mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
- MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, couners_obj->id);
- ret = mlx5_glue->devx_obj_query(couners_obj->obj, in, sizeof(in), out,
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, counters_obj->id);
+ ret = mlx5_glue->devx_obj_query(counters_obj->obj, in, sizeof(in), out,
sizeof(out));
if (ret) {
DRV_LOG(ERR, "Failed to query virtio q counters using DevX.");
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index d7f71646..107f28bb 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -128,7 +128,7 @@ enum {
enum {
PARSE_GRAPH_NODE_CAP_LENGTH_MODE_FIXED = RTE_BIT32(0),
- PARSE_GRAPH_NODE_CAP_LENGTH_MODE_EXPLISIT_FIELD = RTE_BIT32(1),
+ PARSE_GRAPH_NODE_CAP_LENGTH_MODE_EXPLICIT_FIELD = RTE_BIT32(1),
PARSE_GRAPH_NODE_CAP_LENGTH_MODE_BITMASK_FIELD = RTE_BIT32(2)
};
@@ -491,7 +491,7 @@ struct mlx5_devx_qp_attr {
uint32_t mmo:1;
};
-struct mlx5_devx_virtio_q_couners_attr {
+struct mlx5_devx_virtio_q_counters_attr {
uint64_t received_desc;
uint64_t completed_desc;
uint32_t error_cqes;
@@ -697,7 +697,7 @@ struct mlx5_devx_obj *mlx5_devx_cmd_create_virtio_q_counters(void *ctx);
/**
* Query virtio queue counters object using DevX API.
*
- * @param[in] couners_obj
+ * @param[in] counters_obj
* Pointer to virtq object structure.
* @param [in/out] attr
* Pointer to virtio queue counters attributes structure.
@@ -706,8 +706,8 @@ struct mlx5_devx_obj *mlx5_devx_cmd_create_virtio_q_counters(void *ctx);
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
__rte_internal
-int mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
- struct mlx5_devx_virtio_q_couners_attr *attr);
+int mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *counters_obj,
+ struct mlx5_devx_virtio_q_counters_attr *attr);
__rte_internal
struct mlx5_devx_obj *mlx5_devx_cmd_create_flow_hit_aso_obj(void *ctx,
uint32_t pd);
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 982a53ff..d921d525 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3112,8 +3112,8 @@ struct mlx5_ifc_conn_track_aso_bits {
u8 max_ack_window[0x3];
u8 reserved_at_1f8[0x1];
u8 retransmission_counter[0x3];
- u8 retranmission_limit_exceeded[0x1];
- u8 retranmission_limit[0x3]; /* End of DW15. */
+ u8 retransmission_limit_exceeded[0x1];
+ u8 retransmission_limit[0x3]; /* End of DW15. */
};
struct mlx5_ifc_conn_track_offload_bits {
diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
index 25b521a7..8d8fe58d 100644
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ b/drivers/common/octeontx2/otx2_mbox.h
@@ -1296,7 +1296,7 @@ struct cpt_lf_alloc_req_msg {
struct cpt_lf_alloc_rsp_msg {
struct mbox_msghdr hdr;
- uint16_t __otx2_io eng_grpmsk;
+ uint16_t __otx2_io eng_grpmask;
};
#define CPT_INLINE_INBOUND 0
@@ -1625,7 +1625,7 @@ enum tim_af_status {
TIM_AF_INVALID_BSIZE = -813,
TIM_AF_INVALID_ENABLE_PERIODIC = -814,
TIM_AF_INVALID_ENABLE_DONTFREE = -815,
- TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816,
+ TIM_AF_ENA_DONTFREE_NSET_PERIODIC = -816,
TIM_AF_RING_ALREADY_DISABLED = -817,
};
diff --git a/drivers/common/sfc_efx/base/ef10_signed_image_layout.h b/drivers/common/sfc_efx/base/ef10_signed_image_layout.h
index 2f3dd257..81994382 100644
--- a/drivers/common/sfc_efx/base/ef10_signed_image_layout.h
+++ b/drivers/common/sfc_efx/base/ef10_signed_image_layout.h
@@ -35,7 +35,7 @@ enum {
SIGNED_IMAGE_CHUNK_IMAGE, /* Bootable binary image */
SIGNED_IMAGE_CHUNK_REFLASH_TRAILER, /* Reflash trailer */
SIGNED_IMAGE_CHUNK_SIGNATURE, /* Remaining contents of the signed image,
- * including the certifiates and signature */
+ * including the certificates and signature */
NUM_SIGNED_IMAGE_CHUNKS,
};
diff --git a/drivers/common/sfc_efx/base/efx_port.c b/drivers/common/sfc_efx/base/efx_port.c
index a5f982e3..1011cc26 100644
--- a/drivers/common/sfc_efx/base/efx_port.c
+++ b/drivers/common/sfc_efx/base/efx_port.c
@@ -36,7 +36,7 @@ efx_port_init(
epp->ep_emop->emo_reconfigure(enp);
- /* Pick up current phy capababilities */
+ /* Pick up current phy capabilities */
(void) efx_port_poll(enp, NULL);
/*
diff --git a/drivers/common/sfc_efx/base/efx_regs.h b/drivers/common/sfc_efx/base/efx_regs.h
index 5cd364ea..63e62c2b 100644
--- a/drivers/common/sfc_efx/base/efx_regs.h
+++ b/drivers/common/sfc_efx/base/efx_regs.h
@@ -533,7 +533,7 @@ extern "C" {
/*
* FR_BZ_INT_ISR0_REG(128bit):
- * Function 0 Interrupt Acknowlege Status register
+ * Function 0 Interrupt Acknowledge Status register
*/
#define FR_BZ_INT_ISR0_REG_OFST 0x00000090
/* falconb0,sienaa0=net_func_bar2 */
diff --git a/drivers/common/sfc_efx/base/efx_types.h b/drivers/common/sfc_efx/base/efx_types.h
index 12ae1084..78d8214c 100644
--- a/drivers/common/sfc_efx/base/efx_types.h
+++ b/drivers/common/sfc_efx/base/efx_types.h
@@ -3,7 +3,7 @@
* Copyright(c) 2019-2021 Xilinx, Inc.
* Copyright(c) 2007-2019 Solarflare Communications Inc.
*
- * Ackowledgement to Fen Systems Ltd.
+ * Acknowledgement to Fen Systems Ltd.
*/
#ifndef _SYS_EFX_TYPES_H
diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h
index 118a95d7..e29b8b87 100644
--- a/drivers/compress/octeontx/otx_zip.h
+++ b/drivers/compress/octeontx/otx_zip.h
@@ -66,7 +66,7 @@ extern int octtx_zip_logtype_driver;
((_align) * (((x) + (_align) - 1) / (_align)))
/**< ZIP PMD device name */
-#define COMPRESSDEV_NAME_ZIP_PMD compress_octeonx
+#define COMPRESSDEV_NAME_ZIP_PMD compress_octeontx
#define ZIP_PMD_LOG(level, fmt, args...) \
rte_log(RTE_LOG_ ## level, \
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
index 12d9d890..f92250d3 100644
--- a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -39,10 +39,10 @@ qat_comp_dev_config_gen1(struct rte_compressdev *dev,
"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
" QAT device can't be used for Dynamic Deflate.");
} else {
- comp_dev->interm_buff_mz =
+ comp_dev->interim_buff_mz =
qat_comp_setup_inter_buffers(comp_dev,
RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
- if (comp_dev->interm_buff_mz == NULL)
+ if (comp_dev->interim_buff_mz == NULL)
return -ENOMEM;
}
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index e8f57c3c..2a3ce2ad 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -815,7 +815,7 @@ qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
static int
qat_comp_create_templates(struct qat_comp_xform *qat_xform,
- const struct rte_memzone *interm_buff_mz,
+ const struct rte_memzone *interim_buff_mz,
const struct rte_comp_xform *xform,
const struct qat_comp_stream *stream,
enum rte_comp_op_type op_type,
@@ -923,7 +923,7 @@ qat_comp_create_templates(struct qat_comp_xform *qat_xform,
comp_req->u1.xlt_pars.inter_buff_ptr =
(qat_comp_get_num_im_bufs_required(qat_dev_gen)
- == 0) ? 0 : interm_buff_mz->iova;
+ == 0) ? 0 : interim_buff_mz->iova;
}
#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -979,7 +979,7 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
- && qat->interm_buff_mz == NULL
+ && qat->interim_buff_mz == NULL
&& im_bufs > 0))
qat_xform->qat_comp_request_type =
QAT_COMP_REQUEST_FIXED_COMP_STATELESS;
@@ -988,7 +988,7 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
RTE_COMP_HUFFMAN_DYNAMIC ||
xform->compress.deflate.huffman ==
RTE_COMP_HUFFMAN_DEFAULT) &&
- (qat->interm_buff_mz != NULL ||
+ (qat->interim_buff_mz != NULL ||
im_bufs == 0))
qat_xform->qat_comp_request_type =
@@ -1007,7 +1007,7 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
qat_xform->checksum_type = xform->decompress.chksum;
}
- if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
+ if (qat_comp_create_templates(qat_xform, qat->interim_buff_mz, xform,
NULL, RTE_COMP_OP_STATELESS,
qat_dev_gen)) {
QAT_LOG(ERR, "QAT: Problem with setting compression");
@@ -1107,7 +1107,7 @@ qat_comp_stream_create(struct rte_compressdev *dev,
ptr->qat_xform.qat_comp_request_type = QAT_COMP_REQUEST_DECOMPRESS;
ptr->qat_xform.checksum_type = xform->decompress.chksum;
- if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
+ if (qat_comp_create_templates(&ptr->qat_xform, qat->interim_buff_mz,
xform, ptr, RTE_COMP_OP_STATEFUL,
qat->qat_dev->qat_dev_gen)) {
QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index da6404c0..ebb93acc 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -477,7 +477,7 @@ static void
_qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
{
/* Free intermediate buffers */
- if (comp_dev->interm_buff_mz) {
+ if (comp_dev->intermediate_buff_mz) {
char mz_name[RTE_MEMZONE_NAMESIZE];
int i = qat_comp_get_num_im_bufs_required(
comp_dev->qat_dev->qat_dev_gen);
@@ -488,8 +488,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
comp_dev->qat_dev->name, i);
rte_memzone_free(rte_memzone_lookup(mz_name));
}
- rte_memzone_free(comp_dev->interm_buff_mz);
- comp_dev->interm_buff_mz = NULL;
+ rte_memzone_free(comp_dev->intermediate_buff_mz);
+ comp_dev->intermediate_buff_mz = NULL;
}
/* Free private_xform pool */
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 3c8682a7..8331b54d 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -60,7 +60,7 @@ struct qat_comp_dev_private {
/**< The pointer to this compression device structure */
const struct rte_compressdev_capabilities *qat_dev_capabilities;
/* QAT device compression capabilities */
- const struct rte_memzone *interm_buff_mz;
+ const struct rte_memzone *intermediate_buff_mz;
/**< The device's memory for intermediate buffers */
struct rte_mempool *xformpool;
/**< The device's pool for qat_comp_xforms */
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index 8e9cfe73..7b2c7d04 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -58,7 +58,7 @@ struct sec_outring_entry {
uint32_t status; /* Status for completed descriptor */
} __rte_packed;
-/* virtual address conversin when mempool support is available for ctx */
+/* virtual address conversion when mempool support is available for ctx */
static inline phys_addr_t
caam_jr_vtop_ctx(struct caam_jr_op_ctx *ctx, void *vaddr)
{
@@ -447,7 +447,7 @@ caam_jr_prep_cdb(struct caam_jr_session *ses)
}
} else {
/* Auth_only_len is overwritten in fd for each job */
- shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
+ shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
true, swap, SHR_SERIAL,
&alginfo_c, &alginfo_a,
ses->iv.length,
diff --git a/drivers/crypto/ccp/ccp_crypto.h b/drivers/crypto/ccp/ccp_crypto.h
index d307f73e..bc14e8a4 100644
--- a/drivers/crypto/ccp/ccp_crypto.h
+++ b/drivers/crypto/ccp/ccp_crypto.h
@@ -291,7 +291,7 @@ struct ccp_session {
} ut;
enum ccp_hash_op op;
uint64_t key_length;
- /**< max hash key size 144 bytes (struct capabilties) */
+ /**< max hash key size 144 bytes (struct capabilities) */
uint8_t key[144];
/**< max be key size of AES is 32*/
uint8_t key_ccp[32];
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 0d363651..a510271a 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -65,7 +65,7 @@ struct pending_queue {
uint64_t time_out;
};
-struct crypto_adpter_info {
+struct crypto_adapter_info {
bool enabled;
/**< Set if queue pair is added to crypto adapter */
struct rte_mempool *req_mp;
@@ -85,7 +85,7 @@ struct cnxk_cpt_qp {
/**< Metabuf info required to support operations on the queue pair */
struct roc_cpt_lmtline lmtline;
/**< Lmtline information */
- struct crypto_adpter_info ca;
+ struct crypto_adapter_info ca;
/**< Crypto adapter related info */
};
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index a5b05237..e5e554fd 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -281,7 +281,7 @@ build_proto_fd(dpaa2_sec_session *sess,
#endif
static inline int
-build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
+build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
struct qbman_fd *fd, __rte_unused uint16_t bpid)
{
@@ -426,7 +426,7 @@ build_authenc_gcm_sg_fd(dpaa2_sec_session *sess,
}
static inline int
-build_authenc_gcm_fd(dpaa2_sec_session *sess,
+build_authenc_gcm_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
{
@@ -448,7 +448,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
/* TODO we are using the first FLE entry to store Mbuf and session ctxt.
* Currently we donot know which FLE has the mbuf stored.
- * So while retreiving we can go back 1 FLE from the FD -ADDR
+ * So while retrieving we can go back 1 FLE from the FD -ADDR
* to get the MBUF Addr from the previous FLE.
* We can have a better approach to use the inline Mbuf
*/
@@ -566,7 +566,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess,
}
static inline int
-build_authenc_sg_fd(dpaa2_sec_session *sess,
+build_authenc_sg_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
struct qbman_fd *fd, __rte_unused uint16_t bpid)
{
@@ -713,7 +713,7 @@ build_authenc_sg_fd(dpaa2_sec_session *sess,
}
static inline int
-build_authenc_fd(dpaa2_sec_session *sess,
+build_authenc_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
struct qbman_fd *fd, uint16_t bpid)
{
@@ -740,7 +740,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
/* we are using the first FLE entry to store Mbuf.
* Currently we donot know which FLE has the mbuf stored.
- * So while retreiving we can go back 1 FLE from the FD -ADDR
+ * So while retrieving we can go back 1 FLE from the FD -ADDR
* to get the MBUF Addr from the previous FLE.
* We can have a better approach to use the inline Mbuf
*/
@@ -1009,7 +1009,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
memset(fle, 0, FLE_POOL_BUF_SIZE);
/* TODO we are using the first FLE entry to store Mbuf.
* Currently we donot know which FLE has the mbuf stored.
- * So while retreiving we can go back 1 FLE from the FD -ADDR
+ * So while retrieving we can go back 1 FLE from the FD -ADDR
* to get the MBUF Addr from the previous FLE.
* We can have a better approach to use the inline Mbuf
*/
@@ -1262,7 +1262,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
memset(fle, 0, FLE_POOL_BUF_SIZE);
/* TODO we are using the first FLE entry to store Mbuf.
* Currently we donot know which FLE has the mbuf stored.
- * So while retreiving we can go back 1 FLE from the FD -ADDR
+ * So while retrieving we can go back 1 FLE from the FD -ADDR
* to get the MBUF Addr from the previous FLE.
* We can have a better approach to use the inline Mbuf
*/
@@ -1372,10 +1372,10 @@ build_sec_fd(struct rte_crypto_op *op,
ret = build_auth_sg_fd(sess, op, fd, bpid);
break;
case DPAA2_SEC_AEAD:
- ret = build_authenc_gcm_sg_fd(sess, op, fd, bpid);
+ ret = build_authenc_gcm_sg_fd(sess, op, fd, bpid);
break;
case DPAA2_SEC_CIPHER_HASH:
- ret = build_authenc_sg_fd(sess, op, fd, bpid);
+ ret = build_authenc_sg_fd(sess, op, fd, bpid);
break;
#ifdef RTE_LIB_SECURITY
case DPAA2_SEC_IPSEC:
@@ -1396,10 +1396,10 @@ build_sec_fd(struct rte_crypto_op *op,
ret = build_auth_fd(sess, op, fd, bpid);
break;
case DPAA2_SEC_AEAD:
- ret = build_authenc_gcm_fd(sess, op, fd, bpid);
+ ret = build_authenc_gcm_fd(sess, op, fd, bpid);
break;
case DPAA2_SEC_CIPHER_HASH:
- ret = build_authenc_fd(sess, op, fd, bpid);
+ ret = build_authenc_fd(sess, op, fd, bpid);
break;
#ifdef RTE_LIB_SECURITY
case DPAA2_SEC_IPSEC:
@@ -1568,7 +1568,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd)
/* we are using the first FLE entry to store Mbuf.
* Currently we donot know which FLE has the mbuf stored.
- * So while retreiving we can go back 1 FLE from the FD -ADDR
+ * So while retrieving we can go back 1 FLE from the FD -ADDR
* to get the MBUF Addr from the previous FLE.
* We can have a better approach to use the inline Mbuf
*/
@@ -1580,7 +1580,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd)
}
op = (struct rte_crypto_op *)DPAA2_GET_FLE_ADDR((fle - 1));
- /* Prefeth op */
+ /* Prefetch op */
src = op->sym->m_src;
rte_prefetch0(src);
@@ -2525,7 +2525,7 @@ dpaa2_sec_aead_chain_init(struct rte_cryptodev *dev,
priv->flc_desc[0].desc[2] = 0;
if (session->ctxt_type == DPAA2_SEC_CIPHER_HASH) {
- bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
+ bufsize = cnstr_shdsc_authenc(priv->flc_desc[0].desc, 1,
0, SHR_SERIAL,
&cipherdata, &authdata,
session->iv.length,
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index f20acdd1..0d500919 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -628,7 +628,7 @@ dpaa_sec_prep_cdb(dpaa_sec_session *ses)
/* Auth_only_len is set as 0 here and it will be
* overwritten in fd for each packet.
*/
- shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
+ shared_desc_len = cnstr_shdsc_authenc(cdb->sh_desc,
true, swap, SHR_SERIAL, &alginfo_c, &alginfo_a,
ses->iv.length,
ses->digest_length, ses->dir);
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 6ebc1767..6965b31e 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -141,10 +141,6 @@ qat_sym_session_clear(struct rte_cryptodev *dev,
unsigned int
qat_sym_session_get_private_size(struct rte_cryptodev *dev);
-void
-qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session,
- struct icp_qat_fw_comn_req_hdr *header,
- enum qat_sym_proto_flag proto_flags);
int
qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
int
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index ed648667..ce23e38b 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -862,7 +862,7 @@ virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev)
VIRTIO_CRYPTO_INIT_LOG_DBG("queue_pairs[%d]=%p",
i, dev->data->queue_pairs[i]);
- virtqueue_detatch_unused(dev->data->queue_pairs[i]);
+ virtqueue_detach_unused(dev->data->queue_pairs[i]);
VIRTIO_CRYPTO_INIT_LOG_DBG("After freeing dataq[%d] used and "
"unused buf", i);
@@ -1205,7 +1205,7 @@ virtio_crypto_sym_pad_auth_param(
static int
virtio_crypto_sym_pad_op_ctrl_req(
struct virtio_crypto_op_ctrl_req *ctrl,
- struct rte_crypto_sym_xform *xform, bool is_chainned,
+ struct rte_crypto_sym_xform *xform, bool is_chained,
uint8_t *cipher_key_data, uint8_t *auth_key_data,
struct virtio_crypto_session *session)
{
@@ -1228,7 +1228,7 @@ virtio_crypto_sym_pad_op_ctrl_req(
VIRTIO_CRYPTO_MAX_IV_SIZE);
return -1;
}
- if (is_chainned)
+ if (is_chained)
ret = virtio_crypto_sym_pad_cipher_param(
&ctrl->u.sym_create_session.u.chain.para
.cipher_param, cipher_xform);
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
index fd8be581..33985d1d 100644
--- a/drivers/crypto/virtio/virtqueue.c
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -22,7 +22,7 @@ virtqueue_disable_intr(struct virtqueue *vq)
}
void
-virtqueue_detatch_unused(struct virtqueue *vq)
+virtqueue_detach_unused(struct virtqueue *vq)
{
struct rte_crypto_op *cop = NULL;
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index c96ca629..1a67b408 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -99,7 +99,7 @@ void virtqueue_disable_intr(struct virtqueue *vq);
/**
* Get all mbufs to be freed.
*/
-void virtqueue_detatch_unused(struct virtqueue *vq);
+void virtqueue_detach_unused(struct virtqueue *vq);
static inline int
virtqueue_full(const struct virtqueue *vq)
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index a230496b..533f7231 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -624,7 +624,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
ioat = dmadev->data->dev_private;
ioat->dmadev = dmadev;
ioat->regs = dev->mem_resource[0].addr;
- ioat->doorbell = &ioat->regs->dmacount;
+ ioat->doorbell = &ioat->regs->dmacount;
ioat->qcfg.nb_desc = 0;
ioat->desc_ring = NULL;
ioat->version = ioat->regs->cbver;
diff --git a/drivers/dma/ioat/ioat_hw_defs.h b/drivers/dma/ioat/ioat_hw_defs.h
index dc3493a7..88bf09a7 100644
--- a/drivers/dma/ioat/ioat_hw_defs.h
+++ b/drivers/dma/ioat/ioat_hw_defs.h
@@ -68,7 +68,7 @@ struct ioat_registers {
uint8_t reserved6[0x2]; /* 0x82 */
uint8_t chancmd; /* 0x84 */
uint8_t reserved3[1]; /* 0x85 */
- uint16_t dmacount; /* 0x86 */
+ uint16_t dmacount; /* 0x86 */
uint64_t chansts; /* 0x88 */
uint64_t chainaddr; /* 0x90 */
uint64_t chancmp; /* 0x98 */
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index 6da8b14b..440b713a 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -145,7 +145,7 @@ tim_err_desc(int rc)
{
switch (rc) {
case TIM_AF_NO_RINGS_LEFT:
- otx2_err("Unable to allocat new TIM ring.");
+ otx2_err("Unable to allocate a new TIM ring.");
break;
case TIM_AF_INVALID_NPA_PF_FUNC:
otx2_err("Invalid NPA pf func.");
@@ -189,7 +189,7 @@ tim_err_desc(int rc)
case TIM_AF_INVALID_ENABLE_DONTFREE:
otx2_err("Invalid Don't free value.");
break;
- case TIM_AF_ENA_DONTFRE_NSET_PERIODIC:
+ case TIM_AF_ENA_DONTFREE_NSET_PERIODIC:
otx2_err("Don't free bit not set when periodic is enabled.");
break;
case TIM_AF_RING_ALREADY_DISABLED:
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index b618cba3..1d4b5e1c 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -309,7 +309,7 @@ eth_ark_dev_init(struct rte_eth_dev *dev)
return -1;
}
if (ark->sysctrl.t32[3] != 0) {
- if (ark_rqp_lasped(ark->rqpacing)) {
+ if (ark_rqp_lapsed(ark->rqpacing)) {
ARK_PMD_LOG(ERR, "Arkville Evaluation System - "
"Timer has Expired\n");
return -1;
@@ -565,7 +565,7 @@ eth_ark_dev_start(struct rte_eth_dev *dev)
if (ark->start_pg && (dev->data->port_id == 0)) {
pthread_t thread;
- /* Delay packet generatpr start allow the hardware to be ready
+ /* Delay packet generator start to allow the hardware to be ready
* This is only used for sanity checking with internal generator
*/
if (rte_ctrl_thread_create(&thread, "ark-delay-pg", NULL,
diff --git a/drivers/net/ark/ark_rqp.c b/drivers/net/ark/ark_rqp.c
index ef9ccd07..1193a462 100644
--- a/drivers/net/ark/ark_rqp.c
+++ b/drivers/net/ark/ark_rqp.c
@@ -62,7 +62,7 @@ ark_rqp_dump(struct ark_rqpace_t *rqp)
}
int
-ark_rqp_lasped(struct ark_rqpace_t *rqp)
+ark_rqp_lapsed(struct ark_rqpace_t *rqp)
{
- return rqp->lasped;
+ return rqp->lapsed;
}
diff --git a/drivers/net/ark/ark_rqp.h b/drivers/net/ark/ark_rqp.h
index 6c804606..fc9c5b57 100644
--- a/drivers/net/ark/ark_rqp.h
+++ b/drivers/net/ark/ark_rqp.h
@@ -48,10 +48,10 @@ struct ark_rqpace_t {
volatile uint32_t cpld_pending_max;
volatile uint32_t err_count_other;
char eval[4];
- volatile int lasped;
+ volatile int lapsed;
};
void ark_rqp_dump(struct ark_rqpace_t *rqp);
void ark_rqp_stats_reset(struct ark_rqpace_t *rqp);
-int ark_rqp_lasped(struct ark_rqpace_t *rqp);
+int ark_rqp_lapsed(struct ark_rqpace_t *rqp);
#endif
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 74e3018e..f4c54448 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -4031,17 +4031,17 @@ static void bnx2x_attn_int_deasserted2(struct bnx2x_softc *sc, uint32_t attn)
}
}
- if (attn & HW_INTERRUT_ASSERT_SET_2) {
+ if (attn & HW_INTERRUPT_ASSERT_SET_2) {
reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_2 :
MISC_REG_AEU_ENABLE1_FUNC_0_OUT_2);
val = REG_RD(sc, reg_offset);
- val &= ~(attn & HW_INTERRUT_ASSERT_SET_2);
+ val &= ~(attn & HW_INTERRUPT_ASSERT_SET_2);
REG_WR(sc, reg_offset, val);
PMD_DRV_LOG(ERR, sc,
"FATAL HW block attention set2 0x%x",
- (uint32_t) (attn & HW_INTERRUT_ASSERT_SET_2));
+ (uint32_t) (attn & HW_INTERRUPT_ASSERT_SET_2));
rte_panic("HW block attention set2");
}
}
@@ -4061,17 +4061,17 @@ static void bnx2x_attn_int_deasserted1(struct bnx2x_softc *sc, uint32_t attn)
}
}
- if (attn & HW_INTERRUT_ASSERT_SET_1) {
+ if (attn & HW_INTERRUPT_ASSERT_SET_1) {
reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_1 :
MISC_REG_AEU_ENABLE1_FUNC_0_OUT_1);
val = REG_RD(sc, reg_offset);
- val &= ~(attn & HW_INTERRUT_ASSERT_SET_1);
+ val &= ~(attn & HW_INTERRUPT_ASSERT_SET_1);
REG_WR(sc, reg_offset, val);
PMD_DRV_LOG(ERR, sc,
"FATAL HW block attention set1 0x%08x",
- (uint32_t) (attn & HW_INTERRUT_ASSERT_SET_1));
+ (uint32_t) (attn & HW_INTERRUPT_ASSERT_SET_1));
rte_panic("HW block attention set1");
}
}
@@ -4103,13 +4103,13 @@ static void bnx2x_attn_int_deasserted0(struct bnx2x_softc *sc, uint32_t attn)
bnx2x_release_phy_lock(sc);
}
- if (attn & HW_INTERRUT_ASSERT_SET_0) {
+ if (attn & HW_INTERRUPT_ASSERT_SET_0) {
val = REG_RD(sc, reg_offset);
- val &= ~(attn & HW_INTERRUT_ASSERT_SET_0);
+ val &= ~(attn & HW_INTERRUPT_ASSERT_SET_0);
REG_WR(sc, reg_offset, val);
rte_panic("FATAL HW block attention set0 0x%lx",
- (attn & (unsigned long)HW_INTERRUT_ASSERT_SET_0));
+ (attn & (unsigned long)HW_INTERRUPT_ASSERT_SET_0));
}
}
diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h
index d7e1729e..3e79d272 100644
--- a/drivers/net/bnx2x/bnx2x.h
+++ b/drivers/net/bnx2x/bnx2x.h
@@ -1709,7 +1709,7 @@ static const uint32_t dmae_reg_go_c[] = {
GENERAL_ATTEN_OFFSET(LATCHED_ATTN_RBCP) | \
GENERAL_ATTEN_OFFSET(LATCHED_ATTN_RSVD_GRC))
-#define HW_INTERRUT_ASSERT_SET_0 \
+#define HW_INTERRUPT_ASSERT_SET_0 \
(AEU_INPUTS_ATTN_BITS_TSDM_HW_INTERRUPT | \
AEU_INPUTS_ATTN_BITS_TCM_HW_INTERRUPT | \
AEU_INPUTS_ATTN_BITS_TSEMI_HW_INTERRUPT | \
@@ -1722,7 +1722,7 @@ static const uint32_t dmae_reg_go_c[] = {
AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR |\
AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR |\
AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR)
-#define HW_INTERRUT_ASSERT_SET_1 \
+#define HW_INTERRUPT_ASSERT_SET_1 \
(AEU_INPUTS_ATTN_BITS_QM_HW_INTERRUPT | \
AEU_INPUTS_ATTN_BITS_TIMERS_HW_INTERRUPT | \
AEU_INPUTS_ATTN_BITS_XSDM_HW_INTERRUPT | \
@@ -1750,7 +1750,7 @@ static const uint32_t dmae_reg_go_c[] = {
AEU_INPUTS_ATTN_BITS_UPB_PARITY_ERROR | \
AEU_INPUTS_ATTN_BITS_CSDM_PARITY_ERROR |\
AEU_INPUTS_ATTN_BITS_CCM_PARITY_ERROR)
-#define HW_INTERRUT_ASSERT_SET_2 \
+#define HW_INTERRUPT_ASSERT_SET_2 \
(AEU_INPUTS_ATTN_BITS_CSEMI_HW_INTERRUPT | \
AEU_INPUTS_ATTN_BITS_CDU_HW_INTERRUPT | \
AEU_INPUTS_ATTN_BITS_DMAE_HW_INTERRUPT | \
diff --git a/drivers/net/bnx2x/bnx2x_stats.c b/drivers/net/bnx2x/bnx2x_stats.c
index c07b0151..b19f7d67 100644
--- a/drivers/net/bnx2x/bnx2x_stats.c
+++ b/drivers/net/bnx2x/bnx2x_stats.c
@@ -551,7 +551,7 @@ bnx2x_bmac_stats_update(struct bnx2x_softc *sc)
UPDATE_STAT64(rx_stat_grfrg, rx_stat_etherstatsfragments);
UPDATE_STAT64(rx_stat_grjbr, rx_stat_etherstatsjabbers);
UPDATE_STAT64(rx_stat_grxcf, rx_stat_maccontrolframesreceived);
- UPDATE_STAT64(rx_stat_grxpf, rx_stat_xoffstateentered);
+ UPDATE_STAT64(rx_stat_grxpf, rx_stat_xoffstateentered);
UPDATE_STAT64(rx_stat_grxpf, rx_stat_mac_xpf);
UPDATE_STAT64(tx_stat_gtxpf, tx_stat_outxoffsent);
@@ -586,7 +586,7 @@ bnx2x_bmac_stats_update(struct bnx2x_softc *sc)
UPDATE_STAT64(rx_stat_grfrg, rx_stat_etherstatsfragments);
UPDATE_STAT64(rx_stat_grjbr, rx_stat_etherstatsjabbers);
UPDATE_STAT64(rx_stat_grxcf, rx_stat_maccontrolframesreceived);
- UPDATE_STAT64(rx_stat_grxpf, rx_stat_xoffstateentered);
+ UPDATE_STAT64(rx_stat_grxpf, rx_stat_xoffstateentered);
UPDATE_STAT64(rx_stat_grxpf, rx_stat_mac_xpf);
UPDATE_STAT64(tx_stat_gtxpf, tx_stat_outxoffsent);
UPDATE_STAT64(tx_stat_gtxpf, tx_stat_flowcontroldone);
@@ -646,7 +646,7 @@ bnx2x_mstat_stats_update(struct bnx2x_softc *sc)
ADD_STAT64(stats_rx.rx_grovr, rx_stat_dot3statsframestoolong);
ADD_STAT64(stats_rx.rx_grfrg, rx_stat_etherstatsfragments);
ADD_STAT64(stats_rx.rx_grxcf, rx_stat_maccontrolframesreceived);
- ADD_STAT64(stats_rx.rx_grxpf, rx_stat_xoffstateentered);
+ ADD_STAT64(stats_rx.rx_grxpf, rx_stat_xoffstateentered);
ADD_STAT64(stats_rx.rx_grxpf, rx_stat_mac_xpf);
ADD_STAT64(stats_tx.tx_gtxpf, tx_stat_outxoffsent);
ADD_STAT64(stats_tx.tx_gtxpf, tx_stat_flowcontroldone);
@@ -729,7 +729,7 @@ bnx2x_emac_stats_update(struct bnx2x_softc *sc)
UPDATE_EXTEND_STAT(rx_stat_etherstatsfragments);
UPDATE_EXTEND_STAT(rx_stat_etherstatsjabbers);
UPDATE_EXTEND_STAT(rx_stat_maccontrolframesreceived);
- UPDATE_EXTEND_STAT(rx_stat_xoffstateentered);
+ UPDATE_EXTEND_STAT(rx_stat_xoffstateentered);
UPDATE_EXTEND_STAT(rx_stat_xonpauseframesreceived);
UPDATE_EXTEND_STAT(rx_stat_xoffpauseframesreceived);
UPDATE_EXTEND_STAT(tx_stat_outxonsent);
diff --git a/drivers/net/bnx2x/bnx2x_stats.h b/drivers/net/bnx2x/bnx2x_stats.h
index 11ddab50..5eeb148b 100644
--- a/drivers/net/bnx2x/bnx2x_stats.h
+++ b/drivers/net/bnx2x/bnx2x_stats.h
@@ -105,8 +105,8 @@ struct bnx2x_eth_stats {
uint32_t rx_stat_bmac_xpf_lo;
uint32_t rx_stat_bmac_xcf_hi;
uint32_t rx_stat_bmac_xcf_lo;
- uint32_t rx_stat_xoffstateentered_hi;
- uint32_t rx_stat_xoffstateentered_lo;
+ uint32_t rx_stat_xoffstateentered_hi;
+ uint32_t rx_stat_xoffstateentered_lo;
uint32_t rx_stat_xonpauseframesreceived_hi;
uint32_t rx_stat_xonpauseframesreceived_lo;
uint32_t rx_stat_xoffpauseframesreceived_hi;
diff --git a/drivers/net/bnx2x/ecore_hsi.h b/drivers/net/bnx2x/ecore_hsi.h
index eda79408..7fc5525b 100644
--- a/drivers/net/bnx2x/ecore_hsi.h
+++ b/drivers/net/bnx2x/ecore_hsi.h
@@ -961,10 +961,10 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */
#define PORT_FEAT_CFG_DCBX_DISABLED 0x00000000
#define PORT_FEAT_CFG_DCBX_ENABLED 0x00000100
- #define PORT_FEAT_CFG_AUTOGREEEN_MASK 0x00000200
- #define PORT_FEAT_CFG_AUTOGREEEN_SHIFT 9
- #define PORT_FEAT_CFG_AUTOGREEEN_DISABLED 0x00000000
- #define PORT_FEAT_CFG_AUTOGREEEN_ENABLED 0x00000200
+ #define PORT_FEAT_CFG_AUTOGREEEN_MASK 0x00000200
+ #define PORT_FEAT_CFG_AUTOGREEEN_SHIFT 9
+ #define PORT_FEAT_CFG_AUTOGREEEN_DISABLED 0x00000000
+ #define PORT_FEAT_CFG_AUTOGREEEN_ENABLED 0x00000200
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_MASK 0x00000C00
#define PORT_FEAT_CFG_STORAGE_PERSONALITY_SHIFT 10
@@ -1070,7 +1070,7 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */
#define PORT_FEATURE_MBA_VLAN_TAG_MASK 0x0000FFFF
#define PORT_FEATURE_MBA_VLAN_TAG_SHIFT 0
#define PORT_FEATURE_MBA_VLAN_EN 0x00010000
- #define PORT_FEATUTE_BOFM_CFGD_EN 0x00020000
+ #define PORT_FEATURE_BOFM_CFGD_EN 0x00020000
#define PORT_FEATURE_BOFM_CFGD_FTGT 0x00040000
#define PORT_FEATURE_BOFM_CFGD_VEN 0x00080000
@@ -1183,12 +1183,12 @@ struct shm_dev_info { /* size */
struct extended_dev_info_shared_cfg { /* NVRAM OFFSET */
- /* Threshold in celcius to start using the fan */
+ /* Threshold in Celsius to start using the fan */
uint32_t temperature_monitor1; /* 0x4000 */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_THRESH_MASK 0x0000007F
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_FAN_THRESH_SHIFT 0
- /* Threshold in celcius to shut down the board */
+ /* Threshold in Celsius to shut down the board */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_THRESH_MASK 0x00007F00
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SHUT_THRESH_SHIFT 8
@@ -1378,12 +1378,12 @@ struct extended_dev_info_shared_cfg { /* NVRAM OFFSET */
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_ACTUAL 0x00001000
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_FORCE_B0 0x00002000
#define EXTENDED_DEV_INFO_SHARED_CFG_REV_ID_CTRL_FORCE_B1 0x00003000
- /* Threshold in celcius for max continuous operation */
+ /* Threshold in Celsius for max continuous operation */
uint32_t temperature_report; /* 0x4014 */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_MCOT_MASK 0x0000007F
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_MCOT_SHIFT 0
- /* Threshold in celcius for sensor caution */
+ /* Threshold in Celsius for sensor caution */
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SCT_MASK 0x00007F00
#define EXTENDED_DEV_INFO_SHARED_CFG_TEMP_SCT_SHIFT 8
@@ -2172,7 +2172,7 @@ struct eee_remote_vals {
uint32_t rx_tw;
};
-/**** SUPPORT FOR SHMEM ARRRAYS ***
+/**** SUPPORT FOR SHMEM ARRAYS ***
* The SHMEM HSI is aligned on 32 bit boundaries which makes it difficult to
* define arrays with storage types smaller then unsigned dwords.
* The macros below add generic support for SHMEM arrays with numeric elements
@@ -2754,8 +2754,8 @@ struct shmem2_region {
struct eee_remote_vals eee_remote_vals[PORT_MAX]; /* 0x0110 */
uint32_t pf_allocation[E2_FUNC_MAX]; /* 0x0120 */
- #define PF_ALLOACTION_MSIX_VECTORS_MASK 0x000000ff /* real value, as PCI config space can show only maximum of 64 vectors */
- #define PF_ALLOACTION_MSIX_VECTORS_SHIFT 0
+ #define PF_ALLOCATION_MSIX_VECTORS_MASK 0x000000ff /* real value, as PCI config space can show only maximum of 64 vectors */
+ #define PF_ALLOCATION_MSIX_VECTORS_SHIFT 0
/* the status of EEE auto-negotiation
* bits 15:0 the configured tx-lpi entry timer value. Depends on bit 31.
@@ -2940,7 +2940,7 @@ struct emac_stats {
uint32_t rx_stat_xonpauseframesreceived;
uint32_t rx_stat_xoffpauseframesreceived;
uint32_t rx_stat_maccontrolframesreceived;
- uint32_t rx_stat_xoffstateentered;
+ uint32_t rx_stat_xoffstateentered;
uint32_t rx_stat_dot3statsframestoolong;
uint32_t rx_stat_etherstatsjabbers;
uint32_t rx_stat_etherstatsundersizepkts;
@@ -3378,8 +3378,8 @@ struct mac_stx {
uint32_t rx_stat_mac_xcf_lo;
/* xoff_state_entered */
- uint32_t rx_stat_xoffstateentered_hi;
- uint32_t rx_stat_xoffstateentered_lo;
+ uint32_t rx_stat_xoffstateentered_hi;
+ uint32_t rx_stat_xoffstateentered_lo;
/* pause_xon_frames_received */
uint32_t rx_stat_xonpauseframesreceived_hi;
uint32_t rx_stat_xonpauseframesreceived_lo;
@@ -6090,8 +6090,8 @@ struct fw_version {
uint32_t flags;
#define FW_VERSION_OPTIMIZED (0x1 << 0)
#define FW_VERSION_OPTIMIZED_SHIFT 0
-#define FW_VERSION_BIG_ENDIEN (0x1 << 1)
-#define FW_VERSION_BIG_ENDIEN_SHIFT 1
+#define FW_VERSION_BIG_ENDIAN (0x1 << 1)
+#define FW_VERSION_BIG_ENDIAN_SHIFT 1
#define FW_VERSION_CHIP_VERSION (0x3 << 2)
#define FW_VERSION_CHIP_VERSION_SHIFT 2
#define __FW_VERSION_RESERVED (0xFFFFFFF << 4)
@@ -6407,8 +6407,8 @@ struct pram_fw_version {
#define PRAM_FW_VERSION_OPTIMIZED_SHIFT 0
#define PRAM_FW_VERSION_STORM_ID (0x3 << 1)
#define PRAM_FW_VERSION_STORM_ID_SHIFT 1
-#define PRAM_FW_VERSION_BIG_ENDIEN (0x1 << 3)
-#define PRAM_FW_VERSION_BIG_ENDIEN_SHIFT 3
+#define PRAM_FW_VERSION_BIG_ENDIAN (0x1 << 3)
+#define PRAM_FW_VERSION_BIG_ENDIAN_SHIFT 3
#define PRAM_FW_VERSION_CHIP_VERSION (0x3 << 4)
#define PRAM_FW_VERSION_CHIP_VERSION_SHIFT 4
#define __PRAM_FW_VERSION_RESERVED0 (0x3 << 6)
diff --git a/drivers/net/bnx2x/ecore_init.h b/drivers/net/bnx2x/ecore_init.h
index 4e348612..a339c0bf 100644
--- a/drivers/net/bnx2x/ecore_init.h
+++ b/drivers/net/bnx2x/ecore_init.h
@@ -288,7 +288,7 @@ static inline void ecore_dcb_config_qm(struct bnx2x_softc *sc, enum cos_mode mod
*
* IMPORTANT REMARKS:
* 1. the cmng_init struct does not represent the contiguous internal ram
- * structure. the driver should use the XSTORM_CMNG_PERPORT_VARS_OFFSET
+ * structure. the driver should use the XSTORM_CMNG_PER_PORT_VARS_OFFSET
* offset in order to write the port sub struct and the
* PFID_FROM_PORT_AND_VNIC offset for writing the vnic sub struct (in other
* words - don't use memcpy!).
diff --git a/drivers/net/bnx2x/ecore_reg.h b/drivers/net/bnx2x/ecore_reg.h
index 6f7b0522..6b220bc5 100644
--- a/drivers/net/bnx2x/ecore_reg.h
+++ b/drivers/net/bnx2x/ecore_reg.h
@@ -1398,11 +1398,11 @@
* ~nig_registers_led_control_blink_traffic_p0.led_control_blink_traffic_p0
*/
#define NIG_REG_LED_CONTROL_OVERRIDE_TRAFFIC_P0 0x102f8
-/* [RW 1] Port0: If set along with the led_control_override_trafic_p0 bit;
+/* [RW 1] Port0: If set along with the led_control_override_traffic_p0 bit;
* turns on the Traffic LED. If the led_control_blink_traffic_p0 bit is also
* set; the LED will blink with blink rate specified in
* ~nig_registers_led_control_blink_rate_p0.led_control_blink_rate_p0 and
- * ~nig_regsters_led_control_blink_rate_ena_p0.led_control_blink_rate_ena_p0
+ * ~nig_registers_led_control_blink_rate_ena_p0.led_control_blink_rate_ena_p0
* fields.
*/
#define NIG_REG_LED_CONTROL_TRAFFIC_P0 0x10300
@@ -4570,8 +4570,8 @@
#define PCICFG_COMMAND_RESERVED (0x1f<<11)
#define PCICFG_STATUS_OFFSET 0x06
#define PCICFG_REVISION_ID_OFFSET 0x08
-#define PCICFG_REVESION_ID_MASK 0xff
-#define PCICFG_REVESION_ID_ERROR_VAL 0xff
+#define PCICFG_REVISION_ID_MASK 0xff
+#define PCICFG_REVISION_ID_ERROR_VAL 0xff
#define PCICFG_CACHE_LINE_SIZE 0x0c
#define PCICFG_LATENCY_TIMER 0x0d
#define PCICFG_HEADER_TYPE 0x0e
@@ -5272,8 +5272,8 @@
#define MDIO_GP_STATUS_TOP_AN_STATUS1_DUPLEX_STATUS 0x0008
#define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_MR_LP_NP_AN_ABLE 0x0010
#define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_LP_NP_BAM_ABLE 0x0020
-#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_TXSIDE 0x0040
-#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_RXSIDE 0x0080
+#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_TXSIDE 0x0040
+#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_RXSIDE 0x0080
#define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_MASK 0x3f00
#define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_10M 0x0000
#define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_100M 0x0100
diff --git a/drivers/net/bnx2x/ecore_sp.c b/drivers/net/bnx2x/ecore_sp.c
index c6c38577..6c727f2f 100644
--- a/drivers/net/bnx2x/ecore_sp.c
+++ b/drivers/net/bnx2x/ecore_sp.c
@@ -2352,11 +2352,11 @@ static int ecore_mcast_get_next_bin(struct ecore_mcast_obj *o, int last)
int i, j, inner_start = last % BIT_VEC64_ELEM_SZ;
for (i = last / BIT_VEC64_ELEM_SZ; i < ECORE_MCAST_VEC_SZ; i++) {
- if (o->registry.aprox_match.vec[i])
+ if (o->registry.approx_match.vec[i])
for (j = inner_start; j < BIT_VEC64_ELEM_SZ; j++) {
int cur_bit = j + BIT_VEC64_ELEM_SZ * i;
if (BIT_VEC64_TEST_BIT
- (o->registry.aprox_match.vec, cur_bit)) {
+ (o->registry.approx_match.vec, cur_bit)) {
return cur_bit;
}
}
@@ -2379,7 +2379,7 @@ static int ecore_mcast_clear_first_bin(struct ecore_mcast_obj *o)
int cur_bit = ecore_mcast_get_next_bin(o, 0);
if (cur_bit >= 0)
- BIT_VEC64_CLEAR_BIT(o->registry.aprox_match.vec, cur_bit);
+ BIT_VEC64_CLEAR_BIT(o->registry.approx_match.vec, cur_bit);
return cur_bit;
}
@@ -2421,7 +2421,7 @@ static void ecore_mcast_set_one_rule_e2(struct bnx2x_softc *sc __rte_unused,
switch (cmd) {
case ECORE_MCAST_CMD_ADD:
bin = ecore_mcast_bin_from_mac(cfg_data->mac);
- BIT_VEC64_SET_BIT(o->registry.aprox_match.vec, bin);
+ BIT_VEC64_SET_BIT(o->registry.approx_match.vec, bin);
break;
case ECORE_MCAST_CMD_DEL:
@@ -2812,7 +2812,7 @@ static int ecore_mcast_refresh_registry_e2(struct ecore_mcast_obj *o)
uint64_t elem;
for (i = 0; i < ECORE_MCAST_VEC_SZ; i++) {
- elem = o->registry.aprox_match.vec[i];
+ elem = o->registry.approx_match.vec[i];
for (; elem; cnt++)
elem &= elem - 1;
}
@@ -2950,7 +2950,7 @@ static void ecore_mcast_hdl_add_e1h(struct bnx2x_softc *sc __rte_unused,
bit);
/* bookkeeping... */
- BIT_VEC64_SET_BIT(o->registry.aprox_match.vec, bit);
+ BIT_VEC64_SET_BIT(o->registry.approx_match.vec, bit);
}
}
@@ -2998,8 +2998,8 @@ static int ecore_mcast_setup_e1h(struct bnx2x_softc *sc,
ECORE_MSG(sc, "Invalidating multicast MACs configuration");
/* clear the registry */
- ECORE_MEMSET(o->registry.aprox_match.vec, 0,
- sizeof(o->registry.aprox_match.vec));
+ ECORE_MEMSET(o->registry.approx_match.vec, 0,
+ sizeof(o->registry.approx_match.vec));
break;
case ECORE_MCAST_CMD_RESTORE:
@@ -3016,8 +3016,8 @@ static int ecore_mcast_setup_e1h(struct bnx2x_softc *sc,
REG_WR(sc, ECORE_MC_HASH_OFFSET(sc, i), mc_filter[i]);
} else
/* clear the registry */
- ECORE_MEMSET(o->registry.aprox_match.vec, 0,
- sizeof(o->registry.aprox_match.vec));
+ ECORE_MEMSET(o->registry.approx_match.vec, 0,
+ sizeof(o->registry.approx_match.vec));
/* We are done */
r->clear_pending(r);
@@ -3025,15 +3025,15 @@ static int ecore_mcast_setup_e1h(struct bnx2x_softc *sc,
return ECORE_SUCCESS;
}
-static int ecore_mcast_get_registry_size_aprox(struct ecore_mcast_obj *o)
+static int ecore_mcast_get_registry_size_approx(struct ecore_mcast_obj *o)
{
- return o->registry.aprox_match.num_bins_set;
+ return o->registry.approx_match.num_bins_set;
}
-static void ecore_mcast_set_registry_size_aprox(struct ecore_mcast_obj *o,
+static void ecore_mcast_set_registry_size_approx(struct ecore_mcast_obj *o,
int n)
{
- o->registry.aprox_match.num_bins_set = n;
+ o->registry.approx_match.num_bins_set = n;
}
int ecore_config_mcast(struct bnx2x_softc *sc,
@@ -3163,9 +3163,9 @@ void ecore_init_mcast_obj(struct bnx2x_softc *sc,
mcast_obj->validate = ecore_mcast_validate_e1h;
mcast_obj->revert = ecore_mcast_revert_e1h;
mcast_obj->get_registry_size =
- ecore_mcast_get_registry_size_aprox;
+ ecore_mcast_get_registry_size_approx;
mcast_obj->set_registry_size =
- ecore_mcast_set_registry_size_aprox;
+ ecore_mcast_set_registry_size_approx;
} else {
mcast_obj->config_mcast = ecore_mcast_setup_e2;
mcast_obj->enqueue_cmd = ecore_mcast_enqueue_cmd;
@@ -3177,9 +3177,9 @@ void ecore_init_mcast_obj(struct bnx2x_softc *sc,
mcast_obj->validate = ecore_mcast_validate_e2;
mcast_obj->revert = ecore_mcast_revert_e2;
mcast_obj->get_registry_size =
- ecore_mcast_get_registry_size_aprox;
+ ecore_mcast_get_registry_size_approx;
mcast_obj->set_registry_size =
- ecore_mcast_set_registry_size_aprox;
+ ecore_mcast_set_registry_size_approx;
}
}
diff --git a/drivers/net/bnx2x/ecore_sp.h b/drivers/net/bnx2x/ecore_sp.h
index 1f4d5a3e..a5276475 100644
--- a/drivers/net/bnx2x/ecore_sp.h
+++ b/drivers/net/bnx2x/ecore_sp.h
@@ -974,7 +974,7 @@ struct ecore_mcast_obj {
* properly create DEL commands.
*/
int num_bins_set;
- } aprox_match;
+ } approx_match;
struct {
ecore_list_t macs;
diff --git a/drivers/net/bnx2x/elink.c b/drivers/net/bnx2x/elink.c
index 43fbf04e..838ad351 100644
--- a/drivers/net/bnx2x/elink.c
+++ b/drivers/net/bnx2x/elink.c
@@ -147,8 +147,8 @@
#define MDIO_GP_STATUS_TOP_AN_STATUS1_DUPLEX_STATUS 0x0008
#define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_MR_LP_NP_AN_ABLE 0x0010
#define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_LP_NP_BAM_ABLE 0x0020
- #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_TXSIDE 0x0040
- #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_RXSIDE 0x0080
+ #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_TXSIDE 0x0040
+ #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_RXSIDE 0x0080
#define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_MASK 0x3f00
#define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_10M 0x0000
#define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_100M 0x0100
@@ -746,7 +746,7 @@ typedef elink_status_t (*read_sfp_module_eeprom_func_p)(struct elink_phy *phy,
/********************************************************/
#define ELINK_ETH_HLEN 14
/* L2 header size + 2*VLANs (8 bytes) + LLC SNAP (8 bytes) */
-#define ELINK_ETH_OVREHEAD (ELINK_ETH_HLEN + 8 + 8)
+#define ELINK_ETH_OVERHEAD (ELINK_ETH_HLEN + 8 + 8)
#define ELINK_ETH_MIN_PACKET_SIZE 60
#define ELINK_ETH_MAX_PACKET_SIZE 1500
#define ELINK_ETH_MAX_JUMBO_PACKET_SIZE 9600
@@ -814,10 +814,10 @@ typedef elink_status_t (*read_sfp_module_eeprom_func_p)(struct elink_phy *phy,
SHARED_HW_CFG_AN_EN_SGMII_FIBER_AUTO_DETECT
#define ELINK_AUTONEG_REMOTE_PHY SHARED_HW_CFG_AN_ENABLE_REMOTE_PHY
-#define ELINK_GP_STATUS_PAUSE_RSOLUTION_TXSIDE \
- MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_TXSIDE
-#define ELINK_GP_STATUS_PAUSE_RSOLUTION_RXSIDE \
- MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_RXSIDE
+#define ELINK_GP_STATUS_PAUSE_RESOLUTION_TXSIDE \
+ MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_TXSIDE
+#define ELINK_GP_STATUS_PAUSE_RESOLUTION_RXSIDE \
+ MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_RXSIDE
#define ELINK_GP_STATUS_SPEED_MASK \
MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_MASK
#define ELINK_GP_STATUS_10M MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_10M
@@ -2726,7 +2726,7 @@ static elink_status_t elink_emac_enable(struct elink_params *params,
/* Enable emac for jumbo packets */
elink_cb_reg_write(sc, emac_base + EMAC_REG_EMAC_RX_MTU_SIZE,
(EMAC_RX_MTU_SIZE_JUMBO_ENA |
- (ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD)));
+ (ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD)));
/* Strip CRC */
REG_WR(sc, NIG_REG_NIG_INGRESS_EMAC0_NO_CRC + port * 4, 0x1);
@@ -3124,19 +3124,19 @@ static elink_status_t elink_bmac1_enable(struct elink_params *params,
REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_BMAC_CONTROL, wb_data, 2);
/* Set rx mtu */
- wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD;
+ wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD;
wb_data[1] = 0;
REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_RX_MAX_SIZE, wb_data, 2);
elink_update_pfc_bmac1(params, vars);
/* Set tx mtu */
- wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD;
+ wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD;
wb_data[1] = 0;
REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_TX_MAX_SIZE, wb_data, 2);
/* Set cnt max size */
- wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD;
+ wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD;
wb_data[1] = 0;
REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_CNT_MAX_SIZE, wb_data, 2);
@@ -3203,18 +3203,18 @@ static elink_status_t elink_bmac2_enable(struct elink_params *params,
DELAY(30);
/* Set RX MTU */
- wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD;
+ wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD;
wb_data[1] = 0;
REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_RX_MAX_SIZE, wb_data, 2);
DELAY(30);
/* Set TX MTU */
- wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD;
+ wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD;
wb_data[1] = 0;
REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_TX_MAX_SIZE, wb_data, 2);
DELAY(30);
/* Set cnt max size */
- wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD - 2;
+ wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD - 2;
wb_data[1] = 0;
REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_CNT_MAX_SIZE, wb_data, 2);
DELAY(30);
@@ -3339,7 +3339,7 @@ static elink_status_t elink_pbf_update(struct elink_params *params,
} else {
uint32_t thresh = (ELINK_ETH_MAX_JUMBO_PACKET_SIZE +
- ELINK_ETH_OVREHEAD) / 16;
+ ELINK_ETH_OVERHEAD) / 16;
REG_WR(sc, PBF_REG_P0_PAUSE_ENABLE + port * 4, 0);
/* Update threshold */
REG_WR(sc, PBF_REG_P0_ARB_THRSH + port * 4, thresh);
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffe..5e112100 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -124,7 +124,7 @@ struct port {
uint64_t wait_while_timer;
uint64_t tx_machine_timer;
uint64_t tx_marker_timer;
- /* Agregator parameters */
+ /* Aggregator parameters */
/** Used aggregator port ID */
uint16_t aggregator_port_id;
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 1c7c8afe..cc6223fd 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -730,7 +730,7 @@ static inline struct mbox_entry *t4_os_list_first_entry(struct mbox_list *head)
/**
* t4_os_atomic_add_tail - Enqueue list element atomically onto list
- * @new: the entry to be addded to the queue
+ * @new: the entry to be added to the queue
* @head: current head of the linked list
* @lock: lock to use to guarantee atomicity
*/
diff --git a/drivers/net/cxgbe/base/t4_chip_type.h b/drivers/net/cxgbe/base/t4_chip_type.h
index c0c5d0b2..43066229 100644
--- a/drivers/net/cxgbe/base/t4_chip_type.h
+++ b/drivers/net/cxgbe/base/t4_chip_type.h
@@ -13,7 +13,7 @@
* F = "0" for PF 0..3; "4".."7" for PF4..7; and "8" for VFs
* PP = adapter product designation
*
- * We use the "version" (V) of the adpater to code the Chip Version above.
+ * We use the "version" (V) of the adapter to code the Chip Version above.
*/
#define CHELSIO_PCI_ID_VER(devid) ((devid) >> 12)
#define CHELSIO_PCI_ID_FUNC(devid) (((devid) >> 8) & 0xf)
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index cdcd7e55..72e6aeab 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -2284,7 +2284,7 @@ int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
* Grab up to the next 3 Ingress Queue IDs (wrapping
* around the Ingress Queue ID array if necessary) and
* insert them into the firmware RSS command at the
- * current 3-tuple position within the commad.
+ * current 3-tuple position within the command.
*/
u16 qbuf[3];
u16 *qbp = qbuf;
@@ -3919,7 +3919,7 @@ int t4_alloc_vi_func(struct adapter *adap, unsigned int mbox,
* @mac: the MAC addresses of the VI
* @rss_size: size of RSS table slice associated with this VI
*
- * Backwards compatible and convieniance routine to allocate a Virtual
+ * Backwards compatible and convenience routine to allocate a Virtual
* Interface with a Ethernet Port Application Function and Intrustion
* Detection System disabled.
*/
@@ -5150,7 +5150,7 @@ int t4_bar2_sge_qregs(struct adapter *adapter, unsigned int qid,
* the BAR2 Queue ID and the hardware will infer the Absolute Queue ID
* from the BAR2 Page and BAR2 Queue ID.
*
- * One important censequence of this is that some BAR2 SGE registers
+ * One important consequence of this is that some BAR2 SGE registers
* have a "Queue ID" field and we can write the BAR2 SGE Queue ID
* there. But other registers synthesize the SGE Queue ID purely
* from the writes to the registers -- the Write Combined Doorbell
@@ -5467,7 +5467,7 @@ int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
*
* Reads/writes an [almost] arbitrary memory region in the firmware: the
* firmware memory address and host buffer must be aligned on 32-bit
- * boudaries; the length may be arbitrary.
+ * boundaries; the length may be arbitrary.
*
* NOTES:
* 1. The memory is transferred as a raw byte sequence from/to the
diff --git a/drivers/net/dpaa/fmlib/fm_port_ext.h b/drivers/net/dpaa/fmlib/fm_port_ext.h
index bb2e0022..abdec961 100644
--- a/drivers/net/dpaa/fmlib/fm_port_ext.h
+++ b/drivers/net/dpaa/fmlib/fm_port_ext.h
@@ -177,7 +177,7 @@ typedef enum ioc_fm_port_counters {
/**< BMI OP & HC only statistics counter */
e_IOC_FM_PORT_COUNTERS_LENGTH_ERR,
/**< BMI non-Rx statistics counter */
- e_IOC_FM_PORT_COUNTERS_UNSUPPRTED_FORMAT,
+ e_IOC_FM_PORT_COUNTERS_UNSUPPORTED_FORMAT,
/**< BMI non-Rx statistics counter */
e_IOC_FM_PORT_COUNTERS_DEQ_TOTAL,/**< QMI total QM dequeues counter */
e_IOC_FM_PORT_COUNTERS_ENQ_TOTAL,/**< QMI total QM enqueues counter */
@@ -2538,7 +2538,7 @@ typedef enum e_fm_port_counters {
/**< BMI OP & HC only statistics counter */
e_FM_PORT_COUNTERS_LENGTH_ERR,
/**< BMI non-Rx statistics counter */
- e_FM_PORT_COUNTERS_UNSUPPRTED_FORMAT,
+ e_FM_PORT_COUNTERS_UNSUPPORTED_FORMAT,
/**< BMI non-Rx statistics counter */
e_FM_PORT_COUNTERS_DEQ_TOTAL, /**< QMI total QM dequeues counter */
e_FM_PORT_COUNTERS_ENQ_TOTAL, /**< QMI total QM enqueues counter */
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni_annot.h b/drivers/net/dpaa2/base/dpaa2_hw_dpni_annot.h
index 7e5e499b..7bb439b6 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni_annot.h
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni_annot.h
@@ -253,7 +253,7 @@ struct dpaa2_annot_hdr {
#define PARSE_ERROR_CODE(var) ((uint64_t)(var) & 0xFF00000000000000)
#define SOFT_PARSING_CONTEXT(var) ((uint64_t)(var) & 0x00FFFFFFFFFFFFFF)
-/*FAEAD offset in anmotation area*/
+/*FAEAD offset in annotation area*/
#define DPAA2_FD_HW_ANNOT_FAEAD_OFFSET 0x58
struct dpaa2_faead {
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index bf55eb70..58e58789 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -1341,7 +1341,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
}
static int
-dpaa2_configure_flow_ip_discrimation(
+dpaa2_configure_flow_ip_discrimination(
struct dpaa2_dev_priv *priv, struct rte_flow *flow,
const struct rte_flow_item *pattern,
int *local_cfg, int *device_configured,
@@ -1447,7 +1447,7 @@ dpaa2_configure_flow_generic_ip(
flow->tc_id = group;
flow->tc_index = attr->priority;
- ret = dpaa2_configure_flow_ip_discrimation(priv,
+ ret = dpaa2_configure_flow_ip_discrimination(priv,
flow, pattern, &local_cfg,
device_configured, group);
if (ret) {
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index cd2f7b8a..f54ab5df 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -311,11 +311,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
goto init_err;
}
- /* The new dpdmux_set/get_resetable() API are available starting with
+ /* The new dpdmux_set/get_resettable() APIs are available starting with
* DPDMUX_VER_MAJOR==6 and DPDMUX_VER_MINOR==6
*/
if (maj_ver >= 6 && min_ver >= 6) {
- ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
+ ret = dpdmux_set_resettable(&dpdmux_dev->dpdmux, CMD_PRI_LOW,
dpdmux_dev->token,
DPDMUX_SKIP_DEFAULT_INTERFACE |
DPDMUX_SKIP_UNICAST_RULES |
diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c
index edbb01b4..693557e1 100644
--- a/drivers/net/dpaa2/mc/dpdmux.c
+++ b/drivers/net/dpaa2/mc/dpdmux.c
@@ -281,7 +281,7 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
}
/**
- * dpdmux_set_resetable() - Set overall resetable DPDMUX parameters.
+ * dpdmux_set_resettable() - Set overall resettable DPDMUX parameters.
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPDMUX object
@@ -299,7 +299,7 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
*
* Return: '0' on Success; Error code otherwise.
*/
-int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
+int dpdmux_set_resettable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
uint8_t skip_reset_flags)
@@ -321,7 +321,7 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
}
/**
- * dpdmux_get_resetable() - Get overall resetable parameters.
+ * dpdmux_get_resettable() - Get overall resettable parameters.
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
* @token: Token of DPDMUX object
@@ -334,7 +334,7 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
*
* Return: '0' on Success; Error code otherwise.
*/
-int dpdmux_get_resetable(struct fsl_mc_io *mc_io,
+int dpdmux_get_resettable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
uint8_t *skip_reset_flags)
diff --git a/drivers/net/dpaa2/mc/dpni.c b/drivers/net/dpaa2/mc/dpni.c
index 60048d6c..9d5acca7 100644
--- a/drivers/net/dpaa2/mc/dpni.c
+++ b/drivers/net/dpaa2/mc/dpni.c
@@ -2746,7 +2746,7 @@ int dpni_set_rx_hash_dist(struct fsl_mc_io *mc_io, uint32_t cmd_flags,
/**
* dpni_add_custom_tpid() - Configures a distinct Ethertype value (or TPID
- * value) to indicate VLAN tag in adition to the common TPID values
+ * value) to indicate VLAN tag in addition to the common TPID values
* 0x81000 and 0x88A8
* @mc_io: Pointer to MC portal's I/O object
* @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h
index b01a98eb..274dcffc 100644
--- a/drivers/net/dpaa2/mc/fsl_dpdmux.h
+++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h
@@ -155,12 +155,12 @@ int dpdmux_reset(struct fsl_mc_io *mc_io,
*/
#define DPDMUX_SKIP_MULTICAST_RULES 0x04
-int dpdmux_set_resetable(struct fsl_mc_io *mc_io,
+int dpdmux_set_resettable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
uint8_t skip_reset_flags);
-int dpdmux_get_resetable(struct fsl_mc_io *mc_io,
+int dpdmux_get_resettable(struct fsl_mc_io *mc_io,
uint32_t cmd_flags,
uint16_t token,
uint8_t *skip_reset_flags);
diff --git a/drivers/net/e1000/base/e1000_82575.c b/drivers/net/e1000/base/e1000_82575.c
index 7c786493..75feca9c 100644
--- a/drivers/net/e1000/base/e1000_82575.c
+++ b/drivers/net/e1000/base/e1000_82575.c
@@ -2050,7 +2050,7 @@ STATIC s32 e1000_set_pcie_completion_timeout(struct e1000_hw *hw)
goto out;
/*
- * if capababilities version is type 1 we can write the
+ * if capabilities version is type 1 we can write the
* timeout of 10ms to 200ms through the GCR register
*/
if (!(gcr & E1000_GCR_CAP_VER2)) {
diff --git a/drivers/net/e1000/base/e1000_phy.c b/drivers/net/e1000/base/e1000_phy.c
index 62d0be50..f992512e 100644
--- a/drivers/net/e1000/base/e1000_phy.c
+++ b/drivers/net/e1000/base/e1000_phy.c
@@ -3741,7 +3741,7 @@ s32 e1000_write_phy_reg_page_hv(struct e1000_hw *hw, u32 offset, u16 data)
}
/**
- * e1000_get_phy_addr_for_hv_page - Get PHY adrress based on page
+ * e1000_get_phy_addr_for_hv_page - Get PHY address based on page
* @page: page to be accessed
**/
STATIC u32 e1000_get_phy_addr_for_hv_page(u32 page)
diff --git a/drivers/net/enic/base/vnic_devcmd.h b/drivers/net/enic/base/vnic_devcmd.h
index 3157bc8c..394294f5 100644
--- a/drivers/net/enic/base/vnic_devcmd.h
+++ b/drivers/net/enic/base/vnic_devcmd.h
@@ -591,7 +591,7 @@ enum vnic_devcmd_cmd {
CMD_CONFIG_GRPINTR = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 75),
/*
- * Set cq arrary base and size in a list of consective wqs and
+ * Set cq array base and size in a list of consecutive wqs and
* rqs for a device
* in: (uint16_t) a0 = the wq relative index in the device.
* -1 indicates skipping wq configuration
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index cf51793c..b5b59e5b 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -21,7 +21,7 @@
* so we can easily add new arguments.
* item: Item specification.
* filter: Partially filled in NIC filter structure.
- * inner_ofst: If zero, this is an outer header. If non-zero, this is
+ * inner_offset: If zero, this is an outer header. If non-zero, this is
* the offset into L5 where the header begins.
* l2_proto_off: offset to EtherType eth or vlan header.
* l3_proto_off: offset to next protocol field in IPv4 or 6 header.
@@ -29,7 +29,7 @@
struct copy_item_args {
const struct rte_flow_item *item;
struct filter_v2 *filter;
- uint8_t *inner_ofst;
+ uint8_t *inner_offset;
uint8_t l2_proto_off;
uint8_t l3_proto_off;
struct enic *enic;
@@ -504,7 +504,7 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg)
* we set EtherType and IP proto as necessary.
*/
static int
-copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst,
+copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_offset,
const void *val, const void *mask, uint8_t val_size,
uint8_t proto_off, uint16_t proto_val, uint8_t proto_size)
{
@@ -512,7 +512,7 @@ copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst,
uint8_t start_off;
/* No space left in the L5 pattern buffer. */
- start_off = *inner_ofst;
+ start_off = *inner_offset;
if ((start_off + val_size) > FILTER_GENERIC_1_KEY_LEN)
return ENOTSUP;
l5_mask = gp->layer[FILTER_GENERIC_1_L5].mask;
@@ -537,7 +537,7 @@ copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst,
}
}
/* All inner headers land in L5 buffer even if their spec is null. */
- *inner_ofst += val_size;
+ *inner_offset += val_size;
return 0;
}
@@ -545,7 +545,7 @@ static int
enic_copy_item_inner_eth_v2(struct copy_item_args *arg)
{
const void *mask = arg->item->mask;
- uint8_t *off = arg->inner_ofst;
+ uint8_t *off = arg->inner_offset;
ENICPMD_FUNC_TRACE();
if (!mask)
@@ -560,7 +560,7 @@ static int
enic_copy_item_inner_vlan_v2(struct copy_item_args *arg)
{
const void *mask = arg->item->mask;
- uint8_t *off = arg->inner_ofst;
+ uint8_t *off = arg->inner_offset;
uint8_t eth_type_off;
ENICPMD_FUNC_TRACE();
@@ -578,7 +578,7 @@ static int
enic_copy_item_inner_ipv4_v2(struct copy_item_args *arg)
{
const void *mask = arg->item->mask;
- uint8_t *off = arg->inner_ofst;
+ uint8_t *off = arg->inner_offset;
ENICPMD_FUNC_TRACE();
if (!mask)
@@ -594,7 +594,7 @@ static int
enic_copy_item_inner_ipv6_v2(struct copy_item_args *arg)
{
const void *mask = arg->item->mask;
- uint8_t *off = arg->inner_ofst;
+ uint8_t *off = arg->inner_offset;
ENICPMD_FUNC_TRACE();
if (!mask)
@@ -610,7 +610,7 @@ static int
enic_copy_item_inner_udp_v2(struct copy_item_args *arg)
{
const void *mask = arg->item->mask;
- uint8_t *off = arg->inner_ofst;
+ uint8_t *off = arg->inner_offset;
ENICPMD_FUNC_TRACE();
if (!mask)
@@ -625,7 +625,7 @@ static int
enic_copy_item_inner_tcp_v2(struct copy_item_args *arg)
{
const void *mask = arg->item->mask;
- uint8_t *off = arg->inner_ofst;
+ uint8_t *off = arg->inner_offset;
ENICPMD_FUNC_TRACE();
if (!mask)
@@ -899,7 +899,7 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
+ uint8_t *inner_offset = arg->inner_offset;
const struct rte_flow_item_vxlan *spec = item->spec;
const struct rte_flow_item_vxlan *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -929,7 +929,7 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg)
memcpy(gp->layer[FILTER_GENERIC_1_L5].val, spec,
sizeof(struct rte_vxlan_hdr));
- *inner_ofst = sizeof(struct rte_vxlan_hdr);
+ *inner_offset = sizeof(struct rte_vxlan_hdr);
return 0;
}
@@ -943,7 +943,7 @@ enic_copy_item_raw_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
+ uint8_t *inner_offset = arg->inner_offset;
const struct rte_flow_item_raw *spec = item->spec;
const struct rte_flow_item_raw *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -951,7 +951,7 @@ enic_copy_item_raw_v2(struct copy_item_args *arg)
ENICPMD_FUNC_TRACE();
/* Cannot be used for inner packet */
- if (*inner_ofst)
+ if (*inner_offset)
return EINVAL;
/* Need both spec and mask */
if (!spec || !mask)
@@ -1020,13 +1020,13 @@ item_stacking_valid(enum rte_flow_item_type prev_item,
*/
static void
fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp,
- uint8_t inner_ofst)
+ uint8_t inner_offset)
{
uint8_t layer[FILTER_GENERIC_1_KEY_LEN];
uint8_t inner;
uint8_t vxlan;
- if (!(inner_ofst > 0 && enic->vxlan))
+ if (!(inner_offset > 0 && enic->vxlan))
return;
ENICPMD_FUNC_TRACE();
vxlan = sizeof(struct rte_vxlan_hdr);
@@ -1034,7 +1034,7 @@ fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp,
gp->layer[FILTER_GENERIC_1_L5].mask, vxlan);
memcpy(gp->layer[FILTER_GENERIC_1_L4].val + sizeof(struct rte_udp_hdr),
gp->layer[FILTER_GENERIC_1_L5].val, vxlan);
- inner = inner_ofst - vxlan;
+ inner = inner_offset - vxlan;
memset(layer, 0, sizeof(layer));
memcpy(layer, gp->layer[FILTER_GENERIC_1_L5].mask + vxlan, inner);
memcpy(gp->layer[FILTER_GENERIC_1_L5].mask, layer, sizeof(layer));
@@ -1063,7 +1063,7 @@ enic_copy_filter(const struct rte_flow_item pattern[],
{
int ret;
const struct rte_flow_item *item = pattern;
- uint8_t inner_ofst = 0; /* If encapsulated, ofst into L5 */
+ uint8_t inner_offset = 0; /* If encapsulated, offset into L5 */
enum rte_flow_item_type prev_item;
const struct enic_items *item_info;
struct copy_item_args args;
@@ -1075,7 +1075,7 @@ enic_copy_filter(const struct rte_flow_item pattern[],
prev_item = 0;
args.filter = enic_filter;
- args.inner_ofst = &inner_ofst;
+ args.inner_offset = &inner_offset;
args.enic = enic;
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
/* Get info about how to validate and copy the item. If NULL
@@ -1087,7 +1087,7 @@ enic_copy_filter(const struct rte_flow_item pattern[],
item_info = &cap->item_info[item->type];
if (item->type > cap->max_item_type ||
item_info->copy_item == NULL ||
- (inner_ofst > 0 && item_info->inner_copy_item == NULL)) {
+ (inner_offset > 0 && item_info->inner_copy_item == NULL)) {
rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL, "Unsupported item.");
@@ -1099,7 +1099,7 @@ enic_copy_filter(const struct rte_flow_item pattern[],
goto stacking_error;
args.item = item;
- copy_fn = inner_ofst > 0 ? item_info->inner_copy_item :
+ copy_fn = inner_offset > 0 ? item_info->inner_copy_item :
item_info->copy_item;
ret = copy_fn(&args);
if (ret)
@@ -1107,7 +1107,7 @@ enic_copy_filter(const struct rte_flow_item pattern[],
prev_item = item->type;
is_first_item = 0;
}
- fixup_l5_layer(enic, &enic_filter->u.generic_1, inner_ofst);
+ fixup_l5_layer(enic, &enic_filter->u.generic_1, inner_offset);
return 0;
@@ -1319,7 +1319,7 @@ enic_match_action(const struct rte_flow_action *action,
return 0;
}
-/** Get the NIC filter capabilties structure */
+/** Get the NIC filter capabilities structure */
static const struct enic_filter_cap *
enic_get_filter_cap(struct enic *enic)
{
diff --git a/drivers/net/fm10k/base/fm10k_mbx.c b/drivers/net/fm10k/base/fm10k_mbx.c
index 2bb0d82e..2f08dccb 100644
--- a/drivers/net/fm10k/base/fm10k_mbx.c
+++ b/drivers/net/fm10k/base/fm10k_mbx.c
@@ -1862,7 +1862,7 @@ STATIC void fm10k_sm_mbx_process_error(struct fm10k_mbx_info *mbx)
fm10k_sm_mbx_connect_reset(mbx);
break;
case FM10K_STATE_CONNECT:
- /* try connnecting at lower version */
+ /* try connecting at lower version */
if (mbx->remote) {
while (mbx->local > 1)
mbx->local--;
diff --git a/drivers/net/fm10k/base/fm10k_pf.c b/drivers/net/fm10k/base/fm10k_pf.c
index 439dd224..e25e45ba 100644
--- a/drivers/net/fm10k/base/fm10k_pf.c
+++ b/drivers/net/fm10k/base/fm10k_pf.c
@@ -1693,7 +1693,7 @@ STATIC s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready)
return fm10k_get_host_state_generic(hw, switch_ready);
}
-/* This structure defines the attibutes to be parsed below */
+/* This structure defines the attributes to be parsed below */
const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = {
FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR,
sizeof(struct fm10k_swapi_error)),
diff --git a/drivers/net/fm10k/base/fm10k_vf.c b/drivers/net/fm10k/base/fm10k_vf.c
index 6809c3cf..a453e48c 100644
--- a/drivers/net/fm10k/base/fm10k_vf.c
+++ b/drivers/net/fm10k/base/fm10k_vf.c
@@ -169,7 +169,7 @@ STATIC bool fm10k_is_slot_appropriate_vf(struct fm10k_hw *hw)
}
#endif
-/* This structure defines the attibutes to be parsed below */
+/* This structure defines the attributes to be parsed below */
const struct fm10k_tlv_attr fm10k_mac_vlan_msg_attr[] = {
FM10K_TLV_ATTR_U32(FM10K_MAC_VLAN_MSG_VLAN),
FM10K_TLV_ATTR_BOOL(FM10K_MAC_VLAN_MSG_SET),
@@ -393,7 +393,7 @@ STATIC void fm10k_update_int_moderator_vf(struct fm10k_hw *hw)
mbx->ops.enqueue_tx(hw, mbx, msg);
}
-/* This structure defines the attibutes to be parsed below */
+/* This structure defines the attributes to be parsed below */
const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[] = {
FM10K_TLV_ATTR_BOOL(FM10K_LPORT_STATE_MSG_DISABLE),
FM10K_TLV_ATTR_U8(FM10K_LPORT_STATE_MSG_XCAST_MODE),
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index e8d9aaba..eb768074 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -436,7 +436,7 @@ static int hinic_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
if (rc) {
PMD_DRV_LOG(ERR, "Create rxq[%d] failed, dev_name: %s, rq_depth: %d",
queue_idx, dev->data->name, rq_depth);
- goto ceate_rq_fail;
+ goto create_rq_fail;
}
/* mbuf pool must be assigned before setup rx resources */
@@ -484,7 +484,7 @@ static int hinic_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
adjust_bufsize_fail:
hinic_destroy_rq(hwdev, queue_idx);
-ceate_rq_fail:
+create_rq_fail:
rte_free(rxq);
return rc;
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index 2cf24ebc..9c0c098e 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -232,7 +232,7 @@ static int hinic_check_ethertype_first_item(const struct rte_flow_item *item,
}
static int
-hinic_parse_ethertype_aciton(const struct rte_flow_action *actions,
+hinic_parse_ethertype_action(const struct rte_flow_action *actions,
const struct rte_flow_action *act,
const struct rte_flow_action_queue *act_q,
struct rte_eth_ethertype_filter *filter,
@@ -344,7 +344,7 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr,
return -rte_errno;
}
- if (hinic_parse_ethertype_aciton(actions, act, act_q, filter, error))
+ if (hinic_parse_ethertype_action(actions, act, act_q, filter, error))
return -rte_errno;
if (hinic_check_ethertype_attr_ele(attr, error))
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 7adb6e36..db63c855 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -142,33 +142,33 @@
#define HINIC_GET_SUPER_CQE_EN(pkt_info) \
RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
-#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
-#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+#define RQ_CQE_OFFLOAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFLOAD_TYPE_VLAN_EN_MASK 0x1U
-#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
-#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU
+#define RQ_CQE_OFFLOAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFLOAD_TYPE_PKT_TYPE_MASK 0xFFFU
-#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
-#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+#define RQ_CQE_OFFLOAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFLOAD_TYPE_PKT_UMBCAST_MASK 0x3U
-#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
-#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+#define RQ_CQE_OFFLOAD_TYPE_RSS_TYPE_SHIFT 24
+#define RQ_CQE_OFFLOAD_TYPE_RSS_TYPE_MASK 0xFFU
-#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) (((val) >> \
- RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
- RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+#define RQ_CQE_OFFLOAD_TYPE_GET(val, member) (((val) >> \
+ RQ_CQE_OFFLOAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFLOAD_TYPE_##member##_MASK)
#define HINIC_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
- RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+ RQ_CQE_OFFLOAD_TYPE_GET(offload_type, VLAN_EN)
#define HINIC_GET_RSS_TYPES(offload_type) \
- RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+ RQ_CQE_OFFLOAD_TYPE_GET(offload_type, RSS_TYPE)
#define HINIC_GET_RX_PKT_TYPE(offload_type) \
- RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+ RQ_CQE_OFFLOAD_TYPE_GET(offload_type, PKT_TYPE)
#define HINIC_GET_RX_PKT_UMBCAST(offload_type) \
- RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+ RQ_CQE_OFFLOAD_TYPE_GET(offload_type, PKT_UMBCAST)
#define RQ_CQE_STATUS_CSUM_BYPASS_VAL 0x80U
#define RQ_CQE_STATUS_CSUM_ERR_IP_MASK 0x39U
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index e4417e87..c8a1fb2c 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -312,30 +312,30 @@ hns3_dcb_pg_schd_mode_cfg(struct hns3_hw *hw, uint8_t pg_id)
}
static uint32_t
-hns3_dcb_get_shapping_para(uint8_t ir_b, uint8_t ir_u, uint8_t ir_s,
+hns3_dcb_get_shaping_para(uint8_t ir_b, uint8_t ir_u, uint8_t ir_s,
uint8_t bs_b, uint8_t bs_s)
{
- uint32_t shapping_para = 0;
+ uint32_t shaping_para = 0;
- /* If ir_b is zero it means IR is 0Mbps, return zero of shapping_para */
+ /* If ir_b is zero it means IR is 0Mbps, return zero of shaping_para */
if (ir_b == 0)
- return shapping_para;
+ return shaping_para;
- hns3_dcb_set_field(shapping_para, IR_B, ir_b);
- hns3_dcb_set_field(shapping_para, IR_U, ir_u);
- hns3_dcb_set_field(shapping_para, IR_S, ir_s);
- hns3_dcb_set_field(shapping_para, BS_B, bs_b);
- hns3_dcb_set_field(shapping_para, BS_S, bs_s);
+ hns3_dcb_set_field(shaping_para, IR_B, ir_b);
+ hns3_dcb_set_field(shaping_para, IR_U, ir_u);
+ hns3_dcb_set_field(shaping_para, IR_S, ir_s);
+ hns3_dcb_set_field(shaping_para, BS_B, bs_b);
+ hns3_dcb_set_field(shaping_para, BS_S, bs_s);
- return shapping_para;
+ return shaping_para;
}
static int
hns3_dcb_port_shaper_cfg(struct hns3_hw *hw, uint32_t speed)
{
- struct hns3_port_shapping_cmd *shap_cfg_cmd;
+ struct hns3_port_shaping_cmd *shap_cfg_cmd;
struct hns3_shaper_parameter shaper_parameter;
- uint32_t shapping_para;
+ uint32_t shaping_para;
uint32_t ir_u, ir_b, ir_s;
struct hns3_cmd_desc desc;
int ret;
@@ -348,21 +348,21 @@ hns3_dcb_port_shaper_cfg(struct hns3_hw *hw, uint32_t speed)
}
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PORT_SHAPPING, false);
- shap_cfg_cmd = (struct hns3_port_shapping_cmd *)desc.data;
+ shap_cfg_cmd = (struct hns3_port_shaping_cmd *)desc.data;
ir_b = shaper_parameter.ir_b;
ir_u = shaper_parameter.ir_u;
ir_s = shaper_parameter.ir_s;
- shapping_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s,
+ shaping_para = hns3_dcb_get_shaping_para(ir_b, ir_u, ir_s,
HNS3_SHAPER_BS_U_DEF,
HNS3_SHAPER_BS_S_DEF);
- shap_cfg_cmd->port_shapping_para = rte_cpu_to_le_32(shapping_para);
+ shap_cfg_cmd->port_shaping_para = rte_cpu_to_le_32(shaping_para);
/*
* Configure the port_rate and set bit HNS3_TM_RATE_VLD_B of flag
- * field in hns3_port_shapping_cmd to require firmware to recalculate
- * shapping parameters. And whether the parameters are recalculated
+ * field in hns3_port_shaping_cmd to require firmware to recalculate
+ * shaping parameters. Whether the parameters are recalculated
* depends on the firmware version. But driver still needs to
* calculate it and configure to firmware for better compatibility.
*/
@@ -385,10 +385,10 @@ hns3_port_shaper_update(struct hns3_hw *hw, uint32_t speed)
}
static int
-hns3_dcb_pg_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
- uint8_t pg_id, uint32_t shapping_para, uint32_t rate)
+hns3_dcb_pg_shaping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
+ uint8_t pg_id, uint32_t shaping_para, uint32_t rate)
{
- struct hns3_pg_shapping_cmd *shap_cfg_cmd;
+ struct hns3_pg_shaping_cmd *shap_cfg_cmd;
enum hns3_opcode_type opcode;
struct hns3_cmd_desc desc;
@@ -396,15 +396,15 @@ hns3_dcb_pg_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
HNS3_OPC_TM_PG_C_SHAPPING;
hns3_cmd_setup_basic_desc(&desc, opcode, false);
- shap_cfg_cmd = (struct hns3_pg_shapping_cmd *)desc.data;
+ shap_cfg_cmd = (struct hns3_pg_shaping_cmd *)desc.data;
shap_cfg_cmd->pg_id = pg_id;
- shap_cfg_cmd->pg_shapping_para = rte_cpu_to_le_32(shapping_para);
+ shap_cfg_cmd->pg_shaping_para = rte_cpu_to_le_32(shaping_para);
/*
* Configure the pg_rate and set bit HNS3_TM_RATE_VLD_B of flag field in
- * hns3_pg_shapping_cmd to require firmware to recalculate shapping
+ * hns3_pg_shaping_cmd to require firmware to recalculate shaping
* parameters. And whether parameters are recalculated depends on
* the firmware version. But driver still needs to calculate it and
* configure to firmware for better compatibility.
@@ -432,11 +432,11 @@ hns3_pg_shaper_rate_cfg(struct hns3_hw *hw, uint8_t pg_id, uint32_t rate)
return ret;
}
- shaper_para = hns3_dcb_get_shapping_para(0, 0, 0,
+ shaper_para = hns3_dcb_get_shaping_para(0, 0, 0,
HNS3_SHAPER_BS_U_DEF,
HNS3_SHAPER_BS_S_DEF);
- ret = hns3_dcb_pg_shapping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, pg_id,
+ ret = hns3_dcb_pg_shaping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, pg_id,
shaper_para, rate);
if (ret) {
hns3_err(hw, "config PG CIR shaper parameter fail, ret = %d.",
@@ -447,11 +447,11 @@ hns3_pg_shaper_rate_cfg(struct hns3_hw *hw, uint8_t pg_id, uint32_t rate)
ir_b = shaper_parameter.ir_b;
ir_u = shaper_parameter.ir_u;
ir_s = shaper_parameter.ir_s;
- shaper_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s,
+ shaper_para = hns3_dcb_get_shaping_para(ir_b, ir_u, ir_s,
HNS3_SHAPER_BS_U_DEF,
HNS3_SHAPER_BS_S_DEF);
- ret = hns3_dcb_pg_shapping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, pg_id,
+ ret = hns3_dcb_pg_shaping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, pg_id,
shaper_para, rate);
if (ret) {
hns3_err(hw, "config PG PIR shaper parameter fail, ret = %d.",
@@ -520,10 +520,10 @@ hns3_dcb_pri_schd_mode_cfg(struct hns3_hw *hw, uint8_t pri_id)
}
static int
-hns3_dcb_pri_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
- uint8_t pri_id, uint32_t shapping_para, uint32_t rate)
+hns3_dcb_pri_shaping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
+ uint8_t pri_id, uint32_t shaping_para, uint32_t rate)
{
- struct hns3_pri_shapping_cmd *shap_cfg_cmd;
+ struct hns3_pri_shaping_cmd *shap_cfg_cmd;
enum hns3_opcode_type opcode;
struct hns3_cmd_desc desc;
@@ -532,16 +532,16 @@ hns3_dcb_pri_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket,
hns3_cmd_setup_basic_desc(&desc, opcode, false);
- shap_cfg_cmd = (struct hns3_pri_shapping_cmd *)desc.data;
+ shap_cfg_cmd = (struct hns3_pri_shaping_cmd *)desc.data;
shap_cfg_cmd->pri_id = pri_id;
- shap_cfg_cmd->pri_shapping_para = rte_cpu_to_le_32(shapping_para);
+ shap_cfg_cmd->pri_shaping_para = rte_cpu_to_le_32(shaping_para);
/*
* Configure the pri_rate and set bit HNS3_TM_RATE_VLD_B of flag
- * field in hns3_pri_shapping_cmd to require firmware to recalculate
- * shapping parameters. And whether the parameters are recalculated
+ * field in hns3_pri_shaping_cmd to require firmware to recalculate
+ * shaping parameters. Whether the parameters are recalculated
* depends on the firmware version. But driver still needs to
* calculate it and configure to firmware for better compatibility.
*/
@@ -567,11 +567,11 @@ hns3_pri_shaper_rate_cfg(struct hns3_hw *hw, uint8_t tc_no, uint32_t rate)
return ret;
}
- shaper_para = hns3_dcb_get_shapping_para(0, 0, 0,
+ shaper_para = hns3_dcb_get_shaping_para(0, 0, 0,
HNS3_SHAPER_BS_U_DEF,
HNS3_SHAPER_BS_S_DEF);
- ret = hns3_dcb_pri_shapping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, tc_no,
+ ret = hns3_dcb_pri_shaping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, tc_no,
shaper_para, rate);
if (ret) {
hns3_err(hw,
@@ -583,11 +583,11 @@ hns3_pri_shaper_rate_cfg(struct hns3_hw *hw, uint8_t tc_no, uint32_t rate)
ir_b = shaper_parameter.ir_b;
ir_u = shaper_parameter.ir_u;
ir_s = shaper_parameter.ir_s;
- shaper_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s,
+ shaper_para = hns3_dcb_get_shaping_para(ir_b, ir_u, ir_s,
HNS3_SHAPER_BS_U_DEF,
HNS3_SHAPER_BS_S_DEF);
- ret = hns3_dcb_pri_shapping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, tc_no,
+ ret = hns3_dcb_pri_shaping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, tc_no,
shaper_para, rate);
if (ret) {
hns3_err(hw,
diff --git a/drivers/net/hns3/hns3_dcb.h b/drivers/net/hns3/hns3_dcb.h
index e06ec177..b3b990f0 100644
--- a/drivers/net/hns3/hns3_dcb.h
+++ b/drivers/net/hns3/hns3_dcb.h
@@ -86,41 +86,41 @@ struct hns3_nq_to_qs_link_cmd {
#define HNS3_DCB_SHAP_BS_S_LSH 21
/*
- * For more flexible selection of shapping algorithm in different network
- * engine, the algorithm calculating shapping parameter is moved to firmware to
- * execute. Bit HNS3_TM_RATE_VLD_B of flag field in hns3_pri_shapping_cmd,
- * hns3_pg_shapping_cmd or hns3_port_shapping_cmd is set to 1 to require
- * firmware to recalculate shapping parameters. However, whether the parameters
+ * For more flexible selection of shaping algorithm in different network
+ * engines, the algorithm calculating shaping parameters is moved to firmware to
+ * execute. Bit HNS3_TM_RATE_VLD_B of flag field in hns3_pri_shaping_cmd,
+ * hns3_pg_shaping_cmd or hns3_port_shaping_cmd is set to 1 to require
+ * firmware to recalculate shaping parameters. However, whether the parameters
* are recalculated depends on the firmware version. If firmware doesn't support
- * the calculation of shapping parameters, such as on network engine with
+ * the calculation of shaping parameters, such as on network engine with
* revision id 0x21, the value driver calculated will be used to configure to
* hardware. On the contrary, firmware ignores configuration of driver
* and recalculates the parameter.
*/
#define HNS3_TM_RATE_VLD_B 0
-struct hns3_pri_shapping_cmd {
+struct hns3_pri_shaping_cmd {
uint8_t pri_id;
uint8_t rsvd[3];
- uint32_t pri_shapping_para;
+ uint32_t pri_shaping_para;
uint8_t flag;
uint8_t rsvd1[3];
uint32_t pri_rate; /* Unit Mbps */
uint8_t rsvd2[8];
};
-struct hns3_pg_shapping_cmd {
+struct hns3_pg_shaping_cmd {
uint8_t pg_id;
uint8_t rsvd[3];
- uint32_t pg_shapping_para;
+ uint32_t pg_shaping_para;
uint8_t flag;
uint8_t rsvd1[3];
uint32_t pg_rate; /* Unit Mbps */
uint8_t rsvd2[8];
};
-struct hns3_port_shapping_cmd {
- uint32_t port_shapping_para;
+struct hns3_port_shaping_cmd {
+ uint32_t port_shaping_para;
uint8_t flag;
uint8_t rsvd[3];
uint32_t port_rate; /* Unit Mbps */
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 3b897492..fee9c2a0 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -386,7 +386,7 @@ hns3_rm_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id)
static void
hns3_add_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id,
- bool writen_to_tbl)
+ bool written_to_tbl)
{
struct hns3_user_vlan_table *vlan_entry;
struct hns3_hw *hw = &hns->hw;
@@ -403,7 +403,7 @@ hns3_add_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id,
return;
}
- vlan_entry->hd_tbl_status = writen_to_tbl;
+ vlan_entry->hd_tbl_status = written_to_tbl;
vlan_entry->vlan_id = vlan_id;
LIST_INSERT_HEAD(&pf->vlan_list, vlan_entry, next);
@@ -438,7 +438,7 @@ static int
hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
{
struct hns3_hw *hw = &hns->hw;
- bool writen_to_tbl = false;
+ bool written_to_tbl = false;
int ret = 0;
/*
@@ -458,12 +458,12 @@ hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
*/
if (hw->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_DISABLE) {
ret = hns3_set_port_vlan_filter(hns, vlan_id, on);
- writen_to_tbl = true;
+ written_to_tbl = true;
}
if (ret == 0) {
if (on)
- hns3_add_dev_vlan_table(hns, vlan_id, writen_to_tbl);
+ hns3_add_dev_vlan_table(hns, vlan_id, written_to_tbl);
else
hns3_rm_dev_vlan_table(hns, vlan_id);
}
@@ -2177,7 +2177,7 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
}
static uint32_t
-hns3_get_firber_port_speed_capa(uint32_t supported_speed)
+hns3_get_fiber_port_speed_capa(uint32_t supported_speed)
{
uint32_t speed_capa = 0;
@@ -2210,7 +2210,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
hns3_get_copper_port_speed_capa(mac->supported_speed);
else
speed_capa =
- hns3_get_firber_port_speed_capa(mac->supported_speed);
+ hns3_get_fiber_port_speed_capa(mac->supported_speed);
if (mac->support_autoneg == 0)
speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
@@ -4524,7 +4524,7 @@ hns3_config_all_msix_error(struct hns3_hw *hw, bool enable)
}
static uint32_t
-hns3_set_firber_default_support_speed(struct hns3_hw *hw)
+hns3_set_fiber_default_support_speed(struct hns3_hw *hw)
{
struct hns3_mac *mac = &hw->mac;
@@ -4582,7 +4582,7 @@ hns3_get_port_supported_speed(struct rte_eth_dev *eth_dev)
*/
if (mac->supported_speed == 0)
mac->supported_speed =
- hns3_set_firber_default_support_speed(hw);
+ hns3_set_fiber_default_support_speed(hw);
}
return 0;
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index d043f578..870bde4d 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -67,7 +67,7 @@ enum HNS3_FD_KEY_TYPE {
enum HNS3_FD_META_DATA {
PACKET_TYPE_ID,
- IP_FRAGEMENT,
+ IP_FRAGMENT,
ROCE_TYPE,
NEXT_KEY,
VLAN_NUMBER,
@@ -84,7 +84,7 @@ struct key_info {
static const struct key_info meta_data_key_info[] = {
{PACKET_TYPE_ID, 6},
- {IP_FRAGEMENT, 1},
+ {IP_FRAGMENT, 1},
{ROCE_TYPE, 1},
{NEXT_KEY, 5},
{VLAN_NUMBER, 2},
diff --git a/drivers/net/hns3/hns3_tm.c b/drivers/net/hns3/hns3_tm.c
index e1089b6b..4fc00cbc 100644
--- a/drivers/net/hns3/hns3_tm.c
+++ b/drivers/net/hns3/hns3_tm.c
@@ -739,7 +739,7 @@ hns3_tm_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
}
static void
-hns3_tm_nonleaf_level_capsbilities_get(struct rte_eth_dev *dev,
+hns3_tm_nonleaf_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap)
{
@@ -818,7 +818,7 @@ hns3_tm_level_capabilities_get(struct rte_eth_dev *dev,
memset(cap, 0, sizeof(struct rte_tm_level_capabilities));
if (level_id != HNS3_TM_NODE_LEVEL_QUEUE)
- hns3_tm_nonleaf_level_capsbilities_get(dev, level_id, cap);
+ hns3_tm_nonleaf_level_capabilities_get(dev, level_id, cap);
else
hns3_tm_leaf_level_capabilities_get(dev, cap);
diff --git a/drivers/net/i40e/base/i40e_adminq_cmd.h b/drivers/net/i40e/base/i40e_adminq_cmd.h
index def307b5..71694aeb 100644
--- a/drivers/net/i40e/base/i40e_adminq_cmd.h
+++ b/drivers/net/i40e/base/i40e_adminq_cmd.h
@@ -2349,7 +2349,7 @@ struct i40e_aqc_phy_register_access {
#define I40E_AQ_PHY_REG_ACCESS_INTERNAL 0
#define I40E_AQ_PHY_REG_ACCESS_EXTERNAL 1
#define I40E_AQ_PHY_REG_ACCESS_EXTERNAL_MODULE 2
- u8 dev_addres;
+ u8 dev_address;
u8 cmd_flags;
#define I40E_AQ_PHY_REG_ACCESS_DONT_CHANGE_QSFP_PAGE 0x01
#define I40E_AQ_PHY_REG_ACCESS_SET_MDIO_IF_NUMBER 0x02
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index 9eee1040..b51c3cc2 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -7582,7 +7582,7 @@ i40e_aq_set_phy_register_ext(struct i40e_hw *hw,
i40e_aqc_opc_set_phy_register);
cmd->phy_interface = phy_select;
- cmd->dev_addres = dev_addr;
+ cmd->dev_address = dev_addr;
cmd->reg_address = CPU_TO_LE32(reg_addr);
cmd->reg_value = CPU_TO_LE32(reg_val);
@@ -7628,7 +7628,7 @@ i40e_aq_get_phy_register_ext(struct i40e_hw *hw,
i40e_aqc_opc_get_phy_register);
cmd->phy_interface = phy_select;
- cmd->dev_addres = dev_addr;
+ cmd->dev_address = dev_addr;
cmd->reg_address = CPU_TO_LE32(reg_addr);
if (!page_change)
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 79397f15..dfe63f1d 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1753,7 +1753,7 @@ static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct iavf_vsi *vsi = &vf->vsi;
struct virtchnl_eth_stats *pstats = NULL;
- struct iavf_eth_xstats iavf_xtats = {{0}};
+ struct iavf_eth_xstats iavf_xstats = {{0}};
if (n < IAVF_NB_XSTATS)
return IAVF_NB_XSTATS;
@@ -1766,15 +1766,15 @@ static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
return 0;
iavf_update_stats(vsi, pstats);
- iavf_xtats.eth_stats = *pstats;
+ iavf_xstats.eth_stats = *pstats;
if (iavf_ipsec_crypto_supported(adapter))
- iavf_dev_update_ipsec_xstats(dev, &iavf_xtats.ips_stats);
+ iavf_dev_update_ipsec_xstats(dev, &iavf_xstats.ips_stats);
/* loop over xstats array and values from pstats */
for (i = 0; i < IAVF_NB_XSTATS; i++) {
xstats[i].id = i;
- xstats[i].value = *(uint64_t *)(((char *)&iavf_xtats) +
+ xstats[i].value = *(uint64_t *)(((char *)&iavf_xstats) +
rte_iavf_stats_strings[i].offset);
}
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 5e0888ea..d675e0fe 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -814,7 +814,7 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
#define REFINE_PROTO_FLD(op, fld) \
VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
-#define REPALCE_PROTO_FLD(fld_1, fld_2) \
+#define REPLACE_PROTO_FLD(fld_1, fld_2) \
do { \
REFINE_PROTO_FLD(DEL, fld_1); \
REFINE_PROTO_FLD(ADD, fld_2); \
@@ -925,10 +925,10 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
}
if (rss_type & RTE_ETH_RSS_L3_PRE64) {
if (REFINE_PROTO_FLD(TEST, IPV6_SRC))
- REPALCE_PROTO_FLD(IPV6_SRC,
+ REPLACE_PROTO_FLD(IPV6_SRC,
IPV6_PREFIX64_SRC);
if (REFINE_PROTO_FLD(TEST, IPV6_DST))
- REPALCE_PROTO_FLD(IPV6_DST,
+ REPLACE_PROTO_FLD(IPV6_DST,
IPV6_PREFIX64_DST);
}
break;
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index adf101ab..8174cbfc 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -1334,7 +1334,7 @@ update_aead_capabilities(struct rte_cryptodev_capabilities *scap,
* capabilities structure.
*/
int
-iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx
+iavf_ipsec_crypto_set_security_capabilities(struct iavf_security_ctx
*iavf_sctx, struct virtchnl_ipsec_cap *vch_cap)
{
struct rte_cryptodev_capabilities *capabilities;
@@ -1524,7 +1524,7 @@ iavf_security_init(struct iavf_adapter *adapter)
if (rc)
return rc;
- return iavf_ipsec_crypto_set_security_capabililites(iavf_sctx,
+ return iavf_ipsec_crypto_set_security_capabilities(iavf_sctx,
&capabilities);
}
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.h b/drivers/net/iavf/iavf_ipsec_crypto.h
index 68754107..921ca676 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.h
+++ b/drivers/net/iavf/iavf_ipsec_crypto.h
@@ -118,8 +118,8 @@ int iavf_security_init(struct iavf_adapter *adapter);
/**
* Set security capabilities
*/
-int iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx
- *iavf_sctx, struct virtchnl_ipsec_cap *virtchl_capabilities);
+int iavf_ipsec_crypto_set_security_capabilities(struct iavf_security_ctx
+ *iavf_sctx, struct virtchnl_ipsec_cap *virtchnl_capabilities);
int iavf_security_get_pkt_md_offset(struct iavf_adapter *adapter);
diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 253b971d..a949f8e5 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -2173,7 +2173,7 @@ struct ice_aqc_acl_tbl_actpair {
* alloc/dealloc action-pair
*/
struct ice_aqc_acl_generic {
- /* if alloc_id is below 0x1000 then alllocation failed due to
+ /* if alloc_id is below 0x1000 then allocation failed due to
* unavailable resources, else this is set by FW to identify
* table allocation
*/
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index ed29c00d..5b9251f1 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1649,11 +1649,11 @@ ice_switch_parse_action(struct ice_pf *pf,
struct ice_vsi *vsi = pf->main_vsi;
struct rte_eth_dev_data *dev_data = pf->adapter->pf.dev_data;
const struct rte_flow_action_queue *act_q;
- const struct rte_flow_action_rss *act_qgrop;
+ const struct rte_flow_action_rss *act_qgroup;
uint16_t base_queue, i;
const struct rte_flow_action *action;
enum rte_flow_action_type action_type;
- uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = {
+ uint16_t valid_qgroup_number[MAX_QGRP_NUM_TYPE] = {
2, 4, 8, 16, 32, 64, 128};
base_queue = pf->base_queue + vsi->base_queue;
@@ -1662,30 +1662,30 @@ ice_switch_parse_action(struct ice_pf *pf,
action_type = action->type;
switch (action_type) {
case RTE_FLOW_ACTION_TYPE_RSS:
- act_qgrop = action->conf;
- if (act_qgrop->queue_num <= 1)
+ act_qgroup = action->conf;
+ if (act_qgroup->queue_num <= 1)
goto error;
rule_info->sw_act.fltr_act =
ICE_FWD_TO_QGRP;
rule_info->sw_act.fwd_id.q_id =
- base_queue + act_qgrop->queue[0];
+ base_queue + act_qgroup->queue[0];
for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) {
- if (act_qgrop->queue_num ==
- valid_qgrop_number[i])
+ if (act_qgroup->queue_num ==
+ valid_qgroup_number[i])
break;
}
if (i == MAX_QGRP_NUM_TYPE)
goto error;
- if ((act_qgrop->queue[0] +
- act_qgrop->queue_num) >
+ if ((act_qgroup->queue[0] +
+ act_qgroup->queue_num) >
dev_data->nb_rx_queues)
goto error1;
- for (i = 0; i < act_qgrop->queue_num - 1; i++)
- if (act_qgrop->queue[i + 1] !=
- act_qgrop->queue[i] + 1)
+ for (i = 0; i < act_qgroup->queue_num - 1; i++)
+ if (act_qgroup->queue[i + 1] !=
+ act_qgroup->queue[i] + 1)
goto error2;
rule_info->sw_act.qgrp_size =
- act_qgrop->queue_num;
+ act_qgroup->queue_num;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
act_q = action->conf;
diff --git a/drivers/net/igc/base/igc_defines.h b/drivers/net/igc/base/igc_defines.h
index 30a41300..53044c8a 100644
--- a/drivers/net/igc/base/igc_defines.h
+++ b/drivers/net/igc/base/igc_defines.h
@@ -632,7 +632,7 @@
#define IGC_ICS_LSC IGC_ICR_LSC /* Link Status Change */
#define IGC_ICS_RXSEQ IGC_ICR_RXSEQ /* Rx sequence error */
#define IGC_ICS_RXDMT0 IGC_ICR_RXDMT0 /* Rx desc min. threshold */
-#define IGC_ICS_DRSTA IGC_ICR_DRSTA /* Device Reset Aserted */
+#define IGC_ICS_DRSTA IGC_ICR_DRSTA /* Device Reset Asserted */
/* Extended Interrupt Cause Set */
#define IGC_EICS_RX_QUEUE0 IGC_EICR_RX_QUEUE0 /* Rx Queue 0 Interrupt */
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 6a9b98fd..5172f21f 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -1956,7 +1956,7 @@ ipn3ke_tm_show(struct rte_eth_dev *dev)
}
static void
-ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
+ipn3ke_tm_show_commit(struct rte_eth_dev *dev)
{
struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev);
uint32_t tm_id;
@@ -2013,7 +2013,7 @@ ipn3ke_tm_hierarchy_commit(struct rte_eth_dev *dev,
NULL,
rte_strerror(EBUSY));
- ipn3ke_tm_show_commmit(dev);
+ ipn3ke_tm_show_commit(dev);
status = ipn3ke_tm_hierarchy_commit_check(dev, error);
if (status) {
diff --git a/drivers/net/ixgbe/base/ixgbe_82598.c b/drivers/net/ixgbe/base/ixgbe_82598.c
index 6a7983f1..596621e3 100644
--- a/drivers/net/ixgbe/base/ixgbe_82598.c
+++ b/drivers/net/ixgbe/base/ixgbe_82598.c
@@ -57,7 +57,7 @@ void ixgbe_set_pcie_completion_timeout(struct ixgbe_hw *hw)
goto out;
/*
- * if capababilities version is type 1 we can write the
+ * if the capabilities version is type 1, we can write the
* timeout of 10ms to 250ms through the GCR register
*/
if (!(gcr & IXGBE_GCR_CAP_VER2)) {
diff --git a/drivers/net/ixgbe/ixgbe_bypass.c b/drivers/net/ixgbe/ixgbe_bypass.c
index 94f34a29..8ed382c7 100644
--- a/drivers/net/ixgbe/ixgbe_bypass.c
+++ b/drivers/net/ixgbe/ixgbe_bypass.c
@@ -80,7 +80,7 @@ ixgbe_bypass_init(struct rte_eth_dev *dev)
struct ixgbe_adapter *adapter;
struct ixgbe_hw *hw;
- adapter = IXGBE_DEV_TO_ADPATER(dev);
+ adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
/* Only allow BYPASS ops on the first port */
@@ -112,7 +112,7 @@ ixgbe_bypass_state_show(struct rte_eth_dev *dev, u32 *state)
s32 ret_val;
u32 cmd;
u32 by_ctl = 0;
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
FUNC_PTR_OR_ERR_RET(adapter->bps.ops.bypass_rw, -ENOTSUP);
@@ -132,7 +132,7 @@ ixgbe_bypass_state_show(struct rte_eth_dev *dev, u32 *state)
s32
ixgbe_bypass_state_store(struct rte_eth_dev *dev, u32 *new_state)
{
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
struct ixgbe_hw *hw;
s32 ret_val;
@@ -163,7 +163,7 @@ ixgbe_bypass_event_show(struct rte_eth_dev *dev, u32 event,
u32 shift;
u32 cmd;
u32 by_ctl = 0;
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
FUNC_PTR_OR_ERR_RET(adapter->bps.ops.bypass_rw, -ENOTSUP);
@@ -207,7 +207,7 @@ ixgbe_bypass_event_store(struct rte_eth_dev *dev, u32 event,
u32 status;
u32 off;
s32 ret_val;
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
FUNC_PTR_OR_ERR_RET(adapter->bps.ops.bypass_set, -ENOTSUP);
@@ -250,7 +250,7 @@ ixgbe_bypass_wd_timeout_store(struct rte_eth_dev *dev, u32 timeout)
u32 status;
u32 mask;
s32 ret_val;
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
FUNC_PTR_OR_ERR_RET(adapter->bps.ops.bypass_set, -ENOTSUP);
@@ -282,7 +282,7 @@ ixgbe_bypass_ver_show(struct rte_eth_dev *dev, u32 *ver)
u32 cmd;
u32 status;
s32 ret_val;
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
FUNC_PTR_OR_ERR_RET(adapter->bps.ops.bypass_rw, -ENOTSUP);
@@ -317,7 +317,7 @@ ixgbe_bypass_wd_timeout_show(struct rte_eth_dev *dev, u32 *wd_timeout)
u32 cmd;
u32 wdg;
s32 ret_val;
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
FUNC_PTR_OR_ERR_RET(adapter->bps.ops.bypass_rw, -ENOTSUP);
@@ -344,7 +344,7 @@ ixgbe_bypass_wd_reset(struct rte_eth_dev *dev)
u32 count = 0;
s32 ret_val;
struct ixgbe_hw *hw;
- struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADPATER(dev);
+ struct ixgbe_adapter *adapter = IXGBE_DEV_TO_ADAPTER(dev);
hw = &adapter->hw;
diff --git a/drivers/net/ixgbe/ixgbe_bypass_defines.h b/drivers/net/ixgbe/ixgbe_bypass_defines.h
index 7740546b..65794aed 100644
--- a/drivers/net/ixgbe/ixgbe_bypass_defines.h
+++ b/drivers/net/ixgbe/ixgbe_bypass_defines.h
@@ -106,7 +106,7 @@ enum ixgbe_state_t {
#define BYPASS_LOG_EVENT_SHIFT 28
#define BYPASS_LOG_CLEAR_SHIFT 24 /* bit offset */
-#define IXGBE_DEV_TO_ADPATER(dev) \
+#define IXGBE_DEV_TO_ADAPTER(dev) \
((struct ixgbe_adapter *)(dev->data->dev_private))
/* extractions from ixgbe_phy.h */
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 8b4387d6..9b066f7e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -96,7 +96,7 @@
/*
* Device parameter to force doorbell register mapping
- * to non-cahed region eliminating the extra write memory barrier.
+ * to a non-cached region, eliminating the extra write memory barrier.
*/
#define MLX5_TX_DB_NC "tx_db_nc"
@@ -350,7 +350,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
.free = mlx5_free,
.type = "rte_flow_ipool",
},
- [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID] = {
+ [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID] = {
.size = 0,
.need_lock = 1,
.type = "mlx5_flow_rss_id_ipool",
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b55f5816..61287800 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -73,7 +73,7 @@ enum mlx5_ipool_index {
MLX5_IPOOL_HRXQ, /* Pool for hrxq resource. */
MLX5_IPOOL_MLX5_FLOW, /* Pool for mlx5 flow handle. */
MLX5_IPOOL_RTE_FLOW, /* Pool for rte_flow. */
- MLX5_IPOOL_RSS_EXPANTION_FLOW_ID, /* Pool for Queue/RSS flow ID. */
+ MLX5_IPOOL_RSS_EXPANSION_FLOW_ID, /* Pool for Queue/RSS flow ID. */
MLX5_IPOOL_RSS_SHARED_ACTIONS, /* Pool for RSS shared actions. */
MLX5_IPOOL_MTR_POLICY, /* Pool for meter policy resource. */
MLX5_IPOOL_MAX,
@@ -751,7 +751,7 @@ struct mlx5_flow_meter_policy {
/* drop action for red color. */
uint16_t sub_policy_num;
/* Count sub policy tables, 3 bits per domain. */
- struct mlx5_flow_meter_sub_policy **sub_policys[MLX5_MTR_DOMAIN_MAX];
+ struct mlx5_flow_meter_sub_policy **sub_policies[MLX5_MTR_DOMAIN_MAX];
/* Sub policy table array must be the end of struct. */
};
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b7cf4143..a360aa01 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1251,7 +1251,7 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
continue;
/*
* To support metadata register copy on Tx loopback,
- * this must be always enabled (metadata may arive
+ * this must always be enabled (metadata may arrive
* from other port - not from local flows only.
*/
if (priv->config.dv_flow_en &&
@@ -4933,7 +4933,7 @@ get_meter_sub_policy(struct rte_eth_dev *dev,
attr->transfer ? MLX5_MTR_DOMAIN_TRANSFER :
(attr->egress ? MLX5_MTR_DOMAIN_EGRESS :
MLX5_MTR_DOMAIN_INGRESS);
- sub_policy = policy->sub_policys[mtr_domain][0];
+ sub_policy = policy->sub_policies[mtr_domain][0];
}
if (!sub_policy)
rte_flow_error_set(error, EINVAL,
@@ -5301,7 +5301,7 @@ flow_mreg_split_qrss_prep(struct rte_eth_dev *dev,
* IDs.
*/
mlx5_ipool_malloc(priv->sh->ipool
- [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], &flow_id);
+ [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID], &flow_id);
if (!flow_id)
return rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_ACTION,
@@ -5628,7 +5628,7 @@ flow_sample_split_prep(struct rte_eth_dev *dev,
if (ret < 0)
return ret;
mlx5_ipool_malloc(priv->sh->ipool
- [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], &tag_id);
+ [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID], &tag_id);
*set_tag = (struct mlx5_rte_flow_action_set_tag) {
.id = ret,
.data = tag_id,
@@ -5899,7 +5899,7 @@ flow_create_split_metadata(struct rte_eth_dev *dev,
* These ones are included into parent flow list and will be destroyed
* by flow_drv_destroy.
*/
- mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
+ mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANSION_FLOW_ID],
qrss_id);
mlx5_free(ext_actions);
return ret;
@@ -5963,7 +5963,7 @@ flow_meter_create_drop_flow_with_org_pattern(struct rte_eth_dev *dev,
* suffix flow. The packets make sense only it pass the prefix
* meter action.
*
- * - Reg_C_5 is used for the packet to match betweend prefix and
+ * - Reg_C_5 is used for the packet to match between prefix and
* suffix flow.
*
* @param dev
@@ -8594,7 +8594,7 @@ mlx5_flow_dev_dump(struct rte_eth_dev *dev, struct rte_flow *flow_idx,
* Pointer to the Ethernet device structure.
* @param[in] context
* The address of an array of pointers to the aged-out flows contexts.
- * @param[in] nb_countexts
+ * @param[in] nb_contexts
* The length of context array pointers.
* @param[out] error
* Perform verbose error reporting if not NULL. Initialized in case of
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8c131d61..305bfe96 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -324,7 +324,7 @@ enum mlx5_feature_name {
/*
* Max priority for ingress\egress flow groups
* greater than 0 and for any transfer flow group.
- * From user configation: 0 - 21843.
+ * From user configuration: 0 - 21843.
*/
#define MLX5_NON_ROOT_FLOW_MAX_PRIO (21843 + 1)
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index ddf4328d..cd01e0c3 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -981,13 +981,13 @@ mlx5_aso_ct_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh,
MLX5_SET(conn_track_aso, desg, sack_permitted, profile->selective_ack);
MLX5_SET(conn_track_aso, desg, challenged_acked,
profile->challenge_ack_passed);
- /* Heartbeat, retransmission_counter, retranmission_limit_exceeded: 0 */
+ /* Heartbeat, retransmission_counter, retransmission_limit_exceeded: 0 */
MLX5_SET(conn_track_aso, desg, heartbeat, 0);
MLX5_SET(conn_track_aso, desg, max_ack_window,
profile->max_ack_window);
MLX5_SET(conn_track_aso, desg, retransmission_counter, 0);
- MLX5_SET(conn_track_aso, desg, retranmission_limit_exceeded, 0);
- MLX5_SET(conn_track_aso, desg, retranmission_limit,
+ MLX5_SET(conn_track_aso, desg, retransmission_limit_exceeded, 0);
+ MLX5_SET(conn_track_aso, desg, retransmission_limit,
profile->retransmission_limit);
MLX5_SET(conn_track_aso, desg, reply_direction_tcp_scale,
profile->reply_dir.scale);
@@ -1312,7 +1312,7 @@ mlx5_aso_ct_obj_analyze(struct rte_flow_action_conntrack *profile,
profile->max_ack_window = MLX5_GET(conn_track_aso, wdata,
max_ack_window);
profile->retransmission_limit = MLX5_GET(conn_track_aso, wdata,
- retranmission_limit);
+ retransmission_limit);
profile->last_window = MLX5_GET(conn_track_aso, wdata, last_win);
profile->last_direction = MLX5_GET(conn_track_aso, wdata, last_dir);
profile->last_index = (enum rte_flow_conntrack_tcp_last_index)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8022d7d1..0109adcf 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -14538,7 +14538,7 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
else if (dev_handle->split_flow_id &&
!dev_handle->is_meter_flow_id)
mlx5_ipool_free(priv->sh->ipool
- [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
+ [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID],
dev_handle->split_flow_id);
mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW],
tmp_idx);
@@ -15311,7 +15311,7 @@ flow_dv_destroy_policy_rules(struct rte_eth_dev *dev,
(MLX5_MTR_SUB_POLICY_NUM_SHIFT * i)) &
MLX5_MTR_SUB_POLICY_NUM_MASK;
for (j = 0; j < sub_policy_num; j++) {
- sub_policy = mtr_policy->sub_policys[i][j];
+ sub_policy = mtr_policy->sub_policies[i][j];
if (sub_policy)
__flow_dv_destroy_sub_policy_rules(dev,
sub_policy);
@@ -15649,7 +15649,7 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev,
(1 << MLX5_SCALE_FLOW_GROUP_BIT),
};
struct mlx5_flow_meter_sub_policy *sub_policy =
- mtr_policy->sub_policys[domain][0];
+ mtr_policy->sub_policies[domain][0];
if (i >= MLX5_MTR_RTE_COLORS)
return -rte_mtr_error_set(error,
@@ -16504,7 +16504,7 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev,
next_fm->policy_id, NULL);
MLX5_ASSERT(next_policy);
next_sub_policy =
- next_policy->sub_policys[domain][0];
+ next_policy->sub_policies[domain][0];
}
tbl_data =
container_of(next_sub_policy->tbl_rsc,
@@ -16559,7 +16559,7 @@ flow_dv_create_policy_rules(struct rte_eth_dev *dev,
continue;
/* Prepare actions list and create policy rules. */
if (__flow_dv_create_policy_acts_rules(dev, mtr_policy,
- mtr_policy->sub_policys[i][0], i)) {
+ mtr_policy->sub_policies[i][0], i)) {
DRV_LOG(ERR, "Failed to create policy action "
"list per domain.");
return -1;
@@ -16898,7 +16898,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) {
if (rss_desc[i] &&
hrxq_idx[i] !=
- mtr_policy->sub_policys[domain][j]->rix_hrxq[i])
+ mtr_policy->sub_policies[domain][j]->rix_hrxq[i])
break;
}
if (i >= MLX5_MTR_RTE_COLORS) {
@@ -16910,13 +16910,13 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
for (i = 0; i < MLX5_MTR_RTE_COLORS; i++)
mlx5_hrxq_release(dev, hrxq_idx[i]);
*is_reuse = true;
- return mtr_policy->sub_policys[domain][j];
+ return mtr_policy->sub_policies[domain][j];
}
}
/* Create sub policy. */
- if (!mtr_policy->sub_policys[domain][0]->rix_hrxq[0]) {
+ if (!mtr_policy->sub_policies[domain][0]->rix_hrxq[0]) {
/* Reuse the first pre-allocated sub_policy. */
- sub_policy = mtr_policy->sub_policys[domain][0];
+ sub_policy = mtr_policy->sub_policies[domain][0];
sub_policy_idx = sub_policy->idx;
} else {
sub_policy = mlx5_ipool_zmalloc
@@ -16967,7 +16967,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
"rules for ingress domain.");
goto rss_sub_policy_error;
}
- if (sub_policy != mtr_policy->sub_policys[domain][0]) {
+ if (sub_policy != mtr_policy->sub_policies[domain][0]) {
i = (mtr_policy->sub_policy_num >>
(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
MLX5_MTR_SUB_POLICY_NUM_MASK;
@@ -16975,7 +16975,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
DRV_LOG(ERR, "No free sub-policy slot.");
goto rss_sub_policy_error;
}
- mtr_policy->sub_policys[domain][i] = sub_policy;
+ mtr_policy->sub_policies[domain][i] = sub_policy;
i++;
mtr_policy->sub_policy_num &= ~(MLX5_MTR_SUB_POLICY_NUM_MASK <<
(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain));
@@ -16989,11 +16989,11 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev,
rss_sub_policy_error:
if (sub_policy) {
__flow_dv_destroy_sub_policy_rules(dev, sub_policy);
- if (sub_policy != mtr_policy->sub_policys[domain][0]) {
+ if (sub_policy != mtr_policy->sub_policies[domain][0]) {
i = (mtr_policy->sub_policy_num >>
(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
MLX5_MTR_SUB_POLICY_NUM_MASK;
- mtr_policy->sub_policys[domain][i] = NULL;
+ mtr_policy->sub_policies[domain][i] = NULL;
mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
sub_policy->idx);
}
@@ -17078,11 +17078,11 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev,
sub_policy = sub_policies[--j];
mtr_policy = sub_policy->main_policy;
__flow_dv_destroy_sub_policy_rules(dev, sub_policy);
- if (sub_policy != mtr_policy->sub_policys[domain][0]) {
+ if (sub_policy != mtr_policy->sub_policies[domain][0]) {
sub_policy_num = (mtr_policy->sub_policy_num >>
(MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) &
MLX5_MTR_SUB_POLICY_NUM_MASK;
- mtr_policy->sub_policys[domain][sub_policy_num - 1] =
+ mtr_policy->sub_policies[domain][sub_policy_num - 1] =
NULL;
sub_policy_num--;
mtr_policy->sub_policy_num &=
@@ -17157,7 +17157,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
if (!next_fm->drop_cnt)
goto exit;
color_reg_c_idx = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, error);
- sub_policy = mtr_policy->sub_policys[domain][0];
+ sub_policy = mtr_policy->sub_policies[domain][0];
for (i = 0; i < RTE_COLORS; i++) {
bool rule_exist = false;
struct mlx5_meter_policy_action_container *act_cnt;
@@ -17184,7 +17184,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev,
next_policy = mlx5_flow_meter_policy_find(dev,
next_fm->policy_id, NULL);
MLX5_ASSERT(next_policy);
- next_sub_policy = next_policy->sub_policys[domain][0];
+ next_sub_policy = next_policy->sub_policies[domain][0];
tbl_data = container_of(next_sub_policy->tbl_rsc,
struct mlx5_flow_tbl_data_entry, tbl);
act_cnt = &mtr_policy->act_cnt[i];
@@ -17277,13 +17277,13 @@ flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
new_policy_num = sub_policy_num;
for (j = 0; j < sub_policy_num; j++) {
sub_policy =
- mtr_policy->sub_policys[domain][j];
+ mtr_policy->sub_policies[domain][j];
if (sub_policy) {
__flow_dv_destroy_sub_policy_rules(dev,
sub_policy);
if (sub_policy !=
- mtr_policy->sub_policys[domain][0]) {
- mtr_policy->sub_policys[domain][j] =
+ mtr_policy->sub_policies[domain][0]) {
+ mtr_policy->sub_policies[domain][j] =
NULL;
mlx5_ipool_free
(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
@@ -17303,7 +17303,7 @@ flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
}
break;
case MLX5_FLOW_FATE_QUEUE:
- sub_policy = mtr_policy->sub_policys[domain][0];
+ sub_policy = mtr_policy->sub_policies[domain][0];
__flow_dv_destroy_sub_policy_rules(dev,
sub_policy);
break;
@@ -18045,7 +18045,7 @@ flow_dv_validate_mtr_policy_acts(struct rte_eth_dev *dev,
domain_color[i] &= hierarchy_domain;
/*
* Non-termination actions only support NIC Tx domain.
- * The adjustion should be skipped when there is no
+ * The adjustment should be skipped when there is no
* action or only END is provided. The default domains
* bit-mask is set to find the MIN intersection.
* The action flags checking should also be skipped.
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 0e4e6ac3..be693e10 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -696,7 +696,7 @@ __mlx5_flow_meter_policy_delete(struct rte_eth_dev *dev,
MLX5_MTR_SUB_POLICY_NUM_MASK;
if (sub_policy_num) {
for (j = 0; j < sub_policy_num; j++) {
- sub_policy = mtr_policy->sub_policys[i][j];
+ sub_policy = mtr_policy->sub_policies[i][j];
if (sub_policy)
mlx5_ipool_free
(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY],
@@ -847,10 +847,10 @@ mlx5_flow_meter_policy_add(struct rte_eth_dev *dev,
policy_idx = sub_policy_idx;
sub_policy->main_policy_id = 1;
}
- mtr_policy->sub_policys[i] =
+ mtr_policy->sub_policies[i] =
(struct mlx5_flow_meter_sub_policy **)
((uint8_t *)mtr_policy + policy_size);
- mtr_policy->sub_policys[i][0] = sub_policy;
+ mtr_policy->sub_policies[i][0] = sub_policy;
sub_policy_num = (mtr_policy->sub_policy_num >>
(MLX5_MTR_SUB_POLICY_NUM_SHIFT * i)) &
MLX5_MTR_SUB_POLICY_NUM_MASK;
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index c8d2f407..04041024 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -181,7 +181,7 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
* Rx queue identification.
*
* @param mode
- * Pointer to the burts mode information.
+ * Pointer to the burst mode information.
*
* @return
* 0 as success, -EINVAL as failure.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index fd2cf209..402b50af 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -731,7 +731,7 @@ mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
* Tx queue identification.
*
* @param mode
- * Pointer to the burts mode information.
+ * Pointer to the burst mode information.
*
* @return
* 0 as success, -EINVAL as failure.
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 0d66c325..2534c175 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -594,12 +594,12 @@ ngbe_vlan_tpid_set(struct rte_eth_dev *dev,
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
int ret = 0;
- uint32_t portctrl, vlan_ext, qinq;
+ uint32_t portctl, vlan_ext, qinq;
- portctrl = rd32(hw, NGBE_PORTCTL);
+ portctl = rd32(hw, NGBE_PORTCTL);
- vlan_ext = (portctrl & NGBE_PORTCTL_VLANEXT);
- qinq = vlan_ext && (portctrl & NGBE_PORTCTL_QINQ);
+ vlan_ext = (portctl & NGBE_PORTCTL_VLANEXT);
+ qinq = vlan_ext && (portctl & NGBE_PORTCTL_QINQ);
switch (vlan_type) {
case RTE_ETH_VLAN_TYPE_INNER:
if (vlan_ext) {
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index ebb5d1ae..41159d6e 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -845,7 +845,7 @@ pfe_eth_init(struct rte_vdev_device *vdev, struct pfe *pfe, int id)
}
static int
-pfe_get_gemac_if_proprties(struct pfe *pfe,
+pfe_get_gemac_if_properties(struct pfe *pfe,
__rte_unused const struct device_node *parent,
unsigned int port, unsigned int if_cnt,
struct ls1012a_pfe_platform_data *pdata)
@@ -1053,7 +1053,7 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
g_pfe->platform_data.ls1012a_mdio_pdata[0].phy_mask = 0xffffffff;
for (ii = 0; ii < interface_count; ii++) {
- pfe_get_gemac_if_proprties(g_pfe, np, ii, interface_count,
+ pfe_get_gemac_if_properties(g_pfe, np, ii, interface_count,
&g_pfe->platform_data);
}
diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index c69920be..7a0f0ed1 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -283,7 +283,7 @@ dma_addr_t ecore_chain_get_pbl_phys(struct ecore_chain *p_chain)
/**
* @brief ecore_chain_advance_page -
*
- * Advance the next element accros pages for a linked chain
+ * Advance the next element across pages for a linked chain
*
* @param p_chain
* @param p_next_elem
@@ -507,7 +507,7 @@ static OSAL_INLINE void *ecore_chain_consume(struct ecore_chain *p_chain)
*
* Resets the chain to its start state
*
- * @param p_chain pointer to a previously allocted chain
+ * @param p_chain pointer to a previously allocated chain
*/
static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
{
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index d3025724..10876dac 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -70,8 +70,8 @@ struct src_ent {
u64 next;
};
-#define CDUT_SEG_ALIGNMET 3 /* in 4k chunks */
-#define CDUT_SEG_ALIGNMET_IN_BYTES (1 << (CDUT_SEG_ALIGNMET + 12))
+#define CDUT_SEG_ALIGNMENT 3 /* in 4k chunks */
+#define CDUT_SEG_ALIGNMENT_IN_BYTES (1 << (CDUT_SEG_ALIGNMENT + 12))
#define CONN_CXT_SIZE(p_hwfn) \
ALIGNED_TYPE_SIZE(union conn_context, p_hwfn)
@@ -1383,7 +1383,7 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn)
*/
offset = (ILT_PAGE_IN_BYTES(p_cli->p_size.val) *
(p_cli->pf_blks[CDUT_SEG_BLK(i)].start_line -
- p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES;
+ p_cli->first.val)) / CDUT_SEG_ALIGNMENT_IN_BYTES;
cdu_seg_params = 0;
SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type);
@@ -1392,7 +1392,7 @@ static void ecore_cdu_init_pf(struct ecore_hwfn *p_hwfn)
offset = (ILT_PAGE_IN_BYTES(p_cli->p_size.val) *
(p_cli->pf_blks[CDUT_FL_SEG_BLK(i, PF)].start_line -
- p_cli->first.val)) / CDUT_SEG_ALIGNMET_IN_BYTES;
+ p_cli->first.val)) / CDUT_SEG_ALIGNMENT_IN_BYTES;
cdu_seg_params = 0;
SET_FIELD(cdu_seg_params, CDU_SEG_REG_TYPE, p_seg->type);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index e895dee4..7511a4ae 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1666,7 +1666,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
/* bitmaps for indicating active traffic classes.
* Special case for Arrowhead 4 port
*/
-/* 0..3 actualy used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
+/* 0..3 actually used, 4 serves OOO, 7 serves high priority stuff (e.g. DCQCN) */
#define ACTIVE_TCS_BMAP 0x9f
/* 0..3 actually used, OOO and high priority stuff all use 3 */
#define ACTIVE_TCS_BMAP_4PORT_K2 0xf
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 9ddf502e..57f34a87 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -509,7 +509,7 @@ enum ecore_eng {
*
* @param p_dev
*
- * @return enum ecore_eng - L2 affintiy hint
+ * @return enum ecore_eng - L2 affinity hint
*/
enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev);
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index bd7bd865..9f592662 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -445,7 +445,7 @@ struct ystorm_eth_conn_ag_ctx {
#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT 6
#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1 /* rule4en */
#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT 7
- u8 tx_q0_int_coallecing_timeset /* byte2 */;
+ u8 tx_q0_int_coalescing_timeset /* byte2 */;
u8 byte3 /* byte3 */;
__le16 word0 /* word0 */;
__le32 terminate_spqe /* reg0 */;
@@ -525,7 +525,7 @@ struct ustorm_eth_conn_ag_ctx {
__le32 reg0 /* reg0 */;
__le32 reg1 /* reg1 */;
__le32 reg2 /* reg2 */;
- __le32 tx_int_coallecing_timeset /* reg3 */;
+ __le32 tx_int_coalescing_timeset /* reg3 */;
__le16 tx_drv_bd_cons /* word2 */;
__le16 rx_drv_cqe_cons /* word3 */;
};
diff --git a/drivers/net/qede/base/ecore_hw_defs.h b/drivers/net/qede/base/ecore_hw_defs.h
index 92361e79..9610f04a 100644
--- a/drivers/net/qede/base/ecore_hw_defs.h
+++ b/drivers/net/qede/base/ecore_hw_defs.h
@@ -7,7 +7,7 @@
#ifndef _ECORE_IGU_DEF_H_
#define _ECORE_IGU_DEF_H_
-/* Fields of IGU PF CONFIGRATION REGISTER */
+/* Fields of IGU PF CONFIGURATION REGISTER */
/* function enable */
#define IGU_PF_CONF_FUNC_EN (0x1 << 0)
/* MSI/MSIX enable */
@@ -21,7 +21,7 @@
/* simd all ones mode */
#define IGU_PF_CONF_SIMD_MODE (0x1 << 5)
-/* Fields of IGU VF CONFIGRATION REGISTER */
+/* Fields of IGU VF CONFIGURATION REGISTER */
/* function enable */
#define IGU_VF_CONF_FUNC_EN (0x1 << 0)
/* MSI/MSIX enable */
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 6a52f32c..aebea2e0 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -348,7 +348,7 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
BTB_HEADROOM_BLOCKS;
/* Find blocks per physical TC. use factor to avoid floating
- * arithmethic.
+ * arithmetic.
*/
num_tcs_in_port = 0;
for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
@@ -1964,7 +1964,7 @@ static u8 ecore_calc_cdu_validation_byte(struct ecore_hwfn *p_hwfn,
return validation_byte;
}
-/* Calcualte and set validation bytes for session context */
+/* Calculate and set validation bytes for session context */
void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
void *p_ctx_mem, u16 ctx_size,
u8 ctx_type, u32 cid)
@@ -1984,7 +1984,7 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
*u_val_ptr = ecore_calc_cdu_validation_byte(p_hwfn, ctx_type, 5, cid);
}
-/* Calcualte and set validation bytes for task context */
+/* Calculate and set validation bytes for task context */
void ecore_calc_task_ctx_validation(struct ecore_hwfn *p_hwfn, void *p_ctx_mem,
u16 ctx_size, u8 ctx_type, u32 tid)
{
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index a393d088..a8b58e6a 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -457,7 +457,7 @@ u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id, u8
void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
/**
- * @brief ecore_calc_session_ctx_validation - Calcualte validation byte for
+ * @brief ecore_calc_session_ctx_validation - Calculate validation byte for
* session context.
*
* @param p_hwfn - HW device data
@@ -473,7 +473,7 @@ void ecore_calc_session_ctx_validation(struct ecore_hwfn *p_hwfn,
u32 cid);
/**
- * @brief ecore_calc_task_ctx_validation - Calcualte validation byte for task
+ * @brief ecore_calc_task_ctx_validation - Calculate validation byte for task
* context.
*
* @param p_hwfn - HW device data
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 2c4aac94..89c03101 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -486,7 +486,7 @@ enum _ecore_status_t ecore_db_rec_handler(struct ecore_hwfn *p_hwfn,
return rc;
}
- /* flush any pedning (e)dpm as they may never arrive */
+ /* flush any pending (e)dpm as they may never arrive */
ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1);
/* release overflow sticky indication (stop silently dropping
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index bd7c5703..15c3cfd1 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -153,7 +153,7 @@ ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
#endif
/* This struct is part of ecore_dev and contains data relevant to all hwfns;
- * Initialized only if SR-IOV cpabability is exposed in PCIe config space.
+ * Initialized only if SR-IOV capability is exposed in PCIe config space.
*/
struct ecore_hw_sriov_info {
/* standard SRIOV capability fields, mostly for debugging */
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index af234dec..7624e3b5 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2293,7 +2293,7 @@ ecore_get_queue_coalesce(struct ecore_hwfn *p_hwfn, u16 *p_coal,
rc = ecore_vf_pf_get_coalesce(p_hwfn, p_coal, p_cid);
if (rc != ECORE_SUCCESS)
DP_NOTICE(p_hwfn, false,
- "Unable to read queue calescing\n");
+ "Unable to read queue coalescing\n");
return rc;
}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 185cc233..eadc6c7d 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -67,7 +67,7 @@ struct ecore_mcp_info {
u16 mfw_mb_length;
u32 mcp_hist;
- /* Capabilties negotiated with the MFW */
+ /* Capabilities negotiated with the MFW */
u32 capabilities;
};
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index c3922ba4..cdfe2caa 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -159,7 +159,7 @@ struct ecore_mcp_rdma_stats {
u64 rx_pkts;
u64 tx_pkts;
u64 rx_bytes;
- u64 tx_byts;
+ u64 tx_bytes;
};
enum ecore_mcp_protocol_type {
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 02f61368..034fde03 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -706,7 +706,7 @@ void ecore_spq_return_entry(struct ecore_hwfn *p_hwfn,
* @brief ecore_spq_add_entry - adds a new entry to the pending
* list. Should be used while lock is being held.
*
- * Addes an entry to the pending list is there is room (en empty
+ * Adds an entry to the pending list if there is room (an empty
* element is available in the free_pool), or else places the
* entry in the unlimited_pending pool.
*
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 0958e5a0..a0a3077b 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -174,7 +174,7 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
u8 *fw_return_code);
/**
- * @brief ecore_spq_allocate - Alloocates & initializes the SPQ and EQ.
+ * @brief ecore_spq_allocate - Allocates & initializes the SPQ and EQ.
*
* @param p_hwfn
*
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index e12e9981..2ce518f6 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -1034,7 +1034,7 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
*
* @brief The function invalidates all the VF entries,
* technically this isn't required, but added for
- * cleaness and ease of debugging incase a VF attempts to
+ * cleanliness and ease of debugging in case a VF attempts to
* produce an interrupt after it has been taken down.
*
* @param p_hwfn
@@ -3564,7 +3564,7 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
ECORE_SUCCESS)
goto out;
- /* Determine if the unicast filtering is acceptible by PF */
+ /* Determine if the unicast filtering is acceptable by PF */
if ((p_bulletin->valid_bitmap & (1 << VLAN_ADDR_FORCED)) &&
(params.type == ECORE_FILTER_VLAN ||
params.type == ECORE_FILTER_MAC_VLAN)) {
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index e748e67d..3530675d 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -183,7 +183,7 @@ struct ecore_pf_iov {
u64 active_vfs[ECORE_VF_ARRAY_LENGTH];
#endif
- /* Allocate message address continuosuly and split to each VF */
+ /* Allocate message address continuously and split to each VF */
void *mbx_msg_virt_addr;
dma_addr_t mbx_msg_phys_addr;
u32 mbx_msg_size;
@@ -198,7 +198,7 @@ struct ecore_pf_iov {
#ifdef CONFIG_ECORE_SRIOV
/**
* @brief Read sriov related information and allocated resources
- * reads from configuraiton space, shmem, etc.
+ * reads from configuration space, shmem, etc.
*
* @param p_hwfn
*
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index a36ae47c..fe959238 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -547,7 +547,7 @@ ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn,
return ECORE_NOMEM;
}
- /* Doorbells are tricky; Upper-layer has alreday set the hwfn doorbell
+ /* Doorbells are tricky; Upper-layer has already set the hwfn doorbell
* value, but there are several incompatibily scenarios where that
* would be incorrect and we'd need to override it.
*/
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index f92dc428..df48f6cf 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -242,7 +242,7 @@ struct pfvf_start_queue_resp_tlv {
};
/* Extended queue information - additional index for reference inside qzone.
- * If commmunicated between VF/PF, each TLV relating to queues should be
+ * If communicated between VF/PF, each TLV relating to queues should be
* extended by one such [or have a future base TLV that already contains info].
*/
struct vfpf_qid_tlv {
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 511742c6..504e1553 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -530,7 +530,7 @@ struct public_global {
u32 debug_mb_offset;
u32 phymod_dbg_mb_offset;
struct couple_mode_teaming cmt;
-/* Temperature in Celcius (-255C / +255C), measured every second. */
+/* Temperature in Celsius (-255C / +255C), measured every second. */
s32 internal_temperature;
u32 mfw_ver;
u32 running_bundle_id;
@@ -1968,7 +1968,7 @@ enum MFW_DRV_MSG_TYPE {
((u8)((u8 *)(MFW_MB_P(shmem_func)->msg))[msg_id]++;)
struct public_mfw_mb {
- u32 sup_msgs; /* Assigend with MFW_DRV_MSG_MAX */
+ u32 sup_msgs; /* Assigned with MFW_DRV_MSG_MAX */
/* Incremented by the MFW */
u32 msg[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];
/* Incremented by the driver */
diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c
index af86bcc6..9a2f05ac 100644
--- a/drivers/net/qede/qede_debug.c
+++ b/drivers/net/qede/qede_debug.c
@@ -457,7 +457,7 @@ struct split_type_defs {
(MCP_REG_SCRATCH + \
offsetof(struct static_init, sections[SPAD_SECTION_TRACE]))
-#define MAX_SW_PLTAFORM_STR_SIZE 64
+#define MAX_SW_PLATFORM_STR_SIZE 64
#define EMPTY_FW_VERSION_STR "???_???_???_???"
#define EMPTY_FW_IMAGE_STR "???????????????"
@@ -1227,13 +1227,13 @@ static u32 qed_dump_common_global_params(struct ecore_hwfn *p_hwfn,
u8 num_specific_global_params)
{
struct dbg_tools_data *dev_data = &p_hwfn->dbg_info;
- char sw_platform_str[MAX_SW_PLTAFORM_STR_SIZE];
+ char sw_platform_str[MAX_SW_PLATFORM_STR_SIZE];
u32 offset = 0;
u8 num_params;
/* Fill platform string */
ecore_set_platform_str(p_hwfn, sw_platform_str,
- MAX_SW_PLTAFORM_STR_SIZE);
+ MAX_SW_PLATFORM_STR_SIZE);
/* Dump global params section header */
num_params = NUM_COMMON_GLOBAL_PARAMS + num_specific_global_params +
@@ -7441,11 +7441,11 @@ qed_print_idle_chk_results_wrapper(struct ecore_hwfn *p_hwfn,
u32 num_dumped_dwords,
char *results_buf)
{
- u32 num_errors, num_warnnings;
+ u32 num_errors, num_warnings;
return qed_print_idle_chk_results(p_hwfn, dump_buf, num_dumped_dwords,
results_buf, &num_errors,
- &num_warnnings);
+ &num_warnings);
}
/* Feature meta data lookup table */
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index b34c9afd..9127c903 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1805,7 +1805,7 @@ struct sfc_mae_field_locator {
efx_mae_field_id_t field_id;
size_t size;
/* Field offset in the corresponding rte_flow_item_ struct */
- size_t ofst;
+ size_t offset;
};
static void
@@ -1820,8 +1820,8 @@ sfc_mae_item_build_supp_mask(const struct sfc_mae_field_locator *field_locators,
for (i = 0; i < nb_field_locators; ++i) {
const struct sfc_mae_field_locator *fl = &field_locators[i];
- SFC_ASSERT(fl->ofst + fl->size <= mask_size);
- memset(RTE_PTR_ADD(mask_ptr, fl->ofst), 0xff, fl->size);
+ SFC_ASSERT(fl->offset + fl->size <= mask_size);
+ memset(RTE_PTR_ADD(mask_ptr, fl->offset), 0xff, fl->size);
}
}
@@ -1843,8 +1843,8 @@ sfc_mae_parse_item(const struct sfc_mae_field_locator *field_locators,
rc = efx_mae_match_spec_field_set(ctx->match_spec,
fremap[fl->field_id],
- fl->size, spec + fl->ofst,
- fl->size, mask + fl->ofst);
+ fl->size, spec + fl->offset,
+ fl->size, mask + fl->offset);
if (rc != 0)
break;
}
@@ -2387,7 +2387,7 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = {
* for Geneve and NVGRE, too.
*/
.size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni),
- .ofst = offsetof(struct rte_flow_item_vxlan, vni),
+ .offset = offsetof(struct rte_flow_item_vxlan, vni),
},
};
@@ -3297,7 +3297,7 @@ sfc_mae_rule_parse_action_of_set_vlan_pcp(
struct sfc_mae_parsed_item {
const struct rte_flow_item *item;
- size_t proto_header_ofst;
+ size_t proto_header_offset;
size_t proto_header_size;
};
@@ -3316,20 +3316,20 @@ sfc_mae_header_force_item_masks(uint8_t *header_buf,
const struct sfc_mae_parsed_item *parsed_item;
const struct rte_flow_item *item;
size_t proto_header_size;
- size_t ofst;
+ size_t offset;
parsed_item = &parsed_items[item_idx];
proto_header_size = parsed_item->proto_header_size;
item = parsed_item->item;
- for (ofst = 0; ofst < proto_header_size;
- ofst += sizeof(rte_be16_t)) {
- rte_be16_t *wp = RTE_PTR_ADD(header_buf, ofst);
+ for (offset = 0; offset < proto_header_size;
+ offset += sizeof(rte_be16_t)) {
+ rte_be16_t *wp = RTE_PTR_ADD(header_buf, offset);
const rte_be16_t *w_maskp;
const rte_be16_t *w_specp;
- w_maskp = RTE_PTR_ADD(item->mask, ofst);
- w_specp = RTE_PTR_ADD(item->spec, ofst);
+ w_maskp = RTE_PTR_ADD(item->mask, offset);
+ w_specp = RTE_PTR_ADD(item->spec, offset);
*wp &= ~(*w_maskp);
*wp |= (*w_specp & *w_maskp);
@@ -3363,7 +3363,7 @@ sfc_mae_rule_parse_action_vxlan_encap(
1 /* VXLAN */];
unsigned int nb_parsed_items = 0;
- size_t eth_ethertype_ofst = offsetof(struct rte_ether_hdr, ether_type);
+ size_t eth_ethertype_offset = offsetof(struct rte_ether_hdr, ether_type);
uint8_t dummy_buf[RTE_MAX(sizeof(struct rte_ipv4_hdr),
sizeof(struct rte_ipv6_hdr))];
struct rte_ipv4_hdr *ipv4 = (void *)dummy_buf;
@@ -3371,8 +3371,8 @@ sfc_mae_rule_parse_action_vxlan_encap(
struct rte_vxlan_hdr *vxlan = NULL;
struct rte_udp_hdr *udp = NULL;
unsigned int nb_vlan_tags = 0;
- size_t next_proto_ofst = 0;
- size_t ethertype_ofst = 0;
+ size_t next_proto_offset = 0;
+ size_t ethertype_offset = 0;
uint64_t exp_items;
int rc;
@@ -3444,7 +3444,7 @@ sfc_mae_rule_parse_action_vxlan_encap(
proto_header_size = sizeof(struct rte_ether_hdr);
- ethertype_ofst = eth_ethertype_ofst;
+ ethertype_offset = eth_ethertype_offset;
exp_items = RTE_BIT64(RTE_FLOW_ITEM_TYPE_VLAN) |
RTE_BIT64(RTE_FLOW_ITEM_TYPE_IPV4) |
@@ -3458,13 +3458,13 @@ sfc_mae_rule_parse_action_vxlan_encap(
proto_header_size = sizeof(struct rte_vlan_hdr);
- ethertypep = RTE_PTR_ADD(buf, eth_ethertype_ofst);
+ ethertypep = RTE_PTR_ADD(buf, eth_ethertype_offset);
*ethertypep = RTE_BE16(RTE_ETHER_TYPE_QINQ);
- ethertypep = RTE_PTR_ADD(buf, ethertype_ofst);
+ ethertypep = RTE_PTR_ADD(buf, ethertype_offset);
*ethertypep = RTE_BE16(RTE_ETHER_TYPE_VLAN);
- ethertype_ofst =
+ ethertype_offset =
bounce_eh->size +
offsetof(struct rte_vlan_hdr, eth_proto);
@@ -3482,10 +3482,10 @@ sfc_mae_rule_parse_action_vxlan_encap(
proto_header_size = sizeof(struct rte_ipv4_hdr);
- ethertypep = RTE_PTR_ADD(buf, ethertype_ofst);
+ ethertypep = RTE_PTR_ADD(buf, ethertype_offset);
*ethertypep = RTE_BE16(RTE_ETHER_TYPE_IPV4);
- next_proto_ofst =
+ next_proto_offset =
bounce_eh->size +
offsetof(struct rte_ipv4_hdr, next_proto_id);
@@ -3501,10 +3501,10 @@ sfc_mae_rule_parse_action_vxlan_encap(
proto_header_size = sizeof(struct rte_ipv6_hdr);
- ethertypep = RTE_PTR_ADD(buf, ethertype_ofst);
+ ethertypep = RTE_PTR_ADD(buf, ethertype_offset);
*ethertypep = RTE_BE16(RTE_ETHER_TYPE_IPV6);
- next_proto_ofst = bounce_eh->size +
+ next_proto_offset = bounce_eh->size +
offsetof(struct rte_ipv6_hdr, proto);
ipv6 = (struct rte_ipv6_hdr *)buf_cur;
@@ -3519,7 +3519,7 @@ sfc_mae_rule_parse_action_vxlan_encap(
proto_header_size = sizeof(struct rte_udp_hdr);
- next_protop = RTE_PTR_ADD(buf, next_proto_ofst);
+ next_protop = RTE_PTR_ADD(buf, next_proto_offset);
*next_protop = IPPROTO_UDP;
udp = (struct rte_udp_hdr *)buf_cur;
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index 9029ad15..f2fba304 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -53,21 +53,21 @@ sfc_tso_outer_udp_fix_len(const struct rte_mbuf *m, uint8_t *tsoh)
static inline void
sfc_tso_innermost_ip_fix_len(const struct rte_mbuf *m, uint8_t *tsoh,
- size_t iph_ofst)
+ size_t iph_offset)
{
size_t ip_payload_len = m->l4_len + m->tso_segsz;
- size_t field_ofst;
+ size_t field_offset;
rte_be16_t len;
if (m->ol_flags & RTE_MBUF_F_TX_IPV4) {
- field_ofst = offsetof(struct rte_ipv4_hdr, total_length);
+ field_offset = offsetof(struct rte_ipv4_hdr, total_length);
len = rte_cpu_to_be_16(m->l3_len + ip_payload_len);
} else {
- field_ofst = offsetof(struct rte_ipv6_hdr, payload_len);
+ field_offset = offsetof(struct rte_ipv6_hdr, payload_len);
len = rte_cpu_to_be_16(ip_payload_len);
}
- rte_memcpy(tsoh + iph_ofst + field_ofst, &len, sizeof(len));
+ rte_memcpy(tsoh + iph_offset + field_offset, &len, sizeof(len));
}
unsigned int sfc_tso_prepare_header(uint8_t *tsoh, size_t header_len,
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index ac4d4e08..e617c9af 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1026,12 +1026,12 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
int ret = 0;
- uint32_t portctrl, vlan_ext, qinq;
+ uint32_t portctl, vlan_ext, qinq;
- portctrl = rd32(hw, TXGBE_PORTCTL);
+ portctl = rd32(hw, TXGBE_PORTCTL);
- vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
- qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
+ vlan_ext = (portctl & TXGBE_PORTCTL_VLANEXT);
+ qinq = vlan_ext && (portctl & TXGBE_PORTCTL_QINQ);
switch (vlan_type) {
case RTE_ETH_VLAN_TYPE_INNER:
if (vlan_ext) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b317649d..8180f9ff 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2871,7 +2871,7 @@ static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
type, i);
VIRTQUEUE_DUMP(vq);
- while ((buf = virtqueue_detach_unused(vq)) != NULL) {
+ while ((buf = virtqueue_detach_unused_cookie(vq)) != NULL) {
rte_pktmbuf_free(buf);
mbuf_num++;
}
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index c98d696e..bc964ec5 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -16,7 +16,7 @@
* 2) mbuf that hasn't been consumed by backend.
*/
struct rte_mbuf *
-virtqueue_detach_unused(struct virtqueue *vq)
+virtqueue_detach_unused_cookie(struct virtqueue *vq)
{
struct rte_mbuf *cookie;
struct virtio_hw *hw;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 99c68cf6..0307f7a5 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -481,7 +481,7 @@ void virtqueue_dump(struct virtqueue *vq);
/**
* Get all mbufs to be freed.
*/
-struct rte_mbuf *virtqueue_detach_unused(struct virtqueue *vq);
+struct rte_mbuf *virtqueue_detach_unused_cookie(struct virtqueue *vq);
/* Flush the elements in the used ring. */
void virtqueue_rxvq_flush(struct virtqueue *vq);
diff --git a/drivers/net/vmxnet3/base/upt1_defs.h b/drivers/net/vmxnet3/base/upt1_defs.h
index 5fd7a397..40604f34 100644
--- a/drivers/net/vmxnet3/base/upt1_defs.h
+++ b/drivers/net/vmxnet3/base/upt1_defs.h
@@ -57,7 +57,7 @@ UPT1_RxStats;
/* interrupt moderation level */
#define UPT1_IML_NONE 0 /* no interrupt moderation */
#define UPT1_IML_HIGHEST 7 /* least intr generated */
-#define UPT1_IML_ADAPTIVE 8 /* adpative intr moderation */
+#define UPT1_IML_ADAPTIVE 8 /* adaptive intr moderation */
/* values for UPT1_RSSConf.hashFunc */
#define UPT1_RSS_HASH_TYPE_NONE 0x0
diff --git a/drivers/raw/ifpga/base/ifpga_defines.h b/drivers/raw/ifpga/base/ifpga_defines.h
index dca1518a..2b822e03 100644
--- a/drivers/raw/ifpga/base/ifpga_defines.h
+++ b/drivers/raw/ifpga/base/ifpga_defines.h
@@ -753,7 +753,7 @@ struct feature_fme_ifpmon_ch_ctr {
union {
u64 csr;
struct {
- /* Cache Counter for even addresse */
+ /* Cache Counter for even addresses */
u64 cache_counter:48;
u16 rsvd:12; /* Reserved */
/* Cache Event being reported */
@@ -1279,7 +1279,7 @@ struct feature_fme_hssi_eth_ctrl {
u32 data:32; /* HSSI data */
u16 address:16; /* HSSI address */
/*
- * HSSI comamnd
+ * HSSI command
* 0x0 - No request
* 0x08 - SW register RD request
* 0x10 - SW register WR request
@@ -1595,7 +1595,7 @@ struct feature_port_stp {
* @FPGA_PR_STATE_OPERATING: FPGA PR done
*/
enum fpga_pr_states {
- /* canot determine state states */
+ /* cannot determine state */
FPGA_PR_STATE_UNKNOWN,
/* write sequence: init, write, complete */
diff --git a/drivers/raw/ifpga/base/ifpga_feature_dev.c b/drivers/raw/ifpga/base/ifpga_feature_dev.c
index 08135137..c48d172d 100644
--- a/drivers/raw/ifpga/base/ifpga_feature_dev.c
+++ b/drivers/raw/ifpga/base/ifpga_feature_dev.c
@@ -227,7 +227,7 @@ static struct feature_driver fme_feature_drvs[] = {
&fme_i2c_master_ops),},
{FEATURE_DRV(FME_FEATURE_ID_ETH_GROUP, FME_FEATURE_ETH_GROUP,
&fme_eth_group_ops),},
- {0, NULL, NULL}, /* end of arrary */
+ {0, NULL, NULL}, /* end of array */
};
static struct feature_driver port_feature_drvs[] = {
diff --git a/drivers/raw/ifpga/base/ifpga_fme_pr.c b/drivers/raw/ifpga/base/ifpga_fme_pr.c
index 9997942d..7a057741 100644
--- a/drivers/raw/ifpga/base/ifpga_fme_pr.c
+++ b/drivers/raw/ifpga/base/ifpga_fme_pr.c
@@ -297,7 +297,7 @@ int do_pr(struct ifpga_hw *hw, u32 port_id, const char *buffer,
bts_hdr = (const struct bts_header *)buffer;
if (is_valid_bts(bts_hdr)) {
- dev_info(hw, "this is a valid bitsteam..\n");
+ dev_info(hw, "this is a valid bitstream..\n");
header_size = sizeof(struct bts_header) +
bts_hdr->metadata_len;
if (size < header_size)
diff --git a/drivers/raw/ifpga/base/opae_hw_api.h b/drivers/raw/ifpga/base/opae_hw_api.h
index 7e04b564..af5b19f8 100644
--- a/drivers/raw/ifpga/base/opae_hw_api.h
+++ b/drivers/raw/ifpga/base/opae_hw_api.h
@@ -129,7 +129,7 @@ opae_bridge_alloc(const char *name, struct opae_bridge_ops *ops, void *data);
int opae_bridge_reset(struct opae_bridge *br);
#define opae_bridge_free(br) opae_free(br)
-/* OPAE Acceleraotr Data Structure */
+/* OPAE Accelerator Data Structure */
struct opae_accelerator_ops;
/*
@@ -267,7 +267,7 @@ struct opae_adapter_ops {
TAILQ_HEAD(opae_accelerator_list, opae_accelerator);
-#define opae_adapter_for_each_acc(adatper, acc) \
+#define opae_adapter_for_each_acc(adapter, acc) \
TAILQ_FOREACH(acc, &adapter->acc_list, node)
#define SHM_PREFIX "/IFPGA:"
diff --git a/drivers/raw/ioat/ioat_rawdev.c b/drivers/raw/ioat/ioat_rawdev.c
index 5396671d..d4dcb233 100644
--- a/drivers/raw/ioat/ioat_rawdev.c
+++ b/drivers/raw/ioat/ioat_rawdev.c
@@ -200,7 +200,7 @@ ioat_rawdev_create(const char *name, struct rte_pci_device *dev)
ioat->rawdev = rawdev;
ioat->mz = mz;
ioat->regs = dev->mem_resource[0].addr;
ioat->doorbell = &ioat->regs->dmacount;
ioat->ring_size = 0;
ioat->desc_ring = NULL;
ioat->status_addr = ioat->mz->iova +
diff --git a/drivers/raw/ioat/ioat_spec.h b/drivers/raw/ioat/ioat_spec.h
index 6aa467e4..51c4b3f8 100644
--- a/drivers/raw/ioat/ioat_spec.h
+++ b/drivers/raw/ioat/ioat_spec.h
@@ -60,7 +60,7 @@ struct rte_ioat_registers {
uint8_t reserved6[0x2]; /* 0x82 */
uint8_t chancmd; /* 0x84 */
uint8_t reserved3[1]; /* 0x85 */
uint16_t dmacount; /* 0x86 */
uint64_t chansts; /* 0x88 */
uint64_t chainaddr; /* 0x90 */
uint64_t chancmp; /* 0x98 */
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 9a2db7e4..72464cad 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -226,7 +226,7 @@ complete_umr_wqe(struct mlx5_regex_qp *qp, struct mlx5_regex_hw_qp *qp_obj,
rte_cpu_to_be_32(mkey_job->imkey->id));
/* Set UMR WQE control seg. */
ucseg->mkey_mask |= rte_cpu_to_be_64(MLX5_WQE_UMR_CTRL_MKEY_MASK_LEN |
- MLX5_WQE_UMR_CTRL_FLAG_TRNSLATION_OFFSET |
+ MLX5_WQE_UMR_CTRL_FLAG_TRANSLATION_OFFSET |
MLX5_WQE_UMR_CTRL_MKEY_MASK_ACCESS_LOCAL_WRITE);
ucseg->klm_octowords = rte_cpu_to_be_16(klm_align);
/* Set mkey context seg. */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 22617924..c2aa6bc8 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -92,7 +92,7 @@ struct mlx5_vdpa_virtq {
struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
- struct mlx5_devx_virtio_q_couners_attr reset;
+ struct mlx5_devx_virtio_q_counters_attr reset;
};
struct mlx5_vdpa_steer {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 2f32aef6..bbb520e8 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -259,7 +259,7 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->counters = mlx5_devx_cmd_create_virtio_q_counters
(priv->cdev->ctx);
if (!virtq->counters) {
- DRV_LOG(ERR, "Failed to create virtq couners for virtq"
+ DRV_LOG(ERR, "Failed to create virtq counters for virtq"
" %d.", index);
goto error;
}
@@ -592,7 +592,7 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
struct rte_vdpa_stat *stats, unsigned int n)
{
struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
- struct mlx5_devx_virtio_q_couners_attr attr = {0};
+ struct mlx5_devx_virtio_q_counters_attr attr = {0};
int ret;
if (!virtq->counters) {
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index 3fc4b3a8..88680966 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -106,7 +106,7 @@ fill_ipsec_session(struct rte_ipsec_session *ss, struct ipsec_ctx *ctx,
}
/*
- * group input packets byt the SA they belong to.
+ * group input packets by the SA they belong to.
*/
static uint32_t
sa_group(void *sa_ptr[], struct rte_mbuf *pkts[],
diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c
index 9064fc3a..1b646059 100644
--- a/examples/vhost/virtio_net.c
+++ b/examples/vhost/virtio_net.c
@@ -62,7 +62,7 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
struct rte_mbuf *m, uint16_t desc_idx)
{
uint32_t desc_avail, desc_offset;
- uint64_t desc_chunck_len;
+ uint64_t desc_chunk_len;
uint32_t mbuf_avail, mbuf_offset;
uint32_t cpy_len;
struct vring_desc *desc;
@@ -72,10 +72,10 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
uint16_t nr_desc = 1;
desc = &vr->desc[desc_idx];
- desc_chunck_len = desc->len;
+ desc_chunk_len = desc->len;
desc_gaddr = desc->addr;
desc_addr = rte_vhost_va_from_guest_pa(
- dev->mem, desc_gaddr, &desc_chunck_len);
+ dev->mem, desc_gaddr, &desc_chunk_len);
/*
* Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid
* performance issue with some versions of gcc (4.8.4 and 5.3.0) which
@@ -87,7 +87,7 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
rte_prefetch0((void *)(uintptr_t)desc_addr);
/* write virtio-net header */
- if (likely(desc_chunck_len >= dev->hdr_len)) {
+ if (likely(desc_chunk_len >= dev->hdr_len)) {
*(struct virtio_net_hdr *)(uintptr_t)desc_addr = virtio_hdr;
desc_offset = dev->hdr_len;
} else {
@@ -112,11 +112,11 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
src += len;
}
- desc_chunck_len = desc->len - dev->hdr_len;
+ desc_chunk_len = desc->len - dev->hdr_len;
desc_gaddr += dev->hdr_len;
desc_addr = rte_vhost_va_from_guest_pa(
dev->mem, desc_gaddr,
- &desc_chunck_len);
+ &desc_chunk_len);
if (unlikely(!desc_addr))
return -1;
@@ -147,28 +147,28 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
return -1;
desc = &vr->desc[desc->next];
- desc_chunck_len = desc->len;
+ desc_chunk_len = desc->len;
desc_gaddr = desc->addr;
desc_addr = rte_vhost_va_from_guest_pa(
- dev->mem, desc_gaddr, &desc_chunck_len);
+ dev->mem, desc_gaddr, &desc_chunk_len);
if (unlikely(!desc_addr))
return -1;
desc_offset = 0;
desc_avail = desc->len;
- } else if (unlikely(desc_chunck_len == 0)) {
- desc_chunck_len = desc_avail;
+ } else if (unlikely(desc_chunk_len == 0)) {
+ desc_chunk_len = desc_avail;
desc_gaddr += desc_offset;
desc_addr = rte_vhost_va_from_guest_pa(dev->mem,
desc_gaddr,
- &desc_chunck_len);
+ &desc_chunk_len);
if (unlikely(!desc_addr))
return -1;
desc_offset = 0;
}
- cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail);
+ cpy_len = RTE_MIN(desc_chunk_len, mbuf_avail);
rte_memcpy((void *)((uintptr_t)(desc_addr + desc_offset)),
rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
cpy_len);
@@ -177,7 +177,7 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
mbuf_offset += cpy_len;
desc_avail -= cpy_len;
desc_offset += cpy_len;
- desc_chunck_len -= cpy_len;
+ desc_chunk_len -= cpy_len;
}
return 0;
@@ -246,7 +246,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
struct vring_desc *desc;
uint64_t desc_addr, desc_gaddr;
uint32_t desc_avail, desc_offset;
- uint64_t desc_chunck_len;
+ uint64_t desc_chunk_len;
uint32_t mbuf_avail, mbuf_offset;
uint32_t cpy_len;
struct rte_mbuf *cur = m, *prev = m;
@@ -258,10 +258,10 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
(desc->flags & VRING_DESC_F_INDIRECT))
return -1;
- desc_chunck_len = desc->len;
+ desc_chunk_len = desc->len;
desc_gaddr = desc->addr;
desc_addr = rte_vhost_va_from_guest_pa(
- dev->mem, desc_gaddr, &desc_chunck_len);
+ dev->mem, desc_gaddr, &desc_chunk_len);
if (unlikely(!desc_addr))
return -1;
@@ -275,10 +275,10 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
* header.
*/
desc = &vr->desc[desc->next];
- desc_chunck_len = desc->len;
+ desc_chunk_len = desc->len;
desc_gaddr = desc->addr;
desc_addr = rte_vhost_va_from_guest_pa(
- dev->mem, desc_gaddr, &desc_chunck_len);
+ dev->mem, desc_gaddr, &desc_chunk_len);
if (unlikely(!desc_addr))
return -1;
rte_prefetch0((void *)(uintptr_t)desc_addr);
@@ -290,7 +290,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
mbuf_offset = 0;
mbuf_avail = m->buf_len - RTE_PKTMBUF_HEADROOM;
while (1) {
- cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail);
+ cpy_len = RTE_MIN(desc_chunk_len, mbuf_avail);
rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *,
mbuf_offset),
(void *)((uintptr_t)(desc_addr + desc_offset)),
@@ -300,7 +300,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
mbuf_offset += cpy_len;
desc_avail -= cpy_len;
desc_offset += cpy_len;
- desc_chunck_len -= cpy_len;
+ desc_chunk_len -= cpy_len;
/* This desc reaches to its end, get the next one */
if (desc_avail == 0) {
@@ -312,22 +312,22 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr,
return -1;
desc = &vr->desc[desc->next];
- desc_chunck_len = desc->len;
+ desc_chunk_len = desc->len;
desc_gaddr = desc->addr;
desc_addr = rte_vhost_va_from_guest_pa(
- dev->mem, desc_gaddr, &desc_chunck_len);
+ dev->mem, desc_gaddr, &desc_chunk_len);
if (unlikely(!desc_addr))
return -1;
rte_prefetch0((void *)(uintptr_t)desc_addr);
desc_offset = 0;
desc_avail = desc->len;
- } else if (unlikely(desc_chunck_len == 0)) {
- desc_chunck_len = desc_avail;
+ } else if (unlikely(desc_chunk_len == 0)) {
+ desc_chunk_len = desc_avail;
desc_gaddr += desc_offset;
desc_addr = rte_vhost_va_from_guest_pa(dev->mem,
desc_gaddr,
- &desc_chunck_len);
+ &desc_chunk_len);
if (unlikely(!desc_addr))
return -1;
diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c
index 09331258..2426d57a 100644
--- a/lib/bpf/bpf_validate.c
+++ b/lib/bpf/bpf_validate.c
@@ -856,7 +856,7 @@ eval_mbuf_store(const struct bpf_reg_val *rv, uint32_t opsz)
static const struct {
size_t off;
size_t sz;
- } mbuf_ro_fileds[] = {
+ } mbuf_ro_fields[] = {
{ .off = offsetof(struct rte_mbuf, buf_addr), },
{ .off = offsetof(struct rte_mbuf, refcnt), },
{ .off = offsetof(struct rte_mbuf, nb_segs), },
@@ -866,13 +866,13 @@ eval_mbuf_store(const struct bpf_reg_val *rv, uint32_t opsz)
{ .off = offsetof(struct rte_mbuf, priv_size), },
};
- for (i = 0; i != RTE_DIM(mbuf_ro_fileds) &&
- (mbuf_ro_fileds[i].off + mbuf_ro_fileds[i].sz <=
- rv->u.max || rv->u.max + opsz <= mbuf_ro_fileds[i].off);
+ for (i = 0; i != RTE_DIM(mbuf_ro_fields) &&
+ (mbuf_ro_fields[i].off + mbuf_ro_fields[i].sz <=
+ rv->u.max || rv->u.max + opsz <= mbuf_ro_fields[i].off);
i++)
;
- if (i != RTE_DIM(mbuf_ro_fileds))
+ if (i != RTE_DIM(mbuf_ro_fields))
return "store to the read-only mbuf field";
return NULL;
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 59ea5a54..5f5cd029 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -27,7 +27,7 @@ extern "C" {
#include "rte_cryptodev_trace_fp.h"
-extern const char **rte_cyptodev_names;
+extern const char **rte_cryptodev_names;
/* Logging Macros */
diff --git a/lib/eal/common/eal_common_trace_ctf.c b/lib/eal/common/eal_common_trace_ctf.c
index 33e419aa..8f245941 100644
--- a/lib/eal/common/eal_common_trace_ctf.c
+++ b/lib/eal/common/eal_common_trace_ctf.c
@@ -321,7 +321,7 @@ meta_fix_freq(struct trace *trace, char *meta)
static void
meta_fix_freq_offset(struct trace *trace, char *meta)
{
- uint64_t uptime_tickes_floor, uptime_ticks, freq, uptime_sec;
+ uint64_t uptime_ticks_floor, uptime_ticks, freq, uptime_sec;
uint64_t offset, offset_s;
char *str;
int rc;
@@ -329,12 +329,12 @@ meta_fix_freq_offset(struct trace *trace, char *meta)
uptime_ticks = trace->uptime_ticks &
((1ULL << __RTE_TRACE_EVENT_HEADER_ID_SHIFT) - 1);
freq = rte_get_tsc_hz();
- uptime_tickes_floor = RTE_ALIGN_MUL_FLOOR(uptime_ticks, freq);
+ uptime_ticks_floor = RTE_ALIGN_MUL_FLOOR(uptime_ticks, freq);
- uptime_sec = uptime_tickes_floor / freq;
+ uptime_sec = uptime_ticks_floor / freq;
offset_s = trace->epoch_sec - uptime_sec;
- offset = uptime_ticks - uptime_tickes_floor;
+ offset = uptime_ticks - uptime_ticks_floor;
offset += trace->epoch_nsec * (freq / NSEC_PER_SEC);
str = RTE_PTR_ADD(meta, trace->ctf_meta_offset_freq_off_s);
diff --git a/lib/fib/trie_avx512.c b/lib/fib/trie_avx512.c
index d4d70d84..5df95dda 100644
--- a/lib/fib/trie_avx512.c
+++ b/lib/fib/trie_avx512.c
@@ -111,7 +111,7 @@ trie_vec_lookup_x16x2(void *p, uint8_t ips[32][RTE_FIB6_IPV6_ADDR_SIZE],
/**
* lookup in tbl24
- * Put it inside branch to make compiller happy with -O0
+ * Put it inside branch to make compiler happy with -O0
*/
if (size == sizeof(uint16_t)) {
res_1 = _mm512_i32gather_epi32(idxes_1,
diff --git a/lib/graph/graph_populate.c b/lib/graph/graph_populate.c
index 093512ef..62d2d69c 100644
--- a/lib/graph/graph_populate.c
+++ b/lib/graph/graph_populate.c
@@ -46,7 +46,7 @@ graph_fp_mem_calc_size(struct graph *graph)
}
static void
-graph_header_popluate(struct graph *_graph)
+graph_header_populate(struct graph *_graph)
{
struct rte_graph *graph = _graph->graph;
@@ -184,7 +184,7 @@ graph_fp_mem_populate(struct graph *graph)
{
int rc;
- graph_header_popluate(graph);
+ graph_header_populate(graph);
graph_nodes_populate(graph);
rc = graph_node_nexts_populate(graph);
rc |= graph_src_nodes_populate(graph);
diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c
index aa70929d..8b0c711e 100644
--- a/lib/graph/graph_stats.c
+++ b/lib/graph/graph_stats.c
@@ -329,7 +329,7 @@ rte_graph_cluster_stats_destroy(struct rte_graph_cluster_stats *stat)
}
static inline void
-cluster_node_arregate_stats(struct cluster_node *cluster)
+cluster_node_aggregate_stats(struct cluster_node *cluster)
{
uint64_t calls = 0, cycles = 0, objs = 0, realloc_count = 0;
struct rte_graph_cluster_node_stats *stat = &cluster->stat;
@@ -373,7 +373,7 @@ rte_graph_cluster_stats_get(struct rte_graph_cluster_stats *stat, bool skip_cb)
cluster = stat->clusters;
for (count = 0; count < stat->max_nodes; count++) {
- cluster_node_arregate_stats(cluster);
+ cluster_node_aggregate_stats(cluster);
if (!skip_cb)
rc = stat->fn(!count, (count == stat->max_nodes - 1),
stat->cookie, &cluster->stat);
diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h
index b4628cfc..6995b414 100644
--- a/lib/hash/rte_crc_arm64.h
+++ b/lib/hash/rte_crc_arm64.h
@@ -61,7 +61,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val)
}
/**
- * Allow or disallow use of arm64 SIMD instrinsics for CRC32 hash
+ * Allow or disallow use of arm64 SIMD intrinsics for CRC32 hash
* calculation.
*
* @param alg
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index 6847e36f..e27ac8ac 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -27,7 +27,7 @@ static struct rte_tailq_elem rte_thash_tailq = {
EAL_REGISTER_TAILQ(rte_thash_tailq)
/**
- * Table of some irreducible polinomials over GF(2).
+ * Table of some irreducible polynomials over GF(2).
* For lfsr they are represented in BE bit order, and
* x^0 is masked out.
* For example, poly x^5 + x^2 + 1 will be represented
diff --git a/lib/ip_frag/ip_frag_internal.c b/lib/ip_frag/ip_frag_internal.c
index b436a4c9..01849284 100644
--- a/lib/ip_frag/ip_frag_internal.c
+++ b/lib/ip_frag/ip_frag_internal.c
@@ -172,7 +172,7 @@ ip_frag_process(struct ip_frag_pkt *fp, struct rte_ip_frag_death_row *dr,
mb = ipv6_frag_reassemble(fp);
}
- /* errorenous set of fragments. */
+ /* erroneous set of fragments. */
if (mb == NULL) {
/* report an error. */
diff --git a/lib/ipsec/ipsec_sad.c b/lib/ipsec/ipsec_sad.c
index 531e1e32..8548e2cf 100644
--- a/lib/ipsec/ipsec_sad.c
+++ b/lib/ipsec/ipsec_sad.c
@@ -69,14 +69,14 @@ add_specific(struct rte_ipsec_sad *sad, const void *key,
int key_type, void *sa)
{
void *tmp_val;
- int ret, notexist;
+ int ret, nonexistent;
/* Check if the key is present in the table.
- * Need for further accaunting in cnt_arr
+ * Needed for further accounting in cnt_arr
*/
ret = rte_hash_lookup_with_hash(sad->hash[key_type], key,
rte_hash_crc(key, sad->keysize[key_type], sad->init_val));
- notexist = (ret == -ENOENT);
+ nonexistent = (ret == -ENOENT);
/* Add an SA to the corresponding table.*/
ret = rte_hash_add_key_with_hash_data(sad->hash[key_type], key,
@@ -107,9 +107,9 @@ add_specific(struct rte_ipsec_sad *sad, const void *key,
if (ret < 0)
return ret;
if (key_type == RTE_IPSEC_SAD_SPI_DIP)
- sad->cnt_arr[ret].cnt_dip += notexist;
+ sad->cnt_arr[ret].cnt_dip += nonexistent;
else
- sad->cnt_arr[ret].cnt_dip_sip += notexist;
+ sad->cnt_arr[ret].cnt_dip_sip += nonexistent;
return 0;
}
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index 16fe03f8..c97c9db4 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -106,7 +106,7 @@ typedef struct VhostUserCryptoSessionParam {
uint8_t dir;
uint8_t hash_mode;
uint8_t chaining_dir;
- uint8_t *ciphe_key;
+ uint8_t *cipher_key;
uint8_t *auth_key;
uint8_t cipher_key_buf[VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH];
uint8_t auth_key_buf[VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH];
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index b3d954aa..28a4dc1b 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -477,14 +477,14 @@ map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
while (desc_len) {
uint64_t desc_addr;
- uint64_t desc_chunck_len = desc_len;
+ uint64_t desc_chunk_len = desc_len;
if (unlikely(vec_id >= BUF_VECTOR_MAX))
return -1;
desc_addr = vhost_iova_to_vva(dev, vq,
desc_iova,
- &desc_chunck_len,
+ &desc_chunk_len,
perm);
if (unlikely(!desc_addr))
return -1;
@@ -493,10 +493,10 @@ map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
buf_vec[vec_id].buf_iova = desc_iova;
buf_vec[vec_id].buf_addr = desc_addr;
- buf_vec[vec_id].buf_len = desc_chunck_len;
+ buf_vec[vec_id].buf_len = desc_chunk_len;
- desc_len -= desc_chunck_len;
- desc_iova += desc_chunck_len;
+ desc_len -= desc_chunk_len;
+ desc_iova += desc_chunk_len;
vec_id++;
}
*vec_idx = vec_id;
--
2.32.0 (Apple Git-132)
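[Editorial addendum: for readers curious how this class of sweep works, below is a minimal, illustrative sketch of a mechanical misspelling pass. The fix table is drawn from corrections actually made in this patch; the scanning logic itself is an assumption for illustration and is NOT the author's actual tool (which comes from https://github.com/jsoref). Note that, as the dmacount/byt hunks show, such a pass still needs human review: a dictionary-driven tool can "correct" an identifier that was never misspelled.]

```python
# Illustrative sketch only -- not the tool used to generate this patch.
# The misspelling->correction table below is taken from hunks in the patch;
# the word-boundary scan is a simplified stand-in for the real checker.
import re

# misspelling -> correction, drawn from this patch's hunks
FIXES = {
    "chunck": "chunk",
    "couners": "counters",
    "tickes": "ticks",
    "popluate": "populate",
    "compiller": "compiler",
    "instrinsics": "intrinsics",
    "polinomials": "polynomials",
    "errorenous": "erroneous",
}

_WORD = re.compile(r"[A-Za-z]+")


def fix_spelling(text: str) -> str:
    """Replace known misspellings wherever they appear as a letter-run,
    preserving capitalization of the first letter. Because underscores
    break letter-runs, identifiers like desc_chunck_len are fixed too."""
    def repl(m: re.Match) -> str:
        word = m.group(0)
        fix = FIXES.get(word.lower())
        if fix is None:
            return word
        return fix.capitalize() if word[0].isupper() else fix

    return _WORD.sub(repl, text)


if __name__ == "__main__":
    line = "uint64_t desc_chunck_len; /* make compiller happy */"
    # -> uint64_t desc_chunk_len; /* make compiler happy */
    print(fix_spelling(line))
```

A sweep like this is purely lexical: it cannot tell that "dmacount" is the hardware DMACOUNT register rather than a typo, which is exactly why the hunks above warranted reviewer pushback.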
Thread overview: 5+ messages
2022-01-12 7:28 [PATCH 0/1] Spelling code fixes* Josh Soref
2022-01-12 7:28 ` Josh Soref [this message]
2022-01-12 11:46 ` [PATCH 1/1] fix spelling in code Thomas Monjalon
2022-01-12 11:49 ` [PATCH 0/1] Spelling code fixes* Thomas Monjalon
2022-01-12 12:48 ` Josh Soref