From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Ciara Loftus <ciara.loftus@intel.com>,
Steven Webster <steven.webster@windriver.com>,
Matt Peters <matt.peters@windriver.com>,
Selwin Sebastian <selwin.sebastian@amd.com>,
Julien Aube <julien_dpdk@jaube.fr>,
Ajit Khaparde <ajit.khaparde@broadcom.com>,
Somnath Kotur <somnath.kotur@broadcom.com>,
Chas Williams <chas3@att.com>,
"Min Hu (Connor)" <humin29@huawei.com>,
Nithin Dabilpuram <ndabilpuram@marvell.com>,
Kiran Kumar K <kirankumark@marvell.com>,
Sunil Kumar Kori <skori@marvell.com>,
Satha Rao <skoteshwar@marvell.com>,
Harman Kalra <hkalra@marvell.com>,
Yuying Zhang <yuying.zhang@intel.com>,
Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
Hemant Agrawal <hemant.agrawal@nxp.com>,
Sachin Saxena <sachin.saxena@nxp.com>,
John Daley <johndale@cisco.com>,
Hyong Youb Kim <hyonkim@cisco.com>, Gaetan Rivet <grive@u256.net>,
Jeroen de Borst <jeroendb@google.com>,
Rushil Gupta <rushilg@google.com>,
Joshua Washington <joshwash@google.com>,
Ziyang Xuan <xuanziyang2@huawei.com>,
Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
Guoyang Zhou <zhouguoyang@huawei.com>,
Jie Hai <haijie1@huawei.com>,
Yisen Zhuang <yisen.zhuang@huawei.com>,
Jingjing Wu <jingjing.wu@intel.com>,
Rosen Xu <rosen.xu@intel.com>,
Jakub Grajciar <jgrajcia@cisco.com>,
Dariusz Sosnowski <dsosnowski@nvidia.com>,
Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>, Liron Himi <lironh@marvell.com>,
Long Li <longli@microsoft.com>,
Chaoyong He <chaoyong.he@corigine.com>,
Jiawen Wu <jiawenwu@trustnetic.com>,
Tetsuya Mukawa <mtetsuyah@gmail.com>,
Devendra Singh Rawat <dsinghrawat@marvell.com>,
Alok Prasad <palok@marvell.com>,
Bruce Richardson <bruce.richardson@intel.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
Jian Wang <jianwang@trustnetic.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
Chenbo Xia <chenbox@nvidia.com>
Subject: [PATCH v4 15/30] net: replace use of fixed size rte_memcpy
Date: Fri, 5 Apr 2024 09:53:26 -0700
Message-ID: <20240405165518.367503-16-stephen@networkplumber.org>
In-Reply-To: <20240405165518.367503-1-stephen@networkplumber.org>
Automatically generated by devtools/cocci/rte_memcpy.cocci.

Also remove includes of rte_memcpy.h that are no longer needed.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/af_xdp/rte_eth_af_xdp.c | 2 +-
drivers/net/avp/avp_ethdev.c | 4 +-
drivers/net/axgbe/axgbe_ethdev.c | 4 +-
drivers/net/bnx2x/bnx2x.c | 32 +++--
drivers/net/bnxt/bnxt_flow.c | 34 +++---
drivers/net/bonding/rte_eth_bond_8023ad.c | 4 +-
drivers/net/bonding/rte_eth_bond_flow.c | 2 +-
drivers/net/cnxk/cnxk_eswitch_devargs.c | 3 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 2 +-
drivers/net/cnxk/cnxk_rep.c | 3 +-
drivers/net/cnxk/cnxk_rep_flow.c | 6 +-
drivers/net/cnxk/cnxk_rep_msg.c | 8 +-
drivers/net/cnxk/cnxk_rep_ops.c | 2 +-
drivers/net/cnxk/cnxk_tm.c | 5 +-
drivers/net/cpfl/cpfl_ethdev.c | 3 +-
drivers/net/cpfl/cpfl_vchnl.c | 4 +-
drivers/net/cxgbe/clip_tbl.c | 2 +-
drivers/net/cxgbe/cxgbe_filter.c | 8 +-
drivers/net/cxgbe/l2t.c | 4 +-
drivers/net/cxgbe/smt.c | 20 ++--
drivers/net/dpaa2/base/dpaa2_hw_dpni.c | 1 -
drivers/net/dpaa2/dpaa2_ethdev.c | 1 -
drivers/net/dpaa2/dpaa2_recycle.c | 1 -
drivers/net/dpaa2/dpaa2_rxtx.c | 1 -
drivers/net/dpaa2/dpaa2_sparser.c | 1 -
drivers/net/dpaa2/dpaa2_tm.c | 2 +-
drivers/net/e1000/em_rxtx.c | 1 -
drivers/net/e1000/igb_flow.c | 22 ++--
drivers/net/e1000/igb_pf.c | 7 +-
drivers/net/e1000/igb_rxtx.c | 1 -
drivers/net/enic/enic_main.c | 8 +-
drivers/net/failsafe/failsafe_ops.c | 6 +-
drivers/net/gve/base/gve_adminq.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_flow.c | 40 +++----
drivers/net/hns3/hns3_fdir.c | 2 +-
drivers/net/hns3/hns3_flow.c | 4 +-
drivers/net/i40e/i40e_ethdev.c | 109 ++++++++----------
drivers/net/i40e/i40e_fdir.c | 28 +++--
drivers/net/i40e/i40e_flow.c | 56 +++++----
drivers/net/i40e/i40e_pf.c | 3 +-
drivers/net/i40e/i40e_tm.c | 11 +-
drivers/net/i40e/rte_pmd_i40e.c | 34 +++---
drivers/net/iavf/iavf_fdir.c | 93 +++++++--------
drivers/net/iavf/iavf_fsub.c | 50 ++++----
drivers/net/iavf/iavf_generic_flow.c | 2 +-
drivers/net/iavf/iavf_tm.c | 11 +-
drivers/net/iavf/iavf_vchnl.c | 9 +-
drivers/net/ice/ice_dcf.c | 5 +-
drivers/net/ice/ice_dcf_parent.c | 2 +-
drivers/net/ice/ice_dcf_sched.c | 11 +-
drivers/net/ice/ice_diagnose.c | 4 +-
drivers/net/ice/ice_ethdev.c | 14 +--
drivers/net/ice/ice_fdir_filter.c | 37 +++---
drivers/net/ice/ice_generic_flow.c | 2 +-
drivers/net/ice/ice_hash.c | 2 +-
drivers/net/ice/ice_tm.c | 11 +-
drivers/net/idpf/idpf_ethdev.c | 7 +-
drivers/net/idpf/idpf_rxtx.c | 10 +-
drivers/net/ipn3ke/ipn3ke_flow.c | 32 +++--
drivers/net/ipn3ke/ipn3ke_representor.c | 16 +--
drivers/net/ipn3ke/ipn3ke_tm.c | 6 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 9 +-
drivers/net/ixgbe/ixgbe_fdir.c | 7 +-
drivers/net/ixgbe/ixgbe_flow.c | 65 +++++------
drivers/net/ixgbe/ixgbe_ipsec.c | 8 +-
drivers/net/ixgbe/ixgbe_pf.c | 5 +-
drivers/net/ixgbe/ixgbe_tm.c | 11 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 4 +-
drivers/net/memif/memif_socket.c | 4 +-
drivers/net/mlx5/mlx5_devx.c | 4 +-
drivers/net/mlx5/mlx5_flow.c | 38 +++---
drivers/net/mlx5/mlx5_flow_aso.c | 6 +-
drivers/net/mlx5/mlx5_flow_hw.c | 27 ++---
drivers/net/mlx5/mlx5_rx.c | 6 +-
drivers/net/mlx5/mlx5_rxtx_vec.c | 8 +-
drivers/net/mvpp2/mrvl_tm.c | 2 +-
drivers/net/netvsc/hn_ethdev.c | 1 -
drivers/net/nfp/flower/nfp_conntrack.c | 2 +-
drivers/net/nfp/flower/nfp_flower_flow.c | 16 +--
.../net/nfp/flower/nfp_flower_representor.c | 2 +-
drivers/net/nfp/nfp_mtr.c | 10 +-
drivers/net/ngbe/ngbe_pf.c | 4 +-
drivers/net/null/rte_eth_null.c | 6 +-
drivers/net/pcap/pcap_ethdev.c | 2 +-
drivers/net/pcap/pcap_osdep_freebsd.c | 3 +-
drivers/net/pcap/pcap_osdep_linux.c | 3 +-
drivers/net/qede/qede_main.c | 2 +-
drivers/net/ring/rte_eth_ring.c | 1 -
drivers/net/sfc/sfc.c | 2 +-
drivers/net/sfc/sfc_ef10_tx.c | 2 +-
drivers/net/sfc/sfc_ethdev.c | 11 +-
drivers/net/sfc/sfc_flow.c | 20 ++--
| 2 +-
drivers/net/sfc/sfc_mae.c | 2 +-
drivers/net/sfc/sfc_rx.c | 2 +-
drivers/net/sfc/sfc_tso.c | 2 +-
drivers/net/sfc/sfc_tso.h | 9 +-
drivers/net/tap/rte_eth_tap.c | 14 +--
drivers/net/txgbe/txgbe_ethdev.c | 9 +-
drivers/net/txgbe/txgbe_fdir.c | 6 +-
drivers/net/txgbe/txgbe_flow.c | 65 +++++------
drivers/net/txgbe/txgbe_ipsec.c | 8 +-
drivers/net/txgbe/txgbe_pf.c | 5 +-
drivers/net/txgbe/txgbe_tm.c | 11 +-
drivers/net/vhost/rte_eth_vhost.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
107 files changed, 582 insertions(+), 664 deletions(-)
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 268a130c49..6977516613 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -2094,7 +2094,7 @@ get_iface_info(const char *if_name,
if (ioctl(sock, SIOCGIFHWADDR, &ifr))
goto error;
- rte_memcpy(eth_addr, ifr.ifr_hwaddr.sa_data, RTE_ETHER_ADDR_LEN);
+ memcpy(eth_addr, ifr.ifr_hwaddr.sa_data, RTE_ETHER_ADDR_LEN);
close(sock);
return 0;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 53d9e38c93..9bd0530172 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -248,7 +248,7 @@ avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
while (avp_fifo_get(avp->resp_q, (void **)&resp_addr, 1))
PMD_DRV_LOG(DEBUG, "Discarding stale response\n");
- rte_memcpy(avp->sync_addr, request, sizeof(*request));
+ memcpy(avp->sync_addr, request, sizeof(*request));
count = avp_fifo_put(avp->req_q, &avp->host_sync_addr, 1);
if (count < 1) {
PMD_DRV_LOG(ERR, "Cannot send request %u to host\n",
@@ -285,7 +285,7 @@ avp_dev_process_request(struct avp_dev *avp, struct rte_avp_request *request)
}
/* copy to user buffer */
- rte_memcpy(request, avp->sync_addr, sizeof(*request));
+ memcpy(request, avp->sync_addr, sizeof(*request));
ret = 0;
PMD_DRV_LOG(DEBUG, "Result %d received for request %u\n",
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index dd681f15a0..7ac30106e3 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -594,7 +594,7 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
if (rss_conf->rss_key != NULL &&
rss_conf->rss_key_len == AXGBE_RSS_HASH_KEY_SIZE) {
- rte_memcpy(pdata->rss_key, rss_conf->rss_key,
+ memcpy(pdata->rss_key, rss_conf->rss_key,
AXGBE_RSS_HASH_KEY_SIZE);
/* Program the hash key */
ret = axgbe_write_rss_hash_key(pdata);
@@ -637,7 +637,7 @@ axgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
if (rss_conf->rss_key != NULL &&
rss_conf->rss_key_len >= AXGBE_RSS_HASH_KEY_SIZE) {
- rte_memcpy(rss_conf->rss_key, pdata->rss_key,
+ memcpy(rss_conf->rss_key, pdata->rss_key,
AXGBE_RSS_HASH_KEY_SIZE);
}
rss_conf->rss_key_len = AXGBE_RSS_HASH_KEY_SIZE;
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 597ee43359..16a9ff7f8c 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2242,18 +2242,18 @@ int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
tx_parse_bd->parsing_data =
(mac_type << ETH_TX_PARSE_BD_E2_ETH_ADDR_TYPE_SHIFT);
- rte_memcpy(&tx_parse_bd->data.mac_addr.dst_hi,
- &eh->dst_addr.addr_bytes[0], 2);
- rte_memcpy(&tx_parse_bd->data.mac_addr.dst_mid,
- &eh->dst_addr.addr_bytes[2], 2);
- rte_memcpy(&tx_parse_bd->data.mac_addr.dst_lo,
- &eh->dst_addr.addr_bytes[4], 2);
- rte_memcpy(&tx_parse_bd->data.mac_addr.src_hi,
- &eh->src_addr.addr_bytes[0], 2);
- rte_memcpy(&tx_parse_bd->data.mac_addr.src_mid,
- &eh->src_addr.addr_bytes[2], 2);
- rte_memcpy(&tx_parse_bd->data.mac_addr.src_lo,
- &eh->src_addr.addr_bytes[4], 2);
+ memcpy(&tx_parse_bd->data.mac_addr.dst_hi,
+ &eh->dst_addr.addr_bytes[0], 2);
+ memcpy(&tx_parse_bd->data.mac_addr.dst_mid,
+ &eh->dst_addr.addr_bytes[2], 2);
+ memcpy(&tx_parse_bd->data.mac_addr.dst_lo,
+ &eh->dst_addr.addr_bytes[4], 2);
+ memcpy(&tx_parse_bd->data.mac_addr.src_hi,
+ &eh->src_addr.addr_bytes[0], 2);
+ memcpy(&tx_parse_bd->data.mac_addr.src_mid,
+ &eh->src_addr.addr_bytes[2], 2);
+ memcpy(&tx_parse_bd->data.mac_addr.src_lo,
+ &eh->src_addr.addr_bytes[4], 2);
tx_parse_bd->data.mac_addr.dst_hi =
rte_cpu_to_be_16(tx_parse_bd->data.mac_addr.dst_hi);
@@ -6675,8 +6675,7 @@ bnx2x_config_rss_pf(struct bnx2x_softc *sc, struct ecore_rss_config_obj *rss_obj
/* Hash bits */
params.rss_result_mask = MULTI_MASK;
- rte_memcpy(params.ind_table, rss_obj->ind_table,
- sizeof(params.ind_table));
+ memcpy(params.ind_table, rss_obj->ind_table, sizeof(params.ind_table));
if (config_hash) {
/* RSS keys */
@@ -6742,8 +6741,7 @@ bnx2x_set_mac_one(struct bnx2x_softc *sc, uint8_t * mac,
/* fill a user request section if needed */
if (!rte_bit_relaxed_get32(RAMROD_CONT, ramrod_flags)) {
- rte_memcpy(ramrod_param.user_req.u.mac.mac, mac,
- ETH_ALEN);
+ memcpy(ramrod_param.user_req.u.mac.mac, mac, ETH_ALEN);
rte_bit_relaxed_set32(mac_type,
&ramrod_param.user_req.vlan_mac_flags);
@@ -6958,7 +6956,7 @@ static void bnx2x_link_report_locked(struct bnx2x_softc *sc)
ELINK_DEBUG_P1(sc, "link status change count = %x", sc->link_cnt);
/* report new link params and remember the state for the next time */
- rte_memcpy(&sc->last_reported_link, &cur_data, sizeof(cur_data));
+ memcpy(&sc->last_reported_link, &cur_data, sizeof(cur_data));
if (rte_bit_relaxed_get32(BNX2X_LINK_REPORT_LINK_DOWN,
&cur_data.link_report_flags)) {
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index f25bc6ff78..6466aa394a 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -233,8 +233,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
"DMAC is invalid!\n");
return -rte_errno;
}
- rte_memcpy(filter->dst_macaddr,
- &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(filter->dst_macaddr,
+ &eth_spec->hdr.dst_addr,
+ RTE_ETHER_ADDR_LEN);
en |= use_ntuple ?
NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
@@ -257,8 +258,9 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
"SMAC is invalid!\n");
return -rte_errno;
}
- rte_memcpy(filter->src_macaddr,
- &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(filter->src_macaddr,
+ &eth_spec->hdr.src_addr,
+ RTE_ETHER_ADDR_LEN);
en |= use_ntuple ?
NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
@@ -423,23 +425,23 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
- rte_memcpy(filter->src_ipaddr,
- ipv6_spec->hdr.src_addr, 16);
- rte_memcpy(filter->dst_ipaddr,
- ipv6_spec->hdr.dst_addr, 16);
+ memcpy(filter->src_ipaddr, ipv6_spec->hdr.src_addr,
+ 16);
+ memcpy(filter->dst_ipaddr, ipv6_spec->hdr.dst_addr,
+ 16);
if (!bnxt_check_zero_bytes(ipv6_mask->hdr.src_addr,
16)) {
- rte_memcpy(filter->src_ipaddr_mask,
- ipv6_mask->hdr.src_addr, 16);
+ memcpy(filter->src_ipaddr_mask,
+ ipv6_mask->hdr.src_addr, 16);
en |= !use_ntuple ? 0 :
NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
}
if (!bnxt_check_zero_bytes(ipv6_mask->hdr.dst_addr,
16)) {
- rte_memcpy(filter->dst_ipaddr_mask,
- ipv6_mask->hdr.dst_addr, 16);
+ memcpy(filter->dst_ipaddr_mask,
+ ipv6_mask->hdr.dst_addr, 16);
en |= !use_ntuple ? 0 :
NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
}
@@ -591,8 +593,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
return -rte_errno;
}
- rte_memcpy(((uint8_t *)&tenant_id_be + 1),
- vxlan_spec->hdr.vni, 3);
+ memcpy(((uint8_t *)&tenant_id_be + 1),
+ vxlan_spec->hdr.vni, 3);
filter->vni =
rte_be_to_cpu_32(tenant_id_be);
filter->tunnel_type =
@@ -645,8 +647,8 @@ bnxt_validate_and_parse_flow_type(const struct rte_flow_attr *attr,
"Invalid TNI mask");
return -rte_errno;
}
- rte_memcpy(((uint8_t *)&tenant_id_be + 1),
- nvgre_spec->tni, 3);
+ memcpy(((uint8_t *)&tenant_id_be + 1),
+ nvgre_spec->tni, 3);
filter->vni =
rte_be_to_cpu_32(tenant_id_be);
filter->tunnel_type =
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 79f1b3f1a0..8ddf5dc80a 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -1539,10 +1539,10 @@ rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
info->selected = port->selected;
info->actor_state = port->actor_state;
- rte_memcpy(&info->actor, &port->actor, sizeof(port->actor));
+ memcpy(&info->actor, &port->actor, sizeof(port->actor));
info->partner_state = port->partner_state;
- rte_memcpy(&info->partner, &port->partner, sizeof(port->partner));
+ memcpy(&info->partner, &port->partner, sizeof(port->partner));
info->agg_port_id = port->aggregator_port_id;
return 0;
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 5d0be5caf5..bb9d347e2b 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -182,7 +182,7 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
count->hits = 0;
count->bytes_set = 0;
count->hits_set = 0;
- rte_memcpy(&member_count, count, sizeof(member_count));
+ memcpy(&member_count, count, sizeof(member_count));
for (i = 0; i < internals->member_count; i++) {
ret = rte_flow_query(internals->members[i].port_id,
flow->flows[i], action,
diff --git a/drivers/net/cnxk/cnxk_eswitch_devargs.c b/drivers/net/cnxk/cnxk_eswitch_devargs.c
index 8167ce673a..70045c58c1 100644
--- a/drivers/net/cnxk/cnxk_eswitch_devargs.c
+++ b/drivers/net/cnxk/cnxk_eswitch_devargs.c
@@ -112,7 +112,8 @@ cnxk_eswitch_repr_devargs(struct rte_pci_device *pci_dev, struct cnxk_eswitch_de
goto fail;
}
- rte_memcpy(&eswitch_dev->esw_da[j].da, &eth_da[i], sizeof(struct rte_eth_devargs));
+ memcpy(&eswitch_dev->esw_da[j].da, &eth_da[i],
+ sizeof(struct rte_eth_devargs));
/* No of representor ports to be created */
eswitch_dev->repr_cnt.nb_repr_created += eswitch_dev->esw_da[j].nb_repr_ports;
j++;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index c8260fcb9c..4366ef44d2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -454,7 +454,7 @@ cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
}
/* Update mac address to cnxk ethernet device */
- rte_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
exit:
return rc;
diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
index ca0637bde5..35f6be63a6 100644
--- a/drivers/net/cnxk/cnxk_rep.c
+++ b/drivers/net/cnxk/cnxk_rep.c
@@ -383,7 +383,8 @@ cnxk_representee_notification(void *roc_nix, struct roc_eswitch_repte_notify_msg
goto done;
}
- rte_memcpy(msg->notify_msg, notify_msg, sizeof(struct roc_eswitch_repte_notify_msg));
+ memcpy(msg->notify_msg, notify_msg,
+ sizeof(struct roc_eswitch_repte_notify_msg));
plt_rep_dbg("Pushing new notification : msg type %d", msg->notify_msg->type);
pthread_mutex_lock(&eswitch_dev->repte_msg_proc.mutex);
TAILQ_INSERT_TAIL(&repte_msg_proc->msg_list, msg, next);
diff --git a/drivers/net/cnxk/cnxk_rep_flow.c b/drivers/net/cnxk/cnxk_rep_flow.c
index d26f5aa12c..9f09e6a7f0 100644
--- a/drivers/net/cnxk/cnxk_rep_flow.c
+++ b/drivers/net/cnxk/cnxk_rep_flow.c
@@ -81,7 +81,8 @@ prepare_pattern_data(const struct rte_flow_item *pattern, uint16_t nb_pattern,
hdr.mask_sz = term[pattern->type].item_size;
}
- rte_memcpy(RTE_PTR_ADD(pattern_data, len), &hdr, sizeof(cnxk_pattern_hdr_t));
+ memcpy(RTE_PTR_ADD(pattern_data, len), &hdr,
+ sizeof(cnxk_pattern_hdr_t));
len += sizeof(cnxk_pattern_hdr_t);
/* Copy pattern spec data */
@@ -228,7 +229,8 @@ prepare_action_data(const struct rte_flow_action *action, uint16_t nb_action, ui
hdr.type = action->type;
hdr.conf_sz = sz;
- rte_memcpy(RTE_PTR_ADD(action_data, len), &hdr, sizeof(cnxk_action_hdr_t));
+ memcpy(RTE_PTR_ADD(action_data, len), &hdr,
+ sizeof(cnxk_action_hdr_t));
len += sizeof(cnxk_action_hdr_t);
/* Copy action conf data */
diff --git a/drivers/net/cnxk/cnxk_rep_msg.c b/drivers/net/cnxk/cnxk_rep_msg.c
index f3a62a805e..76f07a51de 100644
--- a/drivers/net/cnxk/cnxk_rep_msg.c
+++ b/drivers/net/cnxk/cnxk_rep_msg.c
@@ -58,7 +58,7 @@ receive_control_message(int socketfd, void *data, uint32_t len)
if (cmsg->cmsg_type == SCM_CREDENTIALS) {
cr = (struct ucred *)CMSG_DATA(cmsg);
} else if (cmsg->cmsg_type == SCM_RIGHTS) {
- rte_memcpy(&afd, CMSG_DATA(cmsg), sizeof(int));
+ memcpy(&afd, CMSG_DATA(cmsg), sizeof(int));
plt_rep_dbg("afd %d", afd);
}
}
@@ -90,7 +90,7 @@ send_message_on_socket(int socketfd, void *data, uint32_t len, int afd)
cmsg->cmsg_len = CMSG_LEN(sizeof(int));
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
- rte_memcpy(CMSG_DATA(cmsg), &afd, sizeof(int));
+ memcpy(CMSG_DATA(cmsg), &afd, sizeof(int));
}
size = sendmsg(socketfd, &mh, MSG_DONTWAIT);
@@ -198,7 +198,7 @@ cnxk_rep_msg_populate_type(void *buffer, uint32_t *length, cnxk_type_t type, uin
data.length = sz;
/* Populate the type data */
- rte_memcpy(RTE_PTR_ADD(buffer, len), &data, sizeof(cnxk_type_data_t));
+ memcpy(RTE_PTR_ADD(buffer, len), &data, sizeof(cnxk_type_data_t));
len += sizeof(cnxk_type_data_t);
*length = len;
@@ -218,7 +218,7 @@ cnxk_rep_msg_populate_header(void *buffer, uint32_t *length)
hdr.signature = CTRL_MSG_SIGNATURE;
/* Populate header data */
- rte_memcpy(RTE_PTR_ADD(buffer, len), &hdr, sizeof(cnxk_header_t));
+ memcpy(RTE_PTR_ADD(buffer, len), &hdr, sizeof(cnxk_header_t));
len += sizeof(cnxk_header_t);
*length = len;
diff --git a/drivers/net/cnxk/cnxk_rep_ops.c b/drivers/net/cnxk/cnxk_rep_ops.c
index 8bcb689468..cd9ae52f99 100644
--- a/drivers/net/cnxk/cnxk_rep_ops.c
+++ b/drivers/net/cnxk/cnxk_rep_ops.c
@@ -677,7 +677,7 @@ cnxk_rep_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
cnxk_rep_msg_populate_header(buffer, &len);
msg_sm_meta.portid = rep_dev->rep_id;
- rte_memcpy(&msg_sm_meta.addr_bytes, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(&msg_sm_meta.addr_bytes, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
cnxk_rep_msg_populate_command_meta(buffer, &len, &msg_sm_meta,
sizeof(cnxk_rep_msg_eth_set_mac_meta_t),
CNXK_REP_MSG_ETH_SET_MAC);
diff --git a/drivers/net/cnxk/cnxk_tm.c b/drivers/net/cnxk/cnxk_tm.c
index c799193cb8..5c8b0997ca 100644
--- a/drivers/net/cnxk/cnxk_tm.c
+++ b/drivers/net/cnxk/cnxk_tm.c
@@ -300,8 +300,7 @@ cnxk_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev, uint32_t id,
profile->profile.pkt_len_adj = params->pkt_length_adjust;
profile->profile.pkt_mode = params->packet_mode;
profile->profile.free_fn = rte_free;
- rte_memcpy(&profile->params, params,
- sizeof(struct rte_tm_shaper_params));
+ memcpy(&profile->params, params, sizeof(struct rte_tm_shaper_params));
rc = roc_nix_tm_shaper_profile_add(nix, &profile->profile);
@@ -373,7 +372,7 @@ cnxk_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
if (!node)
return -ENOMEM;
- rte_memcpy(&node->params, params, sizeof(struct rte_tm_node_params));
+ memcpy(&node->params, params, sizeof(struct rte_tm_node_params));
node->nix_node.id = node_id;
node->nix_node.parent_id = parent_node_id;
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ef19aa1b6a..1037aec68d 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2292,7 +2292,8 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
- rte_memcpy(&base->caps, &req_caps, sizeof(struct virtchnl2_get_capabilities));
+ memcpy(&base->caps, &req_caps,
+ sizeof(struct virtchnl2_get_capabilities));
ret = idpf_adapter_init(base);
if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_vchnl.c b/drivers/net/cpfl/cpfl_vchnl.c
index 7d277a0e8e..e914014d8a 100644
--- a/drivers/net/cpfl/cpfl_vchnl.c
+++ b/drivers/net/cpfl/cpfl_vchnl.c
@@ -32,7 +32,7 @@ cpfl_cc_vport_list_get(struct cpfl_adapter_ext *adapter,
return err;
}
- rte_memcpy(response, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
+ memcpy(response, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
return 0;
}
@@ -66,7 +66,7 @@ cpfl_cc_vport_info_get(struct cpfl_adapter_ext *adapter,
return err;
}
- rte_memcpy(response, args.out_buffer, sizeof(*response));
+ memcpy(response, args.out_buffer, sizeof(*response));
return 0;
}
diff --git a/drivers/net/cxgbe/clip_tbl.c b/drivers/net/cxgbe/clip_tbl.c
index b709e26f6a..d30fa6425f 100644
--- a/drivers/net/cxgbe/clip_tbl.c
+++ b/drivers/net/cxgbe/clip_tbl.c
@@ -115,7 +115,7 @@ static struct clip_entry *t4_clip_alloc(struct rte_eth_dev *dev,
if (ce) {
t4_os_lock(&ce->lock);
if (__atomic_load_n(&ce->refcnt, __ATOMIC_RELAXED) == 0) {
- rte_memcpy(ce->addr, lip, sizeof(ce->addr));
+ memcpy(ce->addr, lip, sizeof(ce->addr));
if (v6) {
ce->type = FILTER_TYPE_IPV6;
__atomic_store_n(&ce->refcnt, 1,
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 5a7efe7a73..3d1d087ec2 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -851,10 +851,10 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
fwr->ivlanm = cpu_to_be16(f->fs.mask.ivlan);
fwr->ovlan = cpu_to_be16(f->fs.val.ovlan);
fwr->ovlanm = cpu_to_be16(f->fs.mask.ovlan);
- rte_memcpy(fwr->lip, f->fs.val.lip, sizeof(fwr->lip));
- rte_memcpy(fwr->lipm, f->fs.mask.lip, sizeof(fwr->lipm));
- rte_memcpy(fwr->fip, f->fs.val.fip, sizeof(fwr->fip));
- rte_memcpy(fwr->fipm, f->fs.mask.fip, sizeof(fwr->fipm));
+ memcpy(fwr->lip, f->fs.val.lip, sizeof(fwr->lip));
+ memcpy(fwr->lipm, f->fs.mask.lip, sizeof(fwr->lipm));
+ memcpy(fwr->fip, f->fs.val.fip, sizeof(fwr->fip));
+ memcpy(fwr->fipm, f->fs.mask.fip, sizeof(fwr->fipm));
fwr->lp = cpu_to_be16(f->fs.val.lport);
fwr->lpm = cpu_to_be16(f->fs.mask.lport);
fwr->fp = cpu_to_be16(f->fs.val.fport);
diff --git a/drivers/net/cxgbe/l2t.c b/drivers/net/cxgbe/l2t.c
index 21f4019ae6..7721c7953e 100644
--- a/drivers/net/cxgbe/l2t.c
+++ b/drivers/net/cxgbe/l2t.c
@@ -82,7 +82,7 @@ static int write_l2e(struct rte_eth_dev *dev, struct l2t_entry *e, int sync,
V_L2T_W_NOREPLY(!sync));
req->l2t_idx = cpu_to_be16(l2t_idx);
req->vlan = cpu_to_be16(e->vlan);
- rte_memcpy(req->dst_mac, e->dmac, RTE_ETHER_ADDR_LEN);
+ memcpy(req->dst_mac, e->dmac, RTE_ETHER_ADDR_LEN);
if (loopback)
memset(req->dst_mac, 0, RTE_ETHER_ADDR_LEN);
@@ -155,7 +155,7 @@ static struct l2t_entry *t4_l2t_alloc_switching(struct rte_eth_dev *dev,
e->state = L2T_STATE_SWITCHING;
e->vlan = vlan;
e->lport = port;
- rte_memcpy(e->dmac, eth_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(e->dmac, eth_addr, RTE_ETHER_ADDR_LEN);
__atomic_store_n(&e->refcnt, 1, __ATOMIC_RELAXED);
ret = write_l2e(dev, e, 0, !L2T_LPBK, !L2T_ARPMISS);
if (ret < 0)
diff --git a/drivers/net/cxgbe/smt.c b/drivers/net/cxgbe/smt.c
index 4e14a73753..a74b2e0794 100644
--- a/drivers/net/cxgbe/smt.c
+++ b/drivers/net/cxgbe/smt.c
@@ -55,26 +55,24 @@ static int write_smt_entry(struct rte_eth_dev *dev, struct smt_entry *e)
*/
if (e->idx & 1) {
req->pfvf1 = 0x0;
- rte_memcpy(req->src_mac1, e->src_mac,
- RTE_ETHER_ADDR_LEN);
+ memcpy(req->src_mac1, e->src_mac, RTE_ETHER_ADDR_LEN);
/* fill pfvf0/src_mac0 with entry
* at prev index from smt-tab.
*/
req->pfvf0 = 0x0;
- rte_memcpy(req->src_mac0, s->smtab[e->idx - 1].src_mac,
- RTE_ETHER_ADDR_LEN);
+ memcpy(req->src_mac0, s->smtab[e->idx - 1].src_mac,
+ RTE_ETHER_ADDR_LEN);
} else {
req->pfvf0 = 0x0;
- rte_memcpy(req->src_mac0, e->src_mac,
- RTE_ETHER_ADDR_LEN);
+ memcpy(req->src_mac0, e->src_mac, RTE_ETHER_ADDR_LEN);
/* fill pfvf1/src_mac1 with entry
* at next index from smt-tab
*/
req->pfvf1 = 0x0;
- rte_memcpy(req->src_mac1, s->smtab[e->idx + 1].src_mac,
- RTE_ETHER_ADDR_LEN);
+ memcpy(req->src_mac1, s->smtab[e->idx + 1].src_mac,
+ RTE_ETHER_ADDR_LEN);
}
row = (e->hw_idx >> 1);
} else {
@@ -87,8 +85,8 @@ static int write_smt_entry(struct rte_eth_dev *dev, struct smt_entry *e)
/* fill pfvf0/src_mac0 from smt-tab */
t6req->pfvf0 = 0x0;
- rte_memcpy(t6req->src_mac0, s->smtab[e->idx].src_mac,
- RTE_ETHER_ADDR_LEN);
+ memcpy(t6req->src_mac0, s->smtab[e->idx].src_mac,
+ RTE_ETHER_ADDR_LEN);
row = e->hw_idx;
req = (struct cpl_smt_write_req *)t6req;
}
@@ -158,7 +156,7 @@ static struct smt_entry *t4_smt_alloc_switching(struct rte_eth_dev *dev,
t4_os_lock(&e->lock);
if (__atomic_load_n(&e->refcnt, __ATOMIC_RELAXED) == 0) {
e->pfvf = pfvf;
- rte_memcpy(e->src_mac, smac, RTE_ETHER_ADDR_LEN);
+ memcpy(e->src_mac, smac, RTE_ETHER_ADDR_LEN);
ret = write_smt_entry(dev, e);
if (ret) {
e->pfvf = 0;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 4d33b51fea..747fa40532 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -11,7 +11,6 @@
#include <rte_mbuf.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include <rte_cycles.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 97edc00420..5799770fde 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -11,7 +11,6 @@
#include <rte_mbuf.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include <rte_cycles.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/dpaa2/dpaa2_recycle.c b/drivers/net/dpaa2/dpaa2_recycle.c
index fbfdf360d1..cda08a6cee 100644
--- a/drivers/net/dpaa2/dpaa2_recycle.c
+++ b/drivers/net/dpaa2/dpaa2_recycle.c
@@ -10,7 +10,6 @@
#include <rte_mbuf.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include <rte_cycles.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 23f7c4132d..b83015a94c 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -11,7 +11,6 @@
#include <rte_mbuf.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include <dev_driver.h>
#include <rte_hexdump.h>
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 36a14526a5..e2976282bf 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -5,7 +5,6 @@
#include <rte_mbuf.h>
#include <rte_ethdev.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include <dev_driver.h>
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 8fe5bfa013..1749b1be22 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -324,7 +324,7 @@ dpaa2_shaper_profile_add(struct rte_eth_dev *dev, uint32_t shaper_profile_id,
NULL, NULL);
profile->id = shaper_profile_id;
- rte_memcpy(&profile->params, params, sizeof(profile->params));
+ memcpy(&profile->params, params, sizeof(profile->params));
LIST_INSERT_HEAD(&priv->shaper_profiles, profile, next);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index df5fbb7823..3a069ce33e 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -20,7 +20,6 @@
#include <rte_pci.h>
#include <bus_pci_driver.h>
#include <rte_memory.h>
-#include <rte_memcpy.h>
#include <rte_memzone.h>
#include <rte_launch.h>
#include <rte_eal.h>
diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c
index ea9b290e1c..8bca484960 100644
--- a/drivers/net/e1000/igb_flow.c
+++ b/drivers/net/e1000/igb_flow.c
@@ -1484,9 +1484,9 @@ igb_flow_create(struct rte_eth_dev *dev,
goto out;
}
- rte_memcpy(&ntuple_filter_ptr->filter_info,
- &ntuple_filter,
- sizeof(struct rte_eth_ntuple_filter));
+ memcpy(&ntuple_filter_ptr->filter_info,
+ &ntuple_filter,
+ sizeof(struct rte_eth_ntuple_filter));
TAILQ_INSERT_TAIL(&igb_filter_ntuple_list,
ntuple_filter_ptr, entries);
flow->rule = ntuple_filter_ptr;
@@ -1511,9 +1511,9 @@ igb_flow_create(struct rte_eth_dev *dev,
goto out;
}
- rte_memcpy(&ethertype_filter_ptr->filter_info,
- &ethertype_filter,
- sizeof(struct rte_eth_ethertype_filter));
+ memcpy(&ethertype_filter_ptr->filter_info,
+ &ethertype_filter,
+ sizeof(struct rte_eth_ethertype_filter));
TAILQ_INSERT_TAIL(&igb_filter_ethertype_list,
ethertype_filter_ptr, entries);
flow->rule = ethertype_filter_ptr;
@@ -1536,9 +1536,8 @@ igb_flow_create(struct rte_eth_dev *dev,
goto out;
}
- rte_memcpy(&syn_filter_ptr->filter_info,
- &syn_filter,
- sizeof(struct rte_eth_syn_filter));
+ memcpy(&syn_filter_ptr->filter_info, &syn_filter,
+ sizeof(struct rte_eth_syn_filter));
TAILQ_INSERT_TAIL(&igb_filter_syn_list,
syn_filter_ptr,
entries);
@@ -1562,9 +1561,8 @@ igb_flow_create(struct rte_eth_dev *dev,
goto out;
}
- rte_memcpy(&flex_filter_ptr->filter_info,
- &flex_filter,
- sizeof(struct igb_flex_filter));
+ memcpy(&flex_filter_ptr->filter_info, &flex_filter,
+ sizeof(struct igb_flex_filter));
TAILQ_INSERT_TAIL(&igb_filter_flex_list,
flex_filter_ptr, entries);
flow->rule = flex_filter_ptr;
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index c7588ea57e..efb806af56 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -17,7 +17,6 @@
#include <rte_eal.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
-#include <rte_memcpy.h>
#include <rte_malloc.h>
#include <rte_random.h>
@@ -290,7 +289,7 @@ igb_vf_reset(struct rte_eth_dev *dev, uint16_t vf, uint32_t *msgbuf)
/* reply to reset with ack and vf mac address */
msgbuf[0] = E1000_VF_RESET | E1000_VT_MSGTYPE_ACK;
- rte_memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
+ memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
e1000_write_mbx(hw, msgbuf, 3, vf);
return 0;
@@ -308,8 +307,8 @@ igb_vf_set_mac_addr(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (rte_is_unicast_ether_addr((struct rte_ether_addr *)new_mac)) {
if (!rte_is_zero_ether_addr((struct rte_ether_addr *)new_mac))
- rte_memcpy(vfinfo[vf].vf_mac_addresses, new_mac,
- sizeof(vfinfo[vf].vf_mac_addresses));
+ memcpy(vfinfo[vf].vf_mac_addresses, new_mac,
+ sizeof(vfinfo[vf].vf_mac_addresses));
hw->mac.ops.rar_set(hw, new_mac, rar_entry);
rah = E1000_READ_REG(hw, E1000_RAH(rar_entry));
rah |= (0x1 << (E1000_RAH_POOLSEL_SHIFT + vf));
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 5cafd6f1ce..4a41a3cea5 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -19,7 +19,6 @@
#include <rte_debug.h>
#include <rte_pci.h>
#include <rte_memory.h>
-#include <rte_memcpy.h>
#include <rte_memzone.h>
#include <rte_launch.h>
#include <rte_eal.h>
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2f681315b6..9e3e026856 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1221,8 +1221,8 @@ static int enic_set_rsskey(struct enic *enic, uint8_t *user_key)
/* Save for later queries */
if (!err) {
- rte_memcpy(&enic->rss_key, rss_key_buf_va,
- sizeof(union vnic_rss_key));
+ memcpy(&enic->rss_key, rss_key_buf_va,
+ sizeof(union vnic_rss_key));
}
enic_free_consistent(enic, sizeof(union vnic_rss_key),
rss_key_buf_va, rss_key_buf_pa);
@@ -1243,7 +1243,7 @@ int enic_set_rss_reta(struct enic *enic, union vnic_rss_cpu *rss_cpu)
if (!rss_cpu_buf_va)
return -ENOMEM;
- rte_memcpy(rss_cpu_buf_va, rss_cpu, sizeof(union vnic_rss_cpu));
+ memcpy(rss_cpu_buf_va, rss_cpu, sizeof(union vnic_rss_cpu));
err = enic_set_rss_cpu(enic,
rss_cpu_buf_pa,
@@ -1254,7 +1254,7 @@ int enic_set_rss_reta(struct enic *enic, union vnic_rss_cpu *rss_cpu)
/* Save for later queries */
if (!err)
- rte_memcpy(&enic->rss_cpu, rss_cpu, sizeof(union vnic_rss_cpu));
+ memcpy(&enic->rss_cpu, rss_cpu, sizeof(union vnic_rss_cpu));
return err;
}
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 9c013e0419..47d453ef80 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -902,16 +902,16 @@ fs_stats_get(struct rte_eth_dev *dev,
ret = fs_lock(dev, 0);
if (ret != 0)
return ret;
- rte_memcpy(stats, &PRIV(dev)->stats_accumulator, sizeof(*stats));
+ memcpy(stats, &PRIV(dev)->stats_accumulator, sizeof(*stats));
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
struct rte_eth_stats *snapshot = &sdev->stats_snapshot.stats;
uint64_t *timestamp = &sdev->stats_snapshot.timestamp;
- rte_memcpy(&backup, snapshot, sizeof(backup));
+ memcpy(&backup, snapshot, sizeof(backup));
ret = rte_eth_stats_get(PORT_ID(sdev), snapshot);
if (ret) {
if (!fs_err(sdev, ret)) {
- rte_memcpy(snapshot, &backup, sizeof(backup));
+ memcpy(snapshot, &backup, sizeof(backup));
goto inc;
}
ERROR("Operation rte_eth_stats_get failed for sub_device %d with error %d",
diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index 629d15cfbe..24b00a1fbf 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -785,7 +785,7 @@ int gve_adminq_describe_device(struct gve_priv *priv)
}
priv->max_mtu = mtu;
priv->num_event_counters = be16_to_cpu(descriptor->counters);
- rte_memcpy(priv->dev_addr.addr_bytes, descriptor->mac, ETH_ALEN);
+ memcpy(priv->dev_addr.addr_bytes, descriptor->mac, ETH_ALEN);
PMD_DRV_LOG(INFO, "MAC addr: " RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(&priv->dev_addr));
priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index d4978e0649..65ae92b1ff 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -7,7 +7,6 @@
#include <ethdev_pci.h>
#include <rte_mbuf.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_mempool.h>
#include <rte_errno.h>
#include <rte_ether.h>
diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c
index d1a564a163..cb61c989fd 100644
--- a/drivers/net/hinic/hinic_pmd_flow.c
+++ b/drivers/net/hinic/hinic_pmd_flow.c
@@ -983,8 +983,7 @@ static int hinic_normal_item_check_ip(const struct rte_flow_item **in_out_item,
}
ipv6_spec = (const struct rte_flow_item_ipv6 *)item->spec;
- rte_memcpy(rule->hinic_fdir.dst_ipv6,
- ipv6_spec->hdr.dst_addr, 16);
+ memcpy(rule->hinic_fdir.dst_ipv6, ipv6_spec->hdr.dst_addr, 16);
/*
* Check if the next not void item is TCP or UDP or ICMP.
@@ -2193,8 +2192,8 @@ static int hinic_add_del_ntuple_filter(struct rte_eth_dev *dev,
sizeof(struct hinic_5tuple_filter), 0);
if (filter == NULL)
return -ENOMEM;
- rte_memcpy(&filter->filter_info, &filter_5tuple,
- sizeof(struct hinic_5tuple_filter_info));
+ memcpy(&filter->filter_info, &filter_5tuple,
+ sizeof(struct hinic_5tuple_filter_info));
filter->queue = ntuple_filter->queue;
filter_info->qid = ntuple_filter->queue;
@@ -2912,8 +2911,7 @@ static int hinic_add_del_tcam_fdir_filter(struct rte_eth_dev *dev,
sizeof(struct hinic_tcam_filter), 0);
if (tcam_filter == NULL)
return -ENOMEM;
- (void)rte_memcpy(&tcam_filter->tcam_key,
- &tcam_key, sizeof(struct tag_tcam_key));
+ memcpy(&tcam_filter->tcam_key, &tcam_key, sizeof(struct tag_tcam_key));
tcam_filter->queue = fdir_tcam_rule.data.qid;
ret = hinic_add_tcam_filter(dev, tcam_filter, &fdir_tcam_rule);
@@ -2990,9 +2988,9 @@ static struct rte_flow *hinic_flow_create(struct rte_eth_dev *dev,
&ntuple_filter, FALSE);
goto out;
}
- rte_memcpy(&ntuple_filter_ptr->filter_info,
- &ntuple_filter,
- sizeof(struct rte_eth_ntuple_filter));
+ memcpy(&ntuple_filter_ptr->filter_info,
+ &ntuple_filter,
+ sizeof(struct rte_eth_ntuple_filter));
TAILQ_INSERT_TAIL(&nic_dev->filter_ntuple_list,
ntuple_filter_ptr, entries);
flow->rule = ntuple_filter_ptr;
@@ -3022,9 +3020,9 @@ static struct rte_flow *hinic_flow_create(struct rte_eth_dev *dev,
&ethertype_filter, FALSE);
goto out;
}
- rte_memcpy(&ethertype_filter_ptr->filter_info,
- &ethertype_filter,
- sizeof(struct rte_eth_ethertype_filter));
+ memcpy(&ethertype_filter_ptr->filter_info,
+ &ethertype_filter,
+ sizeof(struct rte_eth_ethertype_filter));
TAILQ_INSERT_TAIL(&nic_dev->filter_ethertype_list,
ethertype_filter_ptr, entries);
flow->rule = ethertype_filter_ptr;
@@ -3065,8 +3063,8 @@ static struct rte_flow *hinic_flow_create(struct rte_eth_dev *dev,
goto out;
}
- rte_memcpy(&fdir_rule_ptr->filter_info, &fdir_rule,
- sizeof(struct hinic_fdir_rule));
+ memcpy(&fdir_rule_ptr->filter_info, &fdir_rule,
+ sizeof(struct hinic_fdir_rule));
TAILQ_INSERT_TAIL(&nic_dev->filter_fdir_rule_list,
fdir_rule_ptr, entries);
flow->rule = fdir_rule_ptr;
@@ -3109,8 +3107,8 @@ static int hinic_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
case RTE_ETH_FILTER_NTUPLE:
ntuple_filter_ptr = (struct hinic_ntuple_filter_ele *)
pmd_flow->rule;
- rte_memcpy(&ntuple_filter, &ntuple_filter_ptr->filter_info,
- sizeof(struct rte_eth_ntuple_filter));
+ memcpy(&ntuple_filter, &ntuple_filter_ptr->filter_info,
+ sizeof(struct rte_eth_ntuple_filter));
ret = hinic_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
if (!ret) {
TAILQ_REMOVE(&nic_dev->filter_ntuple_list,
@@ -3121,9 +3119,8 @@ static int hinic_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
case RTE_ETH_FILTER_ETHERTYPE:
ethertype_filter_ptr = (struct hinic_ethertype_filter_ele *)
pmd_flow->rule;
- rte_memcpy(&ethertype_filter,
- &ethertype_filter_ptr->filter_info,
- sizeof(struct rte_eth_ethertype_filter));
+ memcpy(&ethertype_filter, &ethertype_filter_ptr->filter_info,
+ sizeof(struct rte_eth_ethertype_filter));
ret = hinic_add_del_ethertype_filter(dev,
&ethertype_filter, FALSE);
if (!ret) {
@@ -3134,9 +3131,8 @@ static int hinic_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
break;
case RTE_ETH_FILTER_FDIR:
fdir_rule_ptr = (struct hinic_fdir_rule_ele *)pmd_flow->rule;
- rte_memcpy(&fdir_rule,
- &fdir_rule_ptr->filter_info,
- sizeof(struct hinic_fdir_rule));
+ memcpy(&fdir_rule, &fdir_rule_ptr->filter_info,
+ sizeof(struct hinic_fdir_rule));
if (fdir_rule.mode == HINIC_FDIR_MODE_NORMAL) {
ret = hinic_add_del_fdir_filter(dev, &fdir_rule, FALSE);
} else if (fdir_rule.mode == HINIC_FDIR_MODE_TCAM) {
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index d100e58d10..332cbb847b 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -996,7 +996,7 @@ int hns3_fdir_filter_program(struct hns3_adapter *hns,
return -ENOMEM;
}
- rte_memcpy(&node->fdir_conf, rule, sizeof(struct hns3_fdir_rule));
+ memcpy(&node->fdir_conf, rule, sizeof(struct hns3_fdir_rule));
ret = hns3_insert_fdir_filter(hw, fdir_info, node);
if (ret < 0) {
rte_free(node);
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 7fbe65313c..96b91bed6b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -2416,8 +2416,8 @@ hns3_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
}
rss_conf = (struct rte_flow_action_rss *)data;
rss_rule = (struct hns3_rss_conf_ele *)flow->rule;
- rte_memcpy(rss_conf, &rss_rule->filter_info.conf,
- sizeof(struct rte_flow_action_rss));
+ memcpy(rss_conf, &rss_rule->filter_info.conf,
+ sizeof(struct rte_flow_action_rss));
break;
default:
return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 380ce1a720..bf128074b7 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -22,7 +22,6 @@
#include <ethdev_pci.h>
#include <rte_memzone.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_alarm.h>
#include <dev_driver.h>
#include <rte_tailq.h>
@@ -4448,7 +4447,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
return -EINVAL;
}
- rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
else
@@ -5333,7 +5332,7 @@ i40e_vsi_vlan_pvid_set(struct i40e_vsi *vsi,
vsi->info.valid_sections =
rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID);
memset(&ctxt, 0, sizeof(ctxt));
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.seid = vsi->seid;
hw = I40E_VSI_TO_HW(vsi);
@@ -5372,8 +5371,8 @@ i40e_vsi_update_tc_bandwidth(struct i40e_vsi *vsi, uint8_t enabled_tcmap)
return ret;
}
- rte_memcpy(vsi->info.qs_handle, tc_bw_data.qs_handles,
- sizeof(vsi->info.qs_handle));
+ memcpy(vsi->info.qs_handle, tc_bw_data.qs_handles,
+ sizeof(vsi->info.qs_handle));
return I40E_SUCCESS;
}
@@ -5630,8 +5629,7 @@ i40e_update_default_filter_setting(struct i40e_vsi *vsi)
if (vsi->type != I40E_VSI_MAIN)
return I40E_ERR_CONFIG;
memset(&def_filter, 0, sizeof(def_filter));
- rte_memcpy(def_filter.mac_addr, hw->mac.perm_addr,
- ETH_ADDR_LEN);
+ memcpy(def_filter.mac_addr, hw->mac.perm_addr, ETH_ADDR_LEN);
def_filter.vlan_tag = 0;
def_filter.flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH |
I40E_AQC_MACVLAN_DEL_IGNORE_VLAN;
@@ -5649,16 +5647,15 @@ i40e_update_default_filter_setting(struct i40e_vsi *vsi)
return I40E_ERR_NO_MEMORY;
}
mac = &f->mac_info.mac_addr;
- rte_memcpy(&mac->addr_bytes, hw->mac.perm_addr,
- ETH_ADDR_LEN);
+ memcpy(&mac->addr_bytes, hw->mac.perm_addr, ETH_ADDR_LEN);
f->mac_info.filter_type = I40E_MACVLAN_PERFECT_MATCH;
TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
vsi->mac_num++;
return ret;
}
- rte_memcpy(&filter.mac_addr,
- (struct rte_ether_addr *)(hw->mac.perm_addr), ETH_ADDR_LEN);
+ memcpy(&filter.mac_addr, (struct rte_ether_addr *)(hw->mac.perm_addr),
+ ETH_ADDR_LEN);
filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
return i40e_vsi_add_mac(vsi, &filter);
}
@@ -5977,8 +5974,8 @@ i40e_vsi_setup(struct i40e_pf *pf,
PMD_DRV_LOG(ERR, "Failed to get VSI params");
goto fail_msix_alloc;
}
- rte_memcpy(&vsi->info, &ctxt.info,
- sizeof(struct i40e_aqc_vsi_properties_data));
+ memcpy(&vsi->info, &ctxt.info,
+ sizeof(struct i40e_aqc_vsi_properties_data));
vsi->vsi_id = ctxt.vsi_number;
vsi->info.valid_sections = 0;
@@ -5995,8 +5992,8 @@ i40e_vsi_setup(struct i40e_pf *pf,
rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID);
vsi->info.port_vlan_flags = I40E_AQ_VSI_PVLAN_MODE_ALL |
I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH;
- rte_memcpy(&ctxt.info, &vsi->info,
- sizeof(struct i40e_aqc_vsi_properties_data));
+ memcpy(&ctxt.info, &vsi->info,
+ sizeof(struct i40e_aqc_vsi_properties_data));
ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info,
I40E_DEFAULT_TCMAP);
if (ret != I40E_SUCCESS) {
@@ -6016,16 +6013,15 @@ i40e_vsi_setup(struct i40e_pf *pf,
goto fail_msix_alloc;
}
- rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
- sizeof(vsi->info.tc_mapping));
- rte_memcpy(&vsi->info.queue_mapping,
- &ctxt.info.queue_mapping,
- sizeof(vsi->info.queue_mapping));
+ memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
+ sizeof(vsi->info.tc_mapping));
+ memcpy(&vsi->info.queue_mapping, &ctxt.info.queue_mapping,
+ sizeof(vsi->info.queue_mapping));
vsi->info.mapping_flags = ctxt.info.mapping_flags;
vsi->info.valid_sections = 0;
- rte_memcpy(pf->dev_addr.addr_bytes, hw->mac.perm_addr,
- ETH_ADDR_LEN);
+ memcpy(pf->dev_addr.addr_bytes, hw->mac.perm_addr,
+ ETH_ADDR_LEN);
/**
* Updating default filter settings are necessary to prevent
@@ -6168,7 +6164,7 @@ i40e_vsi_setup(struct i40e_pf *pf,
if (vsi->type != I40E_VSI_FDIR) {
/* MAC/VLAN configuration for non-FDIR VSI*/
- rte_memcpy(&filter.mac_addr, &broadcast, RTE_ETHER_ADDR_LEN);
+ memcpy(&filter.mac_addr, &broadcast, RTE_ETHER_ADDR_LEN);
filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
ret = i40e_vsi_add_mac(vsi, &filter);
@@ -6281,7 +6277,7 @@ i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on)
vsi->info.port_vlan_flags &= ~(I40E_AQ_VSI_PVLAN_EMOD_MASK);
vsi->info.port_vlan_flags |= vlan_flags;
ctxt.seid = vsi->seid;
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
if (ret)
PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping",
@@ -7148,8 +7144,8 @@ i40e_add_macvlan_filters(struct i40e_vsi *vsi,
memset(req_list, 0, ele_buff_size);
for (i = 0; i < actual_num; i++) {
- rte_memcpy(req_list[i].mac_addr,
- &filter[num + i].macaddr, ETH_ADDR_LEN);
+ memcpy(req_list[i].mac_addr, &filter[num + i].macaddr,
+ ETH_ADDR_LEN);
req_list[i].vlan_tag =
rte_cpu_to_le_16(filter[num + i].vlan_id);
@@ -7224,8 +7220,8 @@ i40e_remove_macvlan_filters(struct i40e_vsi *vsi,
memset(req_list, 0, ele_buff_size);
for (i = 0; i < actual_num; i++) {
- rte_memcpy(req_list[i].mac_addr,
- &filter[num + i].macaddr, ETH_ADDR_LEN);
+ memcpy(req_list[i].mac_addr, &filter[num + i].macaddr,
+ ETH_ADDR_LEN);
req_list[i].vlan_tag =
rte_cpu_to_le_16(filter[num + i].vlan_id);
@@ -7381,8 +7377,8 @@ i40e_find_all_vlan_for_mac(struct i40e_vsi *vsi,
"vlan number doesn't match");
return I40E_ERR_PARAM;
}
- rte_memcpy(&mv_f[i].macaddr,
- addr, ETH_ADDR_LEN);
+ memcpy(&mv_f[i].macaddr, addr,
+ ETH_ADDR_LEN);
mv_f[i].vlan_id =
j * I40E_UINT32_BIT_SIZE + k;
i++;
@@ -7410,8 +7406,7 @@ i40e_find_all_mac_for_vlan(struct i40e_vsi *vsi,
PMD_DRV_LOG(ERR, "buffer number not match");
return I40E_ERR_PARAM;
}
- rte_memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr,
- ETH_ADDR_LEN);
+ memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr, ETH_ADDR_LEN);
mv_f[i].vlan_id = vlan;
mv_f[i].filter_type = f->mac_info.filter_type;
i++;
@@ -7446,8 +7441,8 @@ i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi)
i = 0;
if (vsi->vlan_num == 0) {
TAILQ_FOREACH(f, &vsi->mac_list, next) {
- rte_memcpy(&mv_f[i].macaddr,
- &f->mac_info.mac_addr, ETH_ADDR_LEN);
+ memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr,
+ ETH_ADDR_LEN);
mv_f[i].filter_type = f->mac_info.filter_type;
mv_f[i].vlan_id = 0;
i++;
@@ -7616,8 +7611,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
for (i = 0; i < vlan_num; i++) {
mv_f[i].filter_type = mac_filter->filter_type;
- rte_memcpy(&mv_f[i].macaddr, &mac_filter->mac_addr,
- ETH_ADDR_LEN);
+ memcpy(&mv_f[i].macaddr, &mac_filter->mac_addr, ETH_ADDR_LEN);
}
if (mac_filter->filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -7639,8 +7633,7 @@ i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter)
ret = I40E_ERR_NO_MEMORY;
goto DONE;
}
- rte_memcpy(&f->mac_info.mac_addr, &mac_filter->mac_addr,
- ETH_ADDR_LEN);
+ memcpy(&f->mac_info.mac_addr, &mac_filter->mac_addr, ETH_ADDR_LEN);
f->mac_info.filter_type = mac_filter->filter_type;
TAILQ_INSERT_TAIL(&vsi->mac_list, f, next);
vsi->mac_num++;
@@ -7686,8 +7679,7 @@ i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct rte_ether_addr *addr)
for (i = 0; i < vlan_num; i++) {
mv_f[i].filter_type = filter_type;
- rte_memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr,
- ETH_ADDR_LEN);
+ memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr, ETH_ADDR_LEN);
}
if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
filter_type == I40E_MACVLAN_HASH_MATCH) {
@@ -7973,9 +7965,8 @@ i40e_tunnel_filter_convert(
tunnel_filter->input.flags = cld_filter->element.flags;
tunnel_filter->input.tenant_id = cld_filter->element.tenant_id;
tunnel_filter->queue = cld_filter->element.queue_number;
- rte_memcpy(tunnel_filter->input.general_fields,
- cld_filter->general_fields,
- sizeof(cld_filter->general_fields));
+ memcpy(tunnel_filter->input.general_fields,
+ cld_filter->general_fields, sizeof(cld_filter->general_fields));
return 0;
}
@@ -8522,9 +8513,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4;
ipv4_addr = rte_be_to_cpu_32(tunnel_filter->ip_addr.ipv4_addr);
ipv4_addr_le = rte_cpu_to_le_32(ipv4_addr);
- rte_memcpy(&pfilter->element.ipaddr.v4.data,
- &ipv4_addr_le,
- sizeof(pfilter->element.ipaddr.v4.data));
+ memcpy(&pfilter->element.ipaddr.v4.data, &ipv4_addr_le,
+ sizeof(pfilter->element.ipaddr.v4.data));
} else {
ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6;
for (i = 0; i < 4; i++) {
@@ -8532,9 +8522,8 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
rte_cpu_to_le_32(rte_be_to_cpu_32(
tunnel_filter->ip_addr.ipv6_addr[i]));
}
- rte_memcpy(&pfilter->element.ipaddr.v6.data,
- &convert_ipv6,
- sizeof(pfilter->element.ipaddr.v6.data));
+ memcpy(&pfilter->element.ipaddr.v6.data, &convert_ipv6,
+ sizeof(pfilter->element.ipaddr.v6.data));
}
/* check tunneled type */
@@ -8779,7 +8768,7 @@ i40e_dev_consistent_tunnel_filter_set(struct i40e_pf *pf,
return -ENOMEM;
}
- rte_memcpy(tunnel, &check_filter, sizeof(check_filter));
+ memcpy(tunnel, &check_filter, sizeof(check_filter));
ret = i40e_sw_tunnel_filter_insert(pf, tunnel);
if (ret < 0)
rte_free(tunnel);
@@ -9904,8 +9893,7 @@ static int
i40e_ethertype_filter_convert(const struct rte_eth_ethertype_filter *input,
struct i40e_ethertype_filter *filter)
{
- rte_memcpy(&filter->input.mac_addr, &input->mac_addr,
- RTE_ETHER_ADDR_LEN);
+ memcpy(&filter->input.mac_addr, &input->mac_addr, RTE_ETHER_ADDR_LEN);
filter->input.ether_type = input->ether_type;
filter->flags = input->flags;
filter->queue = input->queue;
@@ -10052,8 +10040,7 @@ i40e_ethertype_filter_set(struct i40e_pf *pf,
return -ENOMEM;
}
- rte_memcpy(ethertype_filter, &check_filter,
- sizeof(check_filter));
+ memcpy(ethertype_filter, &check_filter, sizeof(check_filter));
ret = i40e_sw_ethertype_filter_insert(pf, ethertype_filter);
if (ret < 0)
rte_free(ethertype_filter);
@@ -10933,11 +10920,10 @@ i40e_vsi_config_tc(struct i40e_vsi *vsi, uint8_t tc_map)
goto out;
}
/* update the local VSI info with updated queue map */
- rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
- sizeof(vsi->info.tc_mapping));
- rte_memcpy(&vsi->info.queue_mapping,
- &ctxt.info.queue_mapping,
- sizeof(vsi->info.queue_mapping));
+ memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
+ sizeof(vsi->info.tc_mapping));
+ memcpy(&vsi->info.queue_mapping, &ctxt.info.queue_mapping,
+ sizeof(vsi->info.queue_mapping));
vsi->info.mapping_flags = ctxt.info.mapping_flags;
vsi->info.valid_sections = 0;
@@ -11689,9 +11675,8 @@ i40e_tunnel_filter_restore(struct i40e_pf *pf)
cld_filter.element.flags = f->input.flags;
cld_filter.element.tenant_id = f->input.tenant_id;
cld_filter.element.queue_number = f->queue;
- rte_memcpy(cld_filter.general_fields,
- f->input.general_fields,
- sizeof(f->input.general_fields));
+ memcpy(cld_filter.general_fields, f->input.general_fields,
+ sizeof(f->input.general_fields));
if (((f->input.flags &
I40E_AQC_ADD_CLOUD_FILTER_0X11) ==
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index 47f79ecf11..554b763e9f 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -464,10 +464,10 @@ fill_ip6_head(const struct i40e_fdir_input *fdir_input, unsigned char *raw_pkt,
* need to be presented in a reversed order with respect
* to the expected received packets.
*/
- rte_memcpy(&ip6->src_addr, &fdir_input->flow.ipv6_flow.dst_ip,
- IPV6_ADDR_LEN);
- rte_memcpy(&ip6->dst_addr, &fdir_input->flow.ipv6_flow.src_ip,
- IPV6_ADDR_LEN);
+ memcpy(&ip6->src_addr, &fdir_input->flow.ipv6_flow.dst_ip,
+ IPV6_ADDR_LEN);
+ memcpy(&ip6->dst_addr, &fdir_input->flow.ipv6_flow.src_ip,
+ IPV6_ADDR_LEN);
len += sizeof(struct rte_ipv6_hdr);
return len;
@@ -528,18 +528,16 @@ i40e_flow_fdir_fill_eth_ip_head(struct i40e_pf *pf,
[I40E_FILTER_PCTYPE_NONF_IPV6_OTHER] = IPPROTO_NONE,
};
- rte_memcpy(raw_pkt, &fdir_input->flow.l2_flow.dst,
- sizeof(struct rte_ether_addr));
- rte_memcpy(raw_pkt + sizeof(struct rte_ether_addr),
- &fdir_input->flow.l2_flow.src,
- sizeof(struct rte_ether_addr));
+ memcpy(raw_pkt, &fdir_input->flow.l2_flow.dst,
+ sizeof(struct rte_ether_addr));
+ memcpy(raw_pkt + sizeof(struct rte_ether_addr),
+ &fdir_input->flow.l2_flow.src, sizeof(struct rte_ether_addr));
raw_pkt += 2 * sizeof(struct rte_ether_addr);
if (vlan && fdir_input->flow_ext.vlan_tci) {
- rte_memcpy(raw_pkt, vlan_frame, sizeof(vlan_frame));
- rte_memcpy(raw_pkt + sizeof(uint16_t),
- &fdir_input->flow_ext.vlan_tci,
- sizeof(uint16_t));
+ memcpy(raw_pkt, vlan_frame, sizeof(vlan_frame));
+ memcpy(raw_pkt + sizeof(uint16_t),
+ &fdir_input->flow_ext.vlan_tci, sizeof(uint16_t));
raw_pkt += sizeof(vlan_frame);
len += sizeof(vlan_frame);
}
@@ -1003,7 +1001,7 @@ static int
i40e_fdir_filter_convert(const struct i40e_fdir_filter_conf *input,
struct i40e_fdir_filter *filter)
{
- rte_memcpy(&filter->fdir, input, sizeof(struct i40e_fdir_filter_conf));
+ memcpy(&filter->fdir, input, sizeof(struct i40e_fdir_filter_conf));
if (input->input.flow_ext.pkt_template) {
filter->fdir.input.flow.raw_flow.packet = NULL;
filter->fdir.input.flow.raw_flow.length =
@@ -1060,7 +1058,7 @@ i40e_sw_fdir_filter_insert(struct i40e_pf *pf, struct i40e_fdir_filter *filter)
return -1;
hash_filter = &fdir_info->fdir_filter_array[ret];
- rte_memcpy(hash_filter, filter, sizeof(*filter));
+ memcpy(hash_filter, filter, sizeof(*filter));
fdir_info->hash_map[ret] = hash_filter;
TAILQ_INSERT_TAIL(&fdir_info->fdir_list, hash_filter, rules);
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 92165c8422..9afb7a540d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1175,7 +1175,7 @@ i40e_pattern_skip_void_item(struct rte_flow_item *items,
pb = pe + 1;
}
/* Copy the END item. */
- rte_memcpy(items, pe, sizeof(struct rte_flow_item));
+ memcpy(items, pe, sizeof(struct rte_flow_item));
}
/* Check if the pattern matches a supported item type array */
@@ -1986,10 +1986,10 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
filter->input.flow_ext.oip_type =
I40E_FDIR_IPTYPE_IPV6;
- rte_memcpy(filter->input.flow.ipv6_flow.src_ip,
- ipv6_spec->hdr.src_addr, 16);
- rte_memcpy(filter->input.flow.ipv6_flow.dst_ip,
- ipv6_spec->hdr.dst_addr, 16);
+ memcpy(filter->input.flow.ipv6_flow.src_ip,
+ ipv6_spec->hdr.src_addr, 16);
+ memcpy(filter->input.flow.ipv6_flow.dst_ip,
+ ipv6_spec->hdr.dst_addr, 16);
/* Check if it is fragment. */
if (ipv6_spec->hdr.proto ==
@@ -2926,14 +2926,14 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
}
if (!vxlan_flag) {
- rte_memcpy(&filter->outer_mac,
- &eth_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
+ memcpy(&filter->outer_mac,
+ &eth_spec->hdr.dst_addr,
+ RTE_ETHER_ADDR_LEN);
filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
- rte_memcpy(&filter->inner_mac,
- &eth_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
+ memcpy(&filter->inner_mac,
+ &eth_spec->hdr.dst_addr,
+ RTE_ETHER_ADDR_LEN);
filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
@@ -3026,8 +3026,8 @@ i40e_flow_parse_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
return -rte_errno;
}
- rte_memcpy(((uint8_t *)&tenant_id_be + 1),
- vxlan_spec->hdr.vni, 3);
+ memcpy(((uint8_t *)&tenant_id_be + 1),
+ vxlan_spec->hdr.vni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
@@ -3156,14 +3156,14 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
}
if (!nvgre_flag) {
- rte_memcpy(&filter->outer_mac,
- &eth_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
+ memcpy(&filter->outer_mac,
+ &eth_spec->hdr.dst_addr,
+ RTE_ETHER_ADDR_LEN);
filter_type |= RTE_ETH_TUNNEL_FILTER_OMAC;
} else {
- rte_memcpy(&filter->inner_mac,
- &eth_spec->hdr.dst_addr,
- RTE_ETHER_ADDR_LEN);
+ memcpy(&filter->inner_mac,
+ &eth_spec->hdr.dst_addr,
+ RTE_ETHER_ADDR_LEN);
filter_type |= RTE_ETH_TUNNEL_FILTER_IMAC;
}
}
@@ -3278,8 +3278,8 @@ i40e_flow_parse_nvgre_pattern(__rte_unused struct rte_eth_dev *dev,
"Invalid NVGRE item");
return -rte_errno;
}
- rte_memcpy(((uint8_t *)&tenant_id_be + 1),
- nvgre_spec->tni, 3);
+ memcpy(((uint8_t *)&tenant_id_be + 1),
+ nvgre_spec->tni, 3);
filter->tenant_id =
rte_be_to_cpu_32(tenant_id_be);
filter_type |= RTE_ETH_TUNNEL_FILTER_TENID;
@@ -3447,8 +3447,8 @@ i40e_flow_parse_mpls_pattern(__rte_unused struct rte_eth_dev *dev,
"Invalid MPLS label mask");
return -rte_errno;
}
- rte_memcpy(((uint8_t *)&label_be + 1),
- mpls_spec->label_tc_s, 3);
+ memcpy(((uint8_t *)&label_be + 1),
+ mpls_spec->label_tc_s, 3);
filter->tenant_id = rte_be_to_cpu_32(label_be) >> 4;
break;
default:
@@ -4051,9 +4051,8 @@ i40e_flow_destroy_tunnel_filter(struct i40e_pf *pf,
cld_filter.element.flags = filter->input.flags;
cld_filter.element.tenant_id = filter->input.tenant_id;
cld_filter.element.queue_number = filter->queue;
- rte_memcpy(cld_filter.general_fields,
- filter->input.general_fields,
- sizeof(cld_filter.general_fields));
+ memcpy(cld_filter.general_fields, filter->input.general_fields,
+ sizeof(cld_filter.general_fields));
if (!filter->is_to_vf)
vsi = pf->main_vsi;
@@ -4271,9 +4270,8 @@ i40e_flow_query(struct rte_eth_dev *dev __rte_unused,
"action not supported");
return -rte_errno;
}
- rte_memcpy(rss_conf,
- &rss_rule->rss_filter_info.conf,
- sizeof(struct rte_flow_action_rss));
+ memcpy(rss_conf, &rss_rule->rss_filter_info.conf,
+ sizeof(struct rte_flow_action_rss));
break;
default:
return rte_flow_error_set(error, ENOTSUP,
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 15d9ff868f..f8073ef9cb 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -16,7 +16,6 @@
#include <rte_ether.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include "i40e_logs.h"
#include "base/i40e_prototype.h"
@@ -869,7 +868,7 @@ i40e_pf_host_process_cmd_add_ether_address(struct i40e_pf_vf *vf,
for (i = 0; i < addr_list->num_elements; i++) {
mac = (struct rte_ether_addr *)(addr_list->list[i].addr);
- rte_memcpy(&filter.mac_addr, mac, RTE_ETHER_ADDR_LEN);
+ memcpy(&filter.mac_addr, mac, RTE_ETHER_ADDR_LEN);
filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
if (rte_is_zero_ether_addr(mac) ||
i40e_vsi_add_mac(vf->vsi, &filter)) {
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index cab296e1a4..3f77a7b3a0 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -279,8 +279,8 @@ i40e_shaper_profile_add(struct rte_eth_dev *dev,
if (!shaper_profile)
return -ENOMEM;
shaper_profile->shaper_profile_id = shaper_profile_id;
- rte_memcpy(&shaper_profile->profile, profile,
- sizeof(struct rte_tm_shaper_params));
+ memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
shaper_profile, node);
@@ -526,8 +526,8 @@ i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = NULL;
tm_node->shaper_profile = shaper_profile;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
/* increase the reference counter of the shaper profile */
@@ -600,8 +600,7 @@ i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
if (parent_node_type == I40E_TM_NODE_TYPE_PORT) {
TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
tm_node, node);
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 9d39984ea1..03d0b61902 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -90,7 +90,7 @@ rte_pmd_i40e_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on)
vsi->info.sec_flags &= ~I40E_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK;
memset(&ctxt, 0, sizeof(ctxt));
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.seid = vsi->seid;
hw = I40E_VSI_TO_HW(vsi);
@@ -192,7 +192,7 @@ rte_pmd_i40e_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on)
vsi->info.sec_flags &= ~I40E_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK;
memset(&ctxt, 0, sizeof(ctxt));
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.seid = vsi->seid;
hw = I40E_VSI_TO_HW(vsi);
@@ -237,9 +237,8 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
for (i = 0; i < vlan_num; i++) {
mv_f[i].filter_type = filter_type;
- rte_memcpy(&mv_f[i].macaddr,
- &f->mac_info.mac_addr,
- ETH_ADDR_LEN);
+ memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr,
+ ETH_ADDR_LEN);
}
if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
filter_type == I40E_MACVLAN_HASH_MATCH) {
@@ -298,9 +297,8 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
for (i = 0; i < vlan_num; i++) {
mv_f[i].filter_type = f->mac_info.filter_type;
- rte_memcpy(&mv_f[i].macaddr,
- &f->mac_info.mac_addr,
- ETH_ADDR_LEN);
+ memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr,
+ ETH_ADDR_LEN);
}
if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -380,7 +378,7 @@ i40e_vsi_set_tx_loopback(struct i40e_vsi *vsi, uint8_t on)
vsi->info.switch_id &= ~I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB;
memset(&ctxt, 0, sizeof(ctxt));
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.seid = vsi->seid;
ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
@@ -705,7 +703,7 @@ int rte_pmd_i40e_set_vf_vlan_insert(uint16_t port, uint16_t vf_id,
vsi->info.port_vlan_flags &= ~I40E_AQ_VSI_PVLAN_INSERT_PVID;
memset(&ctxt, 0, sizeof(ctxt));
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.seid = vsi->seid;
hw = I40E_VSI_TO_HW(vsi);
@@ -767,7 +765,7 @@ int rte_pmd_i40e_set_vf_broadcast(uint16_t port, uint16_t vf_id,
}
if (on) {
- rte_memcpy(&filter.mac_addr, &broadcast, RTE_ETHER_ADDR_LEN);
+ memcpy(&filter.mac_addr, &broadcast, RTE_ETHER_ADDR_LEN);
filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
ret = i40e_vsi_add_mac(vsi, &filter);
} else {
@@ -839,7 +837,7 @@ int rte_pmd_i40e_set_vf_vlan_tag(uint16_t port, uint16_t vf_id, uint8_t on)
}
memset(&ctxt, 0, sizeof(ctxt));
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.seid = vsi->seid;
hw = I40E_VSI_TO_HW(vsi);
@@ -2586,11 +2584,10 @@ i40e_vsi_update_queue_region_mapping(struct i40e_hw *hw,
return ret;
}
/* update the local VSI info with updated queue map */
- rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
- sizeof(vsi->info.tc_mapping));
- rte_memcpy(&vsi->info.queue_mapping,
- &ctxt.info.queue_mapping,
- sizeof(vsi->info.queue_mapping));
+ memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping,
+ sizeof(vsi->info.tc_mapping));
+ memcpy(&vsi->info.queue_mapping, &ctxt.info.queue_mapping,
+ sizeof(vsi->info.queue_mapping));
vsi->info.mapping_flags = ctxt.info.mapping_flags;
vsi->info.valid_sections = 0;
@@ -2961,8 +2958,7 @@ i40e_queue_region_get_all_info(struct i40e_pf *pf,
{
struct i40e_queue_regions *info = &pf->queue_region;
- rte_memcpy(regions_ptr, info,
- sizeof(struct i40e_queue_regions));
+ memcpy(regions_ptr, info, sizeof(struct i40e_queue_regions));
return 0;
}
diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
index 811a10287b..35257c43f1 100644
--- a/drivers/net/iavf/iavf_fdir.c
+++ b/drivers/net/iavf/iavf_fdir.c
@@ -374,7 +374,7 @@ iavf_fdir_create(struct iavf_adapter *ad,
if (filter->mark_flag == 1)
iavf_fdir_rx_proc_enable(ad, 1);
- rte_memcpy(rule, filter, sizeof(*rule));
+ memcpy(rule, filter, sizeof(*rule));
flow->rule = rule;
return 0;
@@ -672,15 +672,13 @@ iavf_fdir_refine_input_set(const uint64_t input_set,
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
memset(&ipv4_spec, 0, sizeof(ipv4_spec));
ipv4_spec.hdr.next_proto_id = proto_id;
- rte_memcpy(hdr->buffer, &ipv4_spec.hdr,
- sizeof(ipv4_spec.hdr));
+ memcpy(hdr->buffer, &ipv4_spec.hdr, sizeof(ipv4_spec.hdr));
return true;
case VIRTCHNL_PROTO_HDR_IPV6:
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
memset(&ipv6_spec, 0, sizeof(ipv6_spec));
ipv6_spec.hdr.proto = proto_id;
- rte_memcpy(hdr->buffer, &ipv6_spec.hdr,
- sizeof(ipv6_spec.hdr));
+ memcpy(hdr->buffer, &ipv6_spec.hdr, sizeof(ipv6_spec.hdr));
return true;
default:
return false;
@@ -885,8 +883,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
ETHERTYPE);
}
- rte_memcpy(hdr1->buffer, eth_spec,
- sizeof(struct rte_ether_hdr));
+ memcpy(hdr1->buffer, eth_spec,
+ sizeof(struct rte_ether_hdr));
}
hdrs->count = ++layer;
@@ -976,8 +974,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
input_set |= IAVF_PROT_IPV4_INNER;
}
- rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
- sizeof(ipv4_spec->hdr));
+ memcpy(hdr->buffer, &ipv4_spec->hdr,
+ sizeof(ipv4_spec->hdr));
hdrs->count = ++layer;
@@ -1066,8 +1064,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
input_set |= IAVF_PROT_IPV6_INNER;
}
- rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
- sizeof(ipv6_spec->hdr));
+ memcpy(hdr->buffer, &ipv6_spec->hdr,
+ sizeof(ipv6_spec->hdr));
hdrs->count = ++layer;
break;
@@ -1101,8 +1099,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH,
ETHERTYPE);
- rte_memcpy(hdr->buffer, &ipv6_frag_spec->hdr,
- sizeof(ipv6_frag_spec->hdr));
+ memcpy(hdr->buffer, &ipv6_frag_spec->hdr,
+ sizeof(ipv6_frag_spec->hdr));
} else if (ipv6_frag_mask->hdr.id == UINT32_MAX) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1153,13 +1151,11 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
}
if (l3 == RTE_FLOW_ITEM_TYPE_IPV4)
- rte_memcpy(hdr->buffer,
- &udp_spec->hdr,
- sizeof(udp_spec->hdr));
+ memcpy(hdr->buffer, &udp_spec->hdr,
+ sizeof(udp_spec->hdr));
else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6)
- rte_memcpy(hdr->buffer,
- &udp_spec->hdr,
- sizeof(udp_spec->hdr));
+ memcpy(hdr->buffer, &udp_spec->hdr,
+ sizeof(udp_spec->hdr));
}
hdrs->count = ++layer;
@@ -1210,13 +1206,11 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
}
if (l3 == RTE_FLOW_ITEM_TYPE_IPV4)
- rte_memcpy(hdr->buffer,
- &tcp_spec->hdr,
- sizeof(tcp_spec->hdr));
+ memcpy(hdr->buffer, &tcp_spec->hdr,
+ sizeof(tcp_spec->hdr));
else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6)
- rte_memcpy(hdr->buffer,
- &tcp_spec->hdr,
- sizeof(tcp_spec->hdr));
+ memcpy(hdr->buffer, &tcp_spec->hdr,
+ sizeof(tcp_spec->hdr));
}
hdrs->count = ++layer;
@@ -1256,13 +1250,11 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
}
if (l3 == RTE_FLOW_ITEM_TYPE_IPV4)
- rte_memcpy(hdr->buffer,
- &sctp_spec->hdr,
- sizeof(sctp_spec->hdr));
+ memcpy(hdr->buffer, &sctp_spec->hdr,
+ sizeof(sctp_spec->hdr));
else if (l3 == RTE_FLOW_ITEM_TYPE_IPV6)
- rte_memcpy(hdr->buffer,
- &sctp_spec->hdr,
- sizeof(sctp_spec->hdr));
+ memcpy(hdr->buffer, &sctp_spec->hdr,
+ sizeof(sctp_spec->hdr));
}
hdrs->count = ++layer;
@@ -1291,8 +1283,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, GTPU_IP, TEID);
}
- rte_memcpy(hdr->buffer,
- gtp_spec, sizeof(*gtp_spec));
+ memcpy(hdr->buffer, gtp_spec,
+ sizeof(*gtp_spec));
}
tun_inner = 1;
@@ -1346,8 +1338,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
psc.qfi = gtp_psc_spec->hdr.qfi;
psc.type = gtp_psc_spec->hdr.type;
psc.next = 0;
- rte_memcpy(hdr->buffer, &psc,
- sizeof(struct iavf_gtp_psc_spec_hdr));
+ memcpy(hdr->buffer, &psc,
+ sizeof(struct iavf_gtp_psc_spec_hdr));
}
hdrs->count = ++layer;
@@ -1367,8 +1359,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, L2TPV3, SESS_ID);
}
- rte_memcpy(hdr->buffer, l2tpv3oip_spec,
- sizeof(*l2tpv3oip_spec));
+ memcpy(hdr->buffer, l2tpv3oip_spec,
+ sizeof(*l2tpv3oip_spec));
}
hdrs->count = ++layer;
@@ -1388,8 +1380,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, ESP, SPI);
}
- rte_memcpy(hdr->buffer, &esp_spec->hdr,
- sizeof(esp_spec->hdr));
+ memcpy(hdr->buffer, &esp_spec->hdr,
+ sizeof(esp_spec->hdr));
}
hdrs->count = ++layer;
@@ -1409,8 +1401,7 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, AH, SPI);
}
- rte_memcpy(hdr->buffer, ah_spec,
- sizeof(*ah_spec));
+ memcpy(hdr->buffer, ah_spec, sizeof(*ah_spec));
}
hdrs->count = ++layer;
@@ -1430,8 +1421,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, PFCP, S_FIELD);
}
- rte_memcpy(hdr->buffer, pfcp_spec,
- sizeof(*pfcp_spec));
+ memcpy(hdr->buffer, pfcp_spec,
+ sizeof(*pfcp_spec));
}
hdrs->count = ++layer;
@@ -1455,8 +1446,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
PC_RTC_ID);
}
- rte_memcpy(hdr->buffer, ecpri_spec,
- sizeof(*ecpri_spec));
+ memcpy(hdr->buffer, ecpri_spec,
+ sizeof(*ecpri_spec));
}
hdrs->count = ++layer;
@@ -1471,8 +1462,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, GRE);
if (gre_spec && gre_mask) {
- rte_memcpy(hdr->buffer, gre_spec,
- sizeof(*gre_spec));
+ memcpy(hdr->buffer, gre_spec,
+ sizeof(*gre_spec));
}
tun_inner = 1;
@@ -1520,8 +1511,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
SESS_ID);
}
- rte_memcpy(hdr->buffer, l2tpv2_spec,
- sizeof(*l2tpv2_spec));
+ memcpy(hdr->buffer, l2tpv2_spec,
+ sizeof(*l2tpv2_spec));
}
tun_inner = 1;
@@ -1538,8 +1529,8 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, PPP);
if (ppp_spec && ppp_mask) {
- rte_memcpy(hdr->buffer, ppp_spec,
- sizeof(*ppp_spec));
+ memcpy(hdr->buffer, ppp_spec,
+ sizeof(*ppp_spec));
}
hdrs->count = ++layer;
diff --git a/drivers/net/iavf/iavf_fsub.c b/drivers/net/iavf/iavf_fsub.c
index 74e1e7099b..d98cde0fa5 100644
--- a/drivers/net/iavf/iavf_fsub.c
+++ b/drivers/net/iavf/iavf_fsub.c
@@ -92,7 +92,7 @@ iavf_fsub_create(struct iavf_adapter *ad, struct rte_flow *flow,
goto free_entry;
}
- rte_memcpy(rule, filter, sizeof(*rule));
+ memcpy(rule, filter, sizeof(*rule));
flow->rule = rule;
rte_free(meta);
@@ -272,10 +272,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
input_set_byte += 2;
}
- rte_memcpy(hdr1->buffer_spec, eth_spec,
- sizeof(struct rte_ether_hdr));
- rte_memcpy(hdr1->buffer_mask, eth_mask,
- sizeof(struct rte_ether_hdr));
+ memcpy(hdr1->buffer_spec, eth_spec,
+ sizeof(struct rte_ether_hdr));
+ memcpy(hdr1->buffer_mask, eth_mask,
+ sizeof(struct rte_ether_hdr));
} else {
/* flow subscribe filter will add dst mac in kernel */
input_set_byte += 6;
@@ -325,10 +325,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
input_set_byte++;
}
- rte_memcpy(hdr->buffer_spec, &ipv4_spec->hdr,
- sizeof(ipv4_spec->hdr));
- rte_memcpy(hdr->buffer_mask, &ipv4_mask->hdr,
- sizeof(ipv4_spec->hdr));
+ memcpy(hdr->buffer_spec, &ipv4_spec->hdr,
+ sizeof(ipv4_spec->hdr));
+ memcpy(hdr->buffer_mask, &ipv4_mask->hdr,
+ sizeof(ipv4_spec->hdr));
}
hdrs->count = ++layer;
@@ -388,10 +388,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
input_set_byte += 4;
}
- rte_memcpy(hdr->buffer_spec, &ipv6_spec->hdr,
- sizeof(ipv6_spec->hdr));
- rte_memcpy(hdr->buffer_mask, &ipv6_mask->hdr,
- sizeof(ipv6_spec->hdr));
+ memcpy(hdr->buffer_spec, &ipv6_spec->hdr,
+ sizeof(ipv6_spec->hdr));
+ memcpy(hdr->buffer_mask, &ipv6_mask->hdr,
+ sizeof(ipv6_spec->hdr));
}
hdrs->count = ++layer;
@@ -425,10 +425,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
input_set_byte += 2;
}
- rte_memcpy(hdr->buffer_spec, &udp_spec->hdr,
- sizeof(udp_spec->hdr));
- rte_memcpy(hdr->buffer_mask, &udp_mask->hdr,
- sizeof(udp_mask->hdr));
+ memcpy(hdr->buffer_spec, &udp_spec->hdr,
+ sizeof(udp_spec->hdr));
+ memcpy(hdr->buffer_mask, &udp_mask->hdr,
+ sizeof(udp_mask->hdr));
}
hdrs->count = ++layer;
@@ -466,10 +466,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
input_set_byte += 2;
}
- rte_memcpy(hdr->buffer_spec, &tcp_spec->hdr,
- sizeof(tcp_spec->hdr));
- rte_memcpy(hdr->buffer_mask, &tcp_mask->hdr,
- sizeof(tcp_mask->hdr));
+ memcpy(hdr->buffer_spec, &tcp_spec->hdr,
+ sizeof(tcp_spec->hdr));
+ memcpy(hdr->buffer_mask, &tcp_mask->hdr,
+ sizeof(tcp_mask->hdr));
}
hdrs->count = ++layer;
@@ -498,10 +498,10 @@ iavf_fsub_parse_pattern(const struct rte_flow_item pattern[],
return -rte_errno;
}
- rte_memcpy(hdr->buffer_spec, &vlan_spec->hdr,
- sizeof(vlan_spec->hdr));
- rte_memcpy(hdr->buffer_mask, &vlan_mask->hdr,
- sizeof(vlan_mask->hdr));
+ memcpy(hdr->buffer_spec, &vlan_spec->hdr,
+ sizeof(vlan_spec->hdr));
+ memcpy(hdr->buffer_mask, &vlan_mask->hdr,
+ sizeof(vlan_mask->hdr));
}
hdrs->count = ++layer;
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 6f6e95fc45..0bcfb5bf24 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -2019,7 +2019,7 @@ iavf_pattern_skip_void_item(struct rte_flow_item *items,
pb = pe + 1;
}
/* Copy the END item. */
- rte_memcpy(items, pe, sizeof(struct rte_flow_item));
+ memcpy(items, pe, sizeof(struct rte_flow_item));
}
/* Check if the pattern matches a supported item type array */
diff --git a/drivers/net/iavf/iavf_tm.c b/drivers/net/iavf/iavf_tm.c
index 32bb3be45e..a6ad6bb0a2 100644
--- a/drivers/net/iavf/iavf_tm.c
+++ b/drivers/net/iavf/iavf_tm.c
@@ -342,8 +342,8 @@ iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
vf->tm_conf.root = tm_node;
return 0;
}
@@ -403,8 +403,7 @@ iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
if (parent_node_type == IAVF_TM_NODE_TYPE_PORT) {
TAILQ_INSERT_TAIL(&vf->tm_conf.tc_list,
tm_node, node);
@@ -543,8 +542,8 @@ iavf_shaper_profile_add(struct rte_eth_dev *dev,
if (!shaper_profile)
return -ENOMEM;
shaper_profile->shaper_profile_id = shaper_profile_id;
- rte_memcpy(&shaper_profile->profile, profile,
- sizeof(struct rte_tm_shaper_params));
+ memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
TAILQ_INSERT_TAIL(&vf->tm_conf.shaper_profile_list,
shaper_profile, node);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 1111d30f57..711186c1b5 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -920,7 +920,7 @@ iavf_get_vlan_offload_caps_v2(struct iavf_adapter *adapter)
return ret;
}
- rte_memcpy(&vf->vlan_v2_caps, vf->aq_resp, sizeof(vf->vlan_v2_caps));
+ memcpy(&vf->vlan_v2_caps, vf->aq_resp, sizeof(vf->vlan_v2_caps));
return 0;
}
@@ -1427,8 +1427,8 @@ iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add)
addr = &adapter->dev_data->mac_addrs[i];
if (rte_is_zero_ether_addr(addr))
continue;
- rte_memcpy(list->list[j].addr, addr->addr_bytes,
- sizeof(addr->addr_bytes));
+ memcpy(list->list[j].addr, addr->addr_bytes,
+ sizeof(addr->addr_bytes));
list->list[j].type = (j == 0 ?
VIRTCHNL_ETHER_ADDR_PRIMARY :
VIRTCHNL_ETHER_ADDR_EXTRA);
@@ -1547,8 +1547,7 @@ iavf_add_del_eth_addr(struct iavf_adapter *adapter, struct rte_ether_addr *addr,
list->vsi_id = vf->vsi_res->vsi_id;
list->num_elements = 1;
list->list[0].type = type;
- rte_memcpy(list->list[0].addr, addr->addr_bytes,
- sizeof(addr->addr_bytes));
+ memcpy(list->list[0].addr, addr->addr_bytes, sizeof(addr->addr_bytes));
args.ops = add ? VIRTCHNL_OP_ADD_ETH_ADDR : VIRTCHNL_OP_DEL_ETH_ADDR;
args.in_args = cmd_buffer;
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7f8f5163ac..42e5b30b2b 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -731,7 +731,7 @@ dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw)
return ret;
}
- rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
+ memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps));
return 0;
}
@@ -1407,8 +1407,7 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw,
return -ENOMEM;
}
- rte_memcpy(list->list[0].addr, addr->addr_bytes,
- sizeof(addr->addr_bytes));
+ memcpy(list->list[0].addr, addr->addr_bytes, sizeof(addr->addr_bytes));
PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(addr));
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 6e845f458a..0c53755c9d 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -407,7 +407,7 @@ ice_dcf_load_pkg(struct ice_adapter *adapter)
use_dsn = ice_dcf_execute_virtchnl_cmd(&dcf_adapter->real_hw, &vc_cmd) == 0;
if (use_dsn)
- rte_memcpy(&dsn, pkg_info.dsn, sizeof(dsn));
+ memcpy(&dsn, pkg_info.dsn, sizeof(dsn));
return ice_load_pkg(adapter, use_dsn, dsn);
}
diff --git a/drivers/net/ice/ice_dcf_sched.c b/drivers/net/ice/ice_dcf_sched.c
index b08bc5f1de..465ae75d5c 100644
--- a/drivers/net/ice/ice_dcf_sched.c
+++ b/drivers/net/ice/ice_dcf_sched.c
@@ -308,8 +308,8 @@ ice_dcf_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
hw->tm_conf.root = tm_node;
return 0;
@@ -373,8 +373,7 @@ ice_dcf_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->shaper_profile = shaper_profile;
tm_node->reference_count = 0;
tm_node->parent = parent_node;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_DCF_TM_NODE_TYPE_PORT) {
TAILQ_INSERT_TAIL(&hw->tm_conf.tc_list,
tm_node, node);
@@ -520,8 +519,8 @@ ice_dcf_shaper_profile_add(struct rte_eth_dev *dev,
if (!shaper_profile)
return -ENOMEM;
shaper_profile->shaper_profile_id = shaper_profile_id;
- rte_memcpy(&shaper_profile->profile, profile,
- sizeof(struct rte_tm_shaper_params));
+ memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
TAILQ_INSERT_TAIL(&hw->tm_conf.shaper_profile_list,
shaper_profile, node);
diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index 3be819d7f8..c22f71e48e 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -362,13 +362,13 @@ ice_dump_pkg(struct rte_eth_dev *dev, uint8_t **buff, uint32_t *size)
count = *size / ICE_PKG_BUF_SIZE;
for (i = 0; i < count; i++) {
next_buff = (uint8_t *)(*buff) + i * ICE_PKG_BUF_SIZE;
- rte_memcpy(pkg_buff.buf, next_buff, ICE_PKG_BUF_SIZE);
+ memcpy(pkg_buff.buf, next_buff, ICE_PKG_BUF_SIZE);
if (ice_aq_upload_section(hw,
(struct ice_buf_hdr *)&pkg_buff.buf[0],
ICE_PKG_BUF_SIZE,
NULL))
return -EINVAL;
- rte_memcpy(next_buff, pkg_buff.buf, ICE_PKG_BUF_SIZE);
+ memcpy(next_buff, pkg_buff.buf, ICE_PKG_BUF_SIZE);
}
cache_size = sizeof(struct ice_package_header) + *size;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 87385d2649..126afb763c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3453,11 +3453,11 @@ static int ice_init_rss(struct ice_pf *pf)
RTE_MIN(rss_conf->rss_key_len,
vsi->rss_key_size));
- rte_memcpy(key.standard_rss_key, vsi->rss_key,
- ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE);
- rte_memcpy(key.extended_hash_key,
- &vsi->rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE],
- ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE);
+ memcpy(key.standard_rss_key, vsi->rss_key,
+ ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE);
+ memcpy(key.extended_hash_key,
+ &vsi->rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE],
+ ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE);
ret = ice_aq_set_rss_key(hw, vsi->idx, &key);
if (ret)
goto out;
@@ -4549,7 +4549,7 @@ ice_vsi_config_vlan_filter(struct ice_vsi *vsi, bool on)
vsi->info.sw_flags2 &= ~sw_flags2;
vsi->info.sw_id = hw->port_info->sw_id;
- (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.info.valid_sections =
rte_cpu_to_le_16(ICE_AQ_VSI_PROP_SW_VALID |
ICE_AQ_VSI_PROP_SECURITY_VALID);
@@ -5367,7 +5367,7 @@ ice_vsi_vlan_pvid_set(struct ice_vsi *vsi, struct ice_vsi_vlan_pvid_info *info)
ICE_AQ_VSI_INNER_VLAN_EMODE_M);
vsi->info.inner_vlan_flags |= vlan_flags;
memset(&ctxt, 0, sizeof(ctxt));
- rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
+ memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
ctxt.info.valid_sections =
rte_cpu_to_le_16(ICE_AQ_VSI_PROP_VLAN_VALID);
ctxt.vsi_num = vsi->vsi_id;
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 0b7920ad44..de7b531aa0 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -1224,13 +1224,13 @@ ice_fdir_extract_fltr_key(struct ice_fdir_fltr_pattern *key,
memset(key, 0, sizeof(*key));
key->flow_type = input->flow_type;
- rte_memcpy(&key->ip, &input->ip, sizeof(key->ip));
- rte_memcpy(&key->mask, &input->mask, sizeof(key->mask));
- rte_memcpy(&key->ext_data, &input->ext_data, sizeof(key->ext_data));
- rte_memcpy(&key->ext_mask, &input->ext_mask, sizeof(key->ext_mask));
+ memcpy(&key->ip, &input->ip, sizeof(key->ip));
+ memcpy(&key->mask, &input->mask, sizeof(key->mask));
+ memcpy(&key->ext_data, &input->ext_data, sizeof(key->ext_data));
+ memcpy(&key->ext_mask, &input->ext_mask, sizeof(key->ext_mask));
- rte_memcpy(&key->gtpu_data, &input->gtpu_data, sizeof(key->gtpu_data));
- rte_memcpy(&key->gtpu_mask, &input->gtpu_mask, sizeof(key->gtpu_mask));
+ memcpy(&key->gtpu_data, &input->gtpu_data, sizeof(key->gtpu_data));
+ memcpy(&key->gtpu_mask, &input->gtpu_mask, sizeof(key->gtpu_mask));
key->tunnel_type = filter->tunnel_type;
}
@@ -1358,7 +1358,7 @@ ice_fdir_create_filter(struct ice_adapter *ad,
if (!entry)
goto error;
- rte_memcpy(entry, filter, sizeof(*filter));
+ memcpy(entry, filter, sizeof(*filter));
flow->rule = entry;
@@ -1419,7 +1419,7 @@ ice_fdir_create_filter(struct ice_adapter *ad,
if (filter->mark_flag == 1)
ice_fdir_rx_parsing_enable(ad, 1);
- rte_memcpy(entry, filter, sizeof(*entry));
+ memcpy(entry, filter, sizeof(*entry));
ret = ice_fdir_entry_insert(pf, entry, &key);
if (ret) {
rte_flow_error_set(error, -ret,
@@ -1720,8 +1720,8 @@ ice_fdir_parse_action(struct ice_adapter *ad,
act_count = actions->conf;
filter->input.cnt_ena = ICE_FXD_FLTR_QW0_STAT_ENA_PKTS;
- rte_memcpy(&filter->act_count, act_count,
- sizeof(filter->act_count));
+ memcpy(&filter->act_count, act_count,
+ sizeof(filter->act_count));
break;
default:
@@ -1978,12 +1978,13 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
p_ext_data = (tunnel_type && is_outer) ?
&filter->input.ext_data_outer :
&filter->input.ext_data;
- rte_memcpy(&p_ext_data->src_mac,
- &eth_spec->hdr.src_addr, RTE_ETHER_ADDR_LEN);
- rte_memcpy(&p_ext_data->dst_mac,
- &eth_spec->hdr.dst_addr, RTE_ETHER_ADDR_LEN);
- rte_memcpy(&p_ext_data->ether_type,
- &eth_spec->hdr.ether_type, sizeof(eth_spec->hdr.ether_type));
+ memcpy(&p_ext_data->src_mac, &eth_spec->hdr.src_addr,
+ RTE_ETHER_ADDR_LEN);
+ memcpy(&p_ext_data->dst_mac, &eth_spec->hdr.dst_addr,
+ RTE_ETHER_ADDR_LEN);
+ memcpy(&p_ext_data->ether_type,
+ &eth_spec->hdr.ether_type,
+ sizeof(eth_spec->hdr.ether_type));
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
flow_type = ICE_FLTR_PTYPE_NONF_IPV4_OTHER;
@@ -2108,8 +2109,8 @@ ice_fdir_parse_pattern(__rte_unused struct ice_adapter *ad,
if (ipv6_mask->hdr.hop_limits == UINT8_MAX)
*input_set |= ICE_INSET_IPV6_HOP_LIMIT;
- rte_memcpy(&p_v6->dst_ip, ipv6_spec->hdr.dst_addr, 16);
- rte_memcpy(&p_v6->src_ip, ipv6_spec->hdr.src_addr, 16);
+ memcpy(&p_v6->dst_ip, ipv6_spec->hdr.dst_addr, 16);
+ memcpy(&p_v6->src_ip, ipv6_spec->hdr.src_addr, 16);
vtc_flow_cpu = rte_be_to_cpu_32(ipv6_spec->hdr.vtc_flow);
p_v6->tc = (uint8_t)(vtc_flow_cpu >> ICE_FDIR_IPV6_TC_OFFSET);
p_v6->proto = ipv6_spec->hdr.proto;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 50d760004f..9e7de43575 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1969,7 +1969,7 @@ ice_pattern_skip_void_item(struct rte_flow_item *items,
pb = pe + 1;
}
/* Copy the END item. */
- rte_memcpy(items, pe, sizeof(struct rte_flow_item));
+ memcpy(items, pe, sizeof(struct rte_flow_item));
}
/* Check if the pattern matches a supported item type array */
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index f923641533..80b44713a9 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -715,7 +715,7 @@ ice_hash_parse_raw_pattern(struct ice_adapter *ad,
pkt_len, ICE_BLK_RSS, true, &prof))
return -rte_errno;
- rte_memcpy(&meta->raw.prof, &prof, sizeof(prof));
+ memcpy(&meta->raw.prof, &prof, sizeof(prof));
rte_free(pkt_buf);
rte_free(msk_buf);
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 17f0ca0ce0..7515d738cd 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -293,8 +293,8 @@ ice_shaper_profile_add(struct rte_eth_dev *dev,
if (!shaper_profile)
return -ENOMEM;
shaper_profile->shaper_profile_id = shaper_profile_id;
- rte_memcpy(&shaper_profile->profile, profile,
- sizeof(struct rte_tm_shaper_params));
+ memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
shaper_profile, node);
@@ -403,8 +403,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->shaper_profile = shaper_profile;
tm_node->children =
(void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
return 0;
}
@@ -480,8 +480,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
level_id);
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
tm_node->parent->reference_count++;
return 0;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 86151c9ec9..9cf33c4b70 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -1088,8 +1088,8 @@ idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
return;
}
- rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
- IDPF_DFLT_MBX_BUF_SIZE);
+ memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+ IDPF_DFLT_MBX_BUF_SIZE);
mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
@@ -1202,7 +1202,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
- rte_memcpy(&base->caps, &req_caps, sizeof(struct virtchnl2_get_capabilities));
+ memcpy(&base->caps, &req_caps,
+ sizeof(struct virtchnl2_get_capabilities));
ret = idpf_adapter_init(base);
if (ret != 0) {
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 64f2235580..da659e1653 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -76,7 +76,7 @@ idpf_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
else
ring_size = RTE_ALIGN(len * sizeof(struct idpf_base_tx_desc),
IDPF_DMA_MEM_ALIGN);
- rte_memcpy(ring_name, "idpf Tx ring", sizeof("idpf Tx ring"));
+ memcpy(ring_name, "idpf Tx ring", sizeof("idpf Tx ring"));
break;
case VIRTCHNL2_QUEUE_TYPE_RX:
if (splitq)
@@ -85,17 +85,19 @@ idpf_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
else
ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
IDPF_DMA_MEM_ALIGN);
- rte_memcpy(ring_name, "idpf Rx ring", sizeof("idpf Rx ring"));
+ memcpy(ring_name, "idpf Rx ring", sizeof("idpf Rx ring"));
break;
case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
IDPF_DMA_MEM_ALIGN);
- rte_memcpy(ring_name, "idpf Tx compl ring", sizeof("idpf Tx compl ring"));
+ memcpy(ring_name, "idpf Tx compl ring",
+ sizeof("idpf Tx compl ring"));
break;
case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
IDPF_DMA_MEM_ALIGN);
- rte_memcpy(ring_name, "idpf Rx buf ring", sizeof("idpf Rx buf ring"));
+ memcpy(ring_name, "idpf Rx buf ring",
+ sizeof("idpf Rx buf ring"));
break;
default:
PMD_INIT_LOG(ERR, "Invalid queue type");
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index d20a29b9a2..eeb0ec55d9 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -100,15 +100,14 @@ ipn3ke_pattern_vxlan(const struct rte_flow_item patterns[],
case RTE_FLOW_ITEM_TYPE_ETH:
eth = item->spec;
- rte_memcpy(&parser->key[0],
- eth->hdr.src_addr.addr_bytes,
- RTE_ETHER_ADDR_LEN);
+ memcpy(&parser->key[0], eth->hdr.src_addr.addr_bytes,
+ RTE_ETHER_ADDR_LEN);
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
vxlan = item->spec;
- rte_memcpy(&parser->key[6], vxlan->hdr.vni, 3);
+ memcpy(&parser->key[6], vxlan->hdr.vni, 3);
break;
default:
@@ -164,9 +163,8 @@ ipn3ke_pattern_mac(const struct rte_flow_item patterns[],
case RTE_FLOW_ITEM_TYPE_ETH:
eth = item->spec;
- rte_memcpy(parser->key,
- eth->hdr.src_addr.addr_bytes,
- RTE_ETHER_ADDR_LEN);
+ memcpy(parser->key, eth->hdr.src_addr.addr_bytes,
+ RTE_ETHER_ADDR_LEN);
break;
default:
@@ -369,13 +367,13 @@ ipn3ke_pattern_ip_tcp(const struct rte_flow_item patterns[],
case RTE_FLOW_ITEM_TYPE_IPV4:
ipv4 = item->spec;
- rte_memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4);
+ memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4);
break;
case RTE_FLOW_ITEM_TYPE_TCP:
tcp = item->spec;
- rte_memcpy(&parser->key[4], &tcp->hdr.src_port, 2);
+ memcpy(&parser->key[4], &tcp->hdr.src_port, 2);
break;
default:
@@ -434,13 +432,13 @@ ipn3ke_pattern_ip_udp(const struct rte_flow_item patterns[],
case RTE_FLOW_ITEM_TYPE_IPV4:
ipv4 = item->spec;
- rte_memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4);
+ memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4);
break;
case RTE_FLOW_ITEM_TYPE_UDP:
udp = item->spec;
- rte_memcpy(&parser->key[4], &udp->hdr.src_port, 2);
+ memcpy(&parser->key[4], &udp->hdr.src_port, 2);
break;
default:
@@ -502,19 +500,19 @@ ipn3ke_pattern_ip_nvgre(const struct rte_flow_item patterns[],
case RTE_FLOW_ITEM_TYPE_IPV4:
ipv4 = item->spec;
- rte_memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4);
+ memcpy(&parser->key[0], &ipv4->hdr.src_addr, 4);
break;
case RTE_FLOW_ITEM_TYPE_UDP:
udp = item->spec;
- rte_memcpy(&parser->key[4], &udp->hdr.src_port, 2);
+ memcpy(&parser->key[4], &udp->hdr.src_port, 2);
break;
case RTE_FLOW_ITEM_TYPE_NVGRE:
nvgre = item->spec;
- rte_memcpy(&parser->key[6], nvgre->tni, 3);
+ memcpy(&parser->key[6], nvgre->tni, 3);
break;
default:
@@ -576,19 +574,19 @@ ipn3ke_pattern_vxlan_ip_udp(const struct rte_flow_item patterns[],
case RTE_FLOW_ITEM_TYPE_VXLAN:
vxlan = item->spec;
- rte_memcpy(&parser->key[0], vxlan->hdr.vni, 3);
+ memcpy(&parser->key[0], vxlan->hdr.vni, 3);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
ipv4 = item->spec;
- rte_memcpy(&parser->key[3], &ipv4->hdr.src_addr, 4);
+ memcpy(&parser->key[3], &ipv4->hdr.src_addr, 4);
break;
case RTE_FLOW_ITEM_TYPE_UDP:
udp = item->spec;
- rte_memcpy(&parser->key[7], &udp->hdr.src_port, 2);
+ memcpy(&parser->key[7], &udp->hdr.src_port, 2);
break;
default:
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 8145f1bb2a..99527d1879 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -137,17 +137,17 @@ ipn3ke_rpst_dev_start(struct rte_eth_dev *dev)
if (hw->retimer.mac_type == IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) {
/* Set mac address */
- rte_memcpy(((char *)(&val)),
- (char *)&dev->data->mac_addrs->addr_bytes[0],
- sizeof(uint32_t));
+ memcpy(((char *)(&val)),
+ (char *)&dev->data->mac_addrs->addr_bytes[0],
+ sizeof(uint32_t));
(*hw->f_mac_write)(hw,
val,
IPN3KE_MAC_PRIMARY_MAC_ADDR0,
rpst->port_id,
0);
- rte_memcpy(((char *)(&val)),
- (char *)&dev->data->mac_addrs->addr_bytes[4],
- sizeof(uint16_t));
+ memcpy(((char *)(&val)),
+ (char *)&dev->data->mac_addrs->addr_bytes[4],
+ sizeof(uint16_t));
(*hw->f_mac_write)(hw,
val,
IPN3KE_MAC_PRIMARY_MAC_ADDR1,
@@ -2753,13 +2753,13 @@ ipn3ke_rpst_mac_addr_set(struct rte_eth_dev *ethdev,
rte_ether_addr_copy(&mac_addr[0], &rpst->mac_addr);
/* Set mac address */
- rte_memcpy(((char *)(&val)), &mac_addr[0], sizeof(uint32_t));
+ memcpy(((char *)(&val)), &mac_addr[0], sizeof(uint32_t));
(*hw->f_mac_write)(hw,
val,
IPN3KE_MAC_PRIMARY_MAC_ADDR0,
rpst->port_id,
0);
- rte_memcpy(((char *)(&val)), &mac_addr[4], sizeof(uint16_t));
+ memcpy(((char *)(&val)), &mac_addr[4], sizeof(uint16_t));
(*hw->f_mac_write)(hw,
val,
IPN3KE_MAC_PRIMARY_MAC_ADDR0,
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 0260227900..b7097083d2 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -814,7 +814,7 @@ ipn3ke_tm_shaper_profile_add(struct rte_eth_dev *dev,
rte_strerror(EINVAL));
} else {
sp->valid = 1;
- rte_memcpy(&sp->params, profile, sizeof(sp->params));
+ memcpy(&sp->params, profile, sizeof(sp->params));
}
tm->h.n_shaper_profiles++;
@@ -960,7 +960,7 @@ ipn3ke_tm_tdrop_profile_add(struct rte_eth_dev *dev,
IPN3KE_TDROP_TH2_MASK);
tp->th1 = th1;
tp->th2 = th2;
- rte_memcpy(&tp->params, profile, sizeof(tp->params));
+ memcpy(&tp->params, profile, sizeof(tp->params));
/* Add to list */
tm->h.n_tdrop_profiles++;
@@ -1308,7 +1308,7 @@ ipn3ke_tm_node_add(struct rte_eth_dev *dev,
n->tdrop_profile = ipn3ke_hw_tm_tdrop_profile_search(hw,
params->leaf.wred.wred_profile_id);
- rte_memcpy(&n->params, params, sizeof(n->params));
+ memcpy(&n->params, params, sizeof(n->params));
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index c61c52b296..68f46d443a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6597,9 +6597,8 @@ ixgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
sizeof(struct ixgbe_5tuple_filter), 0);
if (filter == NULL)
return -ENOMEM;
- rte_memcpy(&filter->filter_info,
- &filter_5tuple,
- sizeof(struct ixgbe_5tuple_filter_info));
+ memcpy(&filter->filter_info, &filter_5tuple,
+ sizeof(struct ixgbe_5tuple_filter_info));
filter->queue = ntuple_filter->queue;
ret = ixgbe_add_5tuple_filter(dev, filter);
if (ret < 0) {
@@ -7596,9 +7595,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
if (!node)
return -ENOMEM;
- rte_memcpy(&node->key,
- &key,
- sizeof(struct ixgbe_l2_tn_key));
+ memcpy(&node->key, &key, sizeof(struct ixgbe_l2_tn_key));
node->pool = l2_tunnel->pool;
ret = ixgbe_insert_l2_tn_filter(l2_tn_info, node);
if (ret < 0) {
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 06d6e2126d..b168ab8278 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -74,7 +74,7 @@
else \
ipv6_addr[i] = 0; \
} \
- rte_memcpy((ipaddr), ipv6_addr, sizeof(ipv6_addr));\
+ memcpy((ipaddr), ipv6_addr, sizeof(ipv6_addr));\
} while (0)
#define IXGBE_FDIRIP6M_INNER_MAC_SHIFT 4
@@ -1217,9 +1217,8 @@ ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
0);
if (!node)
return -ENOMEM;
- rte_memcpy(&node->ixgbe_fdir,
- &rule->ixgbe_fdir,
- sizeof(union ixgbe_atr_input));
+ memcpy(&node->ixgbe_fdir, &rule->ixgbe_fdir,
+ sizeof(union ixgbe_atr_input));
node->fdirflags = fdircmd_flags;
node->fdirhash = fdirhash;
node->queue = queue;
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 687341c6b8..8a13f47f2b 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1944,10 +1944,10 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
if (item->spec) {
rule->b_spec = TRUE;
ipv6_spec = item->spec;
- rte_memcpy(rule->ixgbe_fdir.formatted.src_ip,
- ipv6_spec->hdr.src_addr, 16);
- rte_memcpy(rule->ixgbe_fdir.formatted.dst_ip,
- ipv6_spec->hdr.dst_addr, 16);
+ memcpy(rule->ixgbe_fdir.formatted.src_ip,
+ ipv6_spec->hdr.src_addr, 16);
+ memcpy(rule->ixgbe_fdir.formatted.dst_ip,
+ ipv6_spec->hdr.dst_addr, 16);
}
/**
@@ -3070,9 +3070,9 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&ntuple_filter_ptr->filter_info,
- &ntuple_filter,
- sizeof(struct rte_eth_ntuple_filter));
+ memcpy(&ntuple_filter_ptr->filter_info,
+ &ntuple_filter,
+ sizeof(struct rte_eth_ntuple_filter));
TAILQ_INSERT_TAIL(&filter_ntuple_list,
ntuple_filter_ptr, entries);
flow->rule = ntuple_filter_ptr;
@@ -3096,9 +3096,9 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&ethertype_filter_ptr->filter_info,
- &ethertype_filter,
- sizeof(struct rte_eth_ethertype_filter));
+ memcpy(&ethertype_filter_ptr->filter_info,
+ &ethertype_filter,
+ sizeof(struct rte_eth_ethertype_filter));
TAILQ_INSERT_TAIL(&filter_ethertype_list,
ethertype_filter_ptr, entries);
flow->rule = ethertype_filter_ptr;
@@ -3120,9 +3120,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&syn_filter_ptr->filter_info,
- &syn_filter,
- sizeof(struct rte_eth_syn_filter));
+ memcpy(&syn_filter_ptr->filter_info, &syn_filter,
+ sizeof(struct rte_eth_syn_filter));
TAILQ_INSERT_TAIL(&filter_syn_list,
syn_filter_ptr,
entries);
@@ -3141,9 +3140,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
if (fdir_rule.b_mask) {
if (!fdir_info->mask_added) {
/* It's the first time the mask is set. */
- rte_memcpy(&fdir_info->mask,
- &fdir_rule.mask,
- sizeof(struct ixgbe_hw_fdir_mask));
+ memcpy(&fdir_info->mask, &fdir_rule.mask,
+ sizeof(struct ixgbe_hw_fdir_mask));
if (fdir_rule.mask.flex_bytes_mask) {
ret = ixgbe_fdir_set_flexbytes_offset(dev,
@@ -3185,9 +3183,9 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&fdir_rule_ptr->filter_info,
- &fdir_rule,
- sizeof(struct ixgbe_fdir_rule));
+ memcpy(&fdir_rule_ptr->filter_info,
+ &fdir_rule,
+ sizeof(struct ixgbe_fdir_rule));
TAILQ_INSERT_TAIL(&filter_fdir_list,
fdir_rule_ptr, entries);
flow->rule = fdir_rule_ptr;
@@ -3222,9 +3220,8 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&l2_tn_filter_ptr->filter_info,
- &l2_tn_filter,
- sizeof(struct ixgbe_l2_tunnel_conf));
+ memcpy(&l2_tn_filter_ptr->filter_info, &l2_tn_filter,
+ sizeof(struct ixgbe_l2_tunnel_conf));
TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
l2_tn_filter_ptr, entries);
flow->rule = l2_tn_filter_ptr;
@@ -3351,9 +3348,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_NTUPLE:
ntuple_filter_ptr = (struct ixgbe_ntuple_filter_ele *)
pmd_flow->rule;
- rte_memcpy(&ntuple_filter,
- &ntuple_filter_ptr->filter_info,
- sizeof(struct rte_eth_ntuple_filter));
+ memcpy(&ntuple_filter, &ntuple_filter_ptr->filter_info,
+ sizeof(struct rte_eth_ntuple_filter));
ret = ixgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
if (!ret) {
TAILQ_REMOVE(&filter_ntuple_list,
@@ -3364,9 +3360,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_ETHERTYPE:
ethertype_filter_ptr = (struct ixgbe_ethertype_filter_ele *)
pmd_flow->rule;
- rte_memcpy(&ethertype_filter,
- &ethertype_filter_ptr->filter_info,
- sizeof(struct rte_eth_ethertype_filter));
+ memcpy(&ethertype_filter, &ethertype_filter_ptr->filter_info,
+ sizeof(struct rte_eth_ethertype_filter));
ret = ixgbe_add_del_ethertype_filter(dev,
&ethertype_filter, FALSE);
if (!ret) {
@@ -3378,9 +3373,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_SYN:
syn_filter_ptr = (struct ixgbe_eth_syn_filter_ele *)
pmd_flow->rule;
- rte_memcpy(&syn_filter,
- &syn_filter_ptr->filter_info,
- sizeof(struct rte_eth_syn_filter));
+ memcpy(&syn_filter, &syn_filter_ptr->filter_info,
+ sizeof(struct rte_eth_syn_filter));
ret = ixgbe_syn_filter_set(dev, &syn_filter, FALSE);
if (!ret) {
TAILQ_REMOVE(&filter_syn_list,
@@ -3390,9 +3384,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_FDIR:
fdir_rule_ptr = (struct ixgbe_fdir_rule_ele *)pmd_flow->rule;
- rte_memcpy(&fdir_rule,
- &fdir_rule_ptr->filter_info,
- sizeof(struct ixgbe_fdir_rule));
+ memcpy(&fdir_rule, &fdir_rule_ptr->filter_info,
+ sizeof(struct ixgbe_fdir_rule));
ret = ixgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
if (!ret) {
TAILQ_REMOVE(&filter_fdir_list,
@@ -3405,8 +3398,8 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_L2_TUNNEL:
l2_tn_filter_ptr = (struct ixgbe_eth_l2_tunnel_conf_ele *)
pmd_flow->rule;
- rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
- sizeof(struct ixgbe_l2_tunnel_conf));
+ memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
+ sizeof(struct ixgbe_l2_tunnel_conf));
ret = ixgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
if (!ret) {
TAILQ_REMOVE(&filter_l2_tunnel_list,
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index d331308556..d8ed095dce 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -680,10 +680,10 @@ ixgbe_crypto_add_ingress_sa_from_flow(const void *sess,
const struct rte_flow_item_ipv6 *ipv6 = ip_spec;
ic_session->src_ip.type = IPv6;
ic_session->dst_ip.type = IPv6;
- rte_memcpy(ic_session->src_ip.ipv6,
- ipv6->hdr.src_addr, 16);
- rte_memcpy(ic_session->dst_ip.ipv6,
- ipv6->hdr.dst_addr, 16);
+ memcpy(ic_session->src_ip.ipv6, ipv6->hdr.src_addr,
+ 16);
+ memcpy(ic_session->dst_ip.ipv6, ipv6->hdr.dst_addr,
+ 16);
} else {
const struct rte_flow_item_ipv4 *ipv4 = ip_spec;
ic_session->src_ip.type = IPv4;
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 0a0f639e39..f16bd45dbf 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -16,7 +16,6 @@
#include <rte_eal.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
-#include <rte_memcpy.h>
#include <rte_malloc.h>
#include <rte_random.h>
@@ -450,7 +449,7 @@ ixgbe_vf_reset(struct rte_eth_dev *dev, uint16_t vf, uint32_t *msgbuf)
/* reply to reset with ack and vf mac address */
msgbuf[0] = IXGBE_VF_RESET | IXGBE_VT_MSGTYPE_ACK;
- rte_memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
+ memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
/*
* Piggyback the multicast filter type so VF can compute the
* correct vectors
@@ -472,7 +471,7 @@ ixgbe_vf_set_mac_addr(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (rte_is_valid_assigned_ether_addr(
(struct rte_ether_addr *)new_mac)) {
- rte_memcpy(vfinfo[vf].vf_mac_addresses, new_mac, 6);
+ memcpy(vfinfo[vf].vf_mac_addresses, new_mac, 6);
return hw->mac.ops.set_rar(hw, rar_entry, new_mac, vf, IXGBE_RAH_AV);
}
return -1;
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ac8976062f..00d9de4393 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -289,8 +289,8 @@ ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
if (!shaper_profile)
return -ENOMEM;
shaper_profile->shaper_profile_id = shaper_profile_id;
- rte_memcpy(&shaper_profile->profile, profile,
- sizeof(struct rte_tm_shaper_params));
+ memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
TAILQ_INSERT_TAIL(&tm_conf->shaper_profile_list,
shaper_profile, node);
@@ -637,8 +637,8 @@ ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->no = 0;
tm_node->parent = NULL;
tm_node->shaper_profile = shaper_profile;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
tm_conf->root = tm_node;
/* increase the reference counter of the shaper profile */
@@ -718,8 +718,7 @@ ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
if (parent_node_type == IXGBE_TM_NODE_TYPE_PORT) {
tm_node->no = parent_node->reference_count;
TAILQ_INSERT_TAIL(&tm_conf->tc_list,
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index f76ef63921..ba700fe023 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -37,8 +37,8 @@ rte_pmd_ixgbe_set_vf_mac_addr(uint16_t port, uint16_t vf,
if (rte_is_valid_assigned_ether_addr(
(struct rte_ether_addr *)new_mac)) {
- rte_memcpy(vfinfo[vf].vf_mac_addresses, new_mac,
- RTE_ETHER_ADDR_LEN);
+ memcpy(vfinfo[vf].vf_mac_addresses, new_mac,
+ RTE_ETHER_ADDR_LEN);
return hw->mac.ops.set_rar(hw, rar_entry, new_mac, vf,
IXGBE_RAH_AV);
}
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 649f8d0e61..f5a3354c46 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -48,7 +48,7 @@ memif_msg_send(int fd, memif_msg_t *msg, int afd)
cmsg->cmsg_len = CMSG_LEN(sizeof(int));
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
- rte_memcpy(CMSG_DATA(cmsg), &afd, sizeof(int));
+ memcpy(CMSG_DATA(cmsg), &afd, sizeof(int));
}
return sendmsg(fd, &mh, 0);
@@ -675,7 +675,7 @@ memif_msg_receive(struct memif_control_channel *cc)
if (cmsg->cmsg_type == SCM_CREDENTIALS)
cr = (struct ucred *)CMSG_DATA(cmsg);
else if (cmsg->cmsg_type == SCM_RIGHTS)
- rte_memcpy(&afd, CMSG_DATA(cmsg), sizeof(int));
+ memcpy(&afd, CMSG_DATA(cmsg), sizeof(int));
}
cmsg = CMSG_NXTHDR(&mh, cmsg);
}
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 9fa400fc48..6380a5c83c 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -512,7 +512,7 @@ mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq)
* during queue setup.
*/
MLX5_ASSERT(hca_attr->hairpin_data_buffer_locked);
- rte_memcpy(&locked_attr, &unlocked_attr, sizeof(locked_attr));
+ memcpy(&locked_attr, &unlocked_attr, sizeof(locked_attr));
locked_attr.hairpin_data_buffer_type =
MLX5_RQC_HAIRPIN_DATA_BUFFER_TYPE_LOCKED_INTERNAL_BUFFER;
tmpl->rq = mlx5_devx_cmd_create_rq(priv->sh->cdev->ctx, &locked_attr,
@@ -1289,7 +1289,7 @@ mlx5_txq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx)
*/
MLX5_ASSERT(hca_attr->hairpin_sq_wq_in_host_mem);
MLX5_ASSERT(hca_attr->hairpin_sq_wqe_bb_size > 0);
- rte_memcpy(&host_mem_attr, &dev_mem_attr, sizeof(host_mem_attr));
+ memcpy(&host_mem_attr, &dev_mem_attr, sizeof(host_mem_attr));
umem_size = MLX5_WQE_SIZE *
RTE_BIT32(host_mem_attr.wq_attr.log_hairpin_num_packets);
umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f31fdfbf3d..dba8afbb28 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -4516,8 +4516,8 @@ flow_action_handles_translate(struct rte_eth_dev *dev,
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
NULL, "too many shared actions");
}
- rte_memcpy(&handle[copied_n].action, &actions[n].conf,
- sizeof(actions[n].conf));
+ memcpy(&handle[copied_n].action, &actions[n].conf,
+ sizeof(actions[n].conf));
handle[copied_n].index = n;
copied_n++;
}
@@ -5383,30 +5383,30 @@ flow_hairpin_split(struct rte_eth_dev *dev,
case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
- rte_memcpy(actions_tx, actions,
+ memcpy(actions_tx, actions,
sizeof(struct rte_flow_action));
actions_tx++;
break;
case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
if (push_vlan) {
- rte_memcpy(actions_tx, actions,
- sizeof(struct rte_flow_action));
+ memcpy(actions_tx, actions,
+ sizeof(struct rte_flow_action));
actions_tx++;
} else {
- rte_memcpy(actions_rx, actions,
- sizeof(struct rte_flow_action));
+ memcpy(actions_rx, actions,
+ sizeof(struct rte_flow_action));
actions_rx++;
}
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
case RTE_FLOW_ACTION_TYPE_AGE:
if (encap) {
- rte_memcpy(actions_tx, actions,
- sizeof(struct rte_flow_action));
+ memcpy(actions_tx, actions,
+ sizeof(struct rte_flow_action));
actions_tx++;
} else {
- rte_memcpy(actions_rx, actions,
- sizeof(struct rte_flow_action));
+ memcpy(actions_rx, actions,
+ sizeof(struct rte_flow_action));
actions_rx++;
}
break;
@@ -5418,8 +5418,8 @@ flow_hairpin_split(struct rte_eth_dev *dev,
actions_tx++;
encap = 1;
} else {
- rte_memcpy(actions_rx, actions,
- sizeof(struct rte_flow_action));
+ memcpy(actions_rx, actions,
+ sizeof(struct rte_flow_action));
actions_rx++;
}
break;
@@ -5430,14 +5430,14 @@ flow_hairpin_split(struct rte_eth_dev *dev,
sizeof(struct rte_flow_action));
actions_tx++;
} else {
- rte_memcpy(actions_rx, actions,
- sizeof(struct rte_flow_action));
+ memcpy(actions_rx, actions,
+ sizeof(struct rte_flow_action));
actions_rx++;
}
break;
default:
- rte_memcpy(actions_rx, actions,
- sizeof(struct rte_flow_action));
+ memcpy(actions_rx, actions,
+ sizeof(struct rte_flow_action));
actions_rx++;
break;
}
@@ -5447,7 +5447,7 @@ flow_hairpin_split(struct rte_eth_dev *dev,
tag_action->type = (enum rte_flow_action_type)
MLX5_RTE_FLOW_ACTION_TYPE_TAG;
actions_rx++;
- rte_memcpy(actions_rx, actions, sizeof(struct rte_flow_action));
+ memcpy(actions_rx, actions, sizeof(struct rte_flow_action));
actions_rx++;
set_tag = (void *)actions_rx;
*set_tag = (struct mlx5_rte_flow_action_set_tag) {
@@ -5457,7 +5457,7 @@ flow_hairpin_split(struct rte_eth_dev *dev,
MLX5_ASSERT(set_tag->id > REG_NON);
tag_action->conf = set_tag;
/* Create Tx item list. */
- rte_memcpy(actions_tx, actions, sizeof(struct rte_flow_action));
+ memcpy(actions_tx, actions, sizeof(struct rte_flow_action));
addr = (void *)&pattern_tx[2];
item = pattern_tx;
item->type = (enum rte_flow_item_type)
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index ab9eb21e01..0a44599737 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -1377,9 +1377,9 @@ mlx5_aso_ct_status_update(struct mlx5_aso_sq *sq, uint16_t num)
MLX5_ASSERT(ct);
MLX5_ASO_CT_UPDATE_STATE(ct, ASO_CONNTRACK_READY);
if (sq->elts[idx].query_data)
- rte_memcpy(sq->elts[idx].query_data,
- (char *)((uintptr_t)sq->mr.addr + idx * 64),
- 64);
+ memcpy(sq->elts[idx].query_data,
+ (char *)((uintptr_t)sq->mr.addr + idx * 64),
+ 64);
}
}
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9ebbe664d1..9f0f8d0907 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2072,7 +2072,7 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
return rte_flow_error_set(error, ENOMEM,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "translate modify_header: no memory for modify header context");
- rte_memcpy(acts->mhdr, mhdr, sizeof(*mhdr));
+ memcpy(acts->mhdr, mhdr, sizeof(*mhdr));
pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
if (mhdr->shared) {
uint32_t flags = mlx5_hw_act_flag[!!attr->group][tbl_type] |
@@ -2669,8 +2669,8 @@ flow_hw_populate_rule_acts_caches(struct rte_eth_dev *dev,
struct mlx5dr_rule_action *rule_acts =
flow_hw_get_dr_action_buffer(priv, table, at_idx, q);
- rte_memcpy(rule_acts, table->ats[at_idx].acts.rule_acts,
- sizeof(table->ats[at_idx].acts.rule_acts));
+ memcpy(rule_acts, table->ats[at_idx].acts.rule_acts,
+ sizeof(table->ats[at_idx].acts.rule_acts));
}
}
@@ -2972,9 +2972,9 @@ flow_hw_modify_field_construct(struct mlx5_modification_cmd *mhdr_cmd,
mhdr_action->src.field != RTE_FLOW_FIELD_POINTER)
return 0;
if (mhdr_action->src.field == RTE_FLOW_FIELD_VALUE)
- rte_memcpy(values, &mhdr_action->src.value, sizeof(values));
+ memcpy(values, &mhdr_action->src.value, sizeof(values));
else
- rte_memcpy(values, mhdr_action->src.pvalue, sizeof(values));
+ memcpy(values, mhdr_action->src.pvalue, sizeof(values));
if (mhdr_action->dst.field == RTE_FLOW_FIELD_META ||
mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
mhdr_action->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
@@ -4825,7 +4825,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
rte_flow_error_set(error, err, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"Failed to create template table");
else
- rte_memcpy(error, &sub_error, sizeof(sub_error));
+ memcpy(error, &sub_error, sizeof(sub_error));
}
return NULL;
}
@@ -6917,8 +6917,9 @@ flow_hw_set_vlan_vid(struct rte_eth_dev *dev,
if (masked) {
uint32_t mask_val = 0xffffffff;
- rte_memcpy(spec->src.value, &conf->vlan_vid, sizeof(conf->vlan_vid));
- rte_memcpy(mask->src.value, &mask_val, sizeof(mask_val));
+ memcpy(spec->src.value, &conf->vlan_vid,
+ sizeof(conf->vlan_vid));
+ memcpy(mask->src.value, &mask_val, sizeof(mask_val));
}
ra[set_vlan_vid_ix].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD;
ra[set_vlan_vid_ix].conf = spec;
@@ -6954,7 +6955,7 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
.conf = &conf
};
- rte_memcpy(conf.src.value, &vid, sizeof(vid));
+ memcpy(conf.src.value, &vid, sizeof(vid));
return flow_hw_modify_field_construct(mhdr_cmd, act_data, hw_acts, &modify_action);
}
@@ -8577,8 +8578,8 @@ flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev,
struct rte_flow_action actions_m[4] = { { 0 } };
unsigned int idx = 0;
- rte_memcpy(set_tag_v.src.value, &tag_value, sizeof(tag_value));
- rte_memcpy(set_tag_m.src.value, &tag_mask, sizeof(tag_mask));
+ memcpy(set_tag_v.src.value, &tag_value, sizeof(tag_value));
+ memcpy(set_tag_m.src.value, &tag_mask, sizeof(tag_mask));
flow_hw_update_action_mask(&actions_v[idx], &actions_m[idx],
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
&set_tag_v, &set_tag_m);
@@ -8985,8 +8986,8 @@ flow_hw_create_ctrl_regc_jump_actions_template(struct rte_eth_dev *dev,
};
set_reg_v.dst.offset = rte_bsf32(marker_mask);
- rte_memcpy(set_reg_v.src.value, &marker_bits, sizeof(marker_bits));
- rte_memcpy(set_reg_m.src.value, &marker_mask, sizeof(marker_mask));
+ memcpy(set_reg_v.src.value, &marker_bits, sizeof(marker_bits));
+ memcpy(set_reg_m.src.value, &marker_mask, sizeof(marker_mask));
return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m, error);
}
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index cc087348a4..afd3194553 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -767,9 +767,9 @@ mlx5_rx_poll_len(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cqe,
ret = check_cqe_iteration(next, rxq->cqe_n, rxq->cq_ci);
if (ret != MLX5_CQE_STATUS_SW_OWN ||
MLX5_CQE_FORMAT(next->op_own) == MLX5_COMPRESSED)
- rte_memcpy(&rxq->title_cqe,
- (const void *)(uintptr_t)cqe,
- sizeof(struct mlx5_cqe));
+ memcpy(&rxq->title_cqe,
+ (const void *)(uintptr_t)cqe,
+ sizeof(struct mlx5_cqe));
}
}
}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 2363d7ed27..c3bcd3ef16 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -349,8 +349,8 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
ret = check_cqe_iteration(next, rxq->cqe_n, rxq->cq_ci);
if (ret != MLX5_CQE_STATUS_SW_OWN ||
MLX5_CQE_FORMAT(next->op_own) == MLX5_COMPRESSED)
- rte_memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
- sizeof(struct rte_mbuf));
+ memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
+ sizeof(struct rte_mbuf));
}
/* Decompress the last CQE if compressed. */
if (comp_idx < MLX5_VPMD_DESCS_PER_LOOP) {
@@ -499,8 +499,8 @@ rxq_burst_mprq_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
ret = check_cqe_iteration(next, rxq->cqe_n, rxq->cq_ci);
if (ret != MLX5_CQE_STATUS_SW_OWN ||
MLX5_CQE_FORMAT(next->op_own) == MLX5_COMPRESSED)
- rte_memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
- sizeof(struct rte_mbuf));
+ memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
+ sizeof(struct rte_mbuf));
}
/* Decompress the last CQE if compressed. */
if (comp_idx < MLX5_VPMD_DESCS_PER_LOOP) {
diff --git a/drivers/net/mvpp2/mrvl_tm.c b/drivers/net/mvpp2/mrvl_tm.c
index 9fac80b867..a5cdae6d1d 100644
--- a/drivers/net/mvpp2/mrvl_tm.c
+++ b/drivers/net/mvpp2/mrvl_tm.c
@@ -437,7 +437,7 @@ mrvl_shaper_profile_add(struct rte_eth_dev *dev, uint32_t shaper_profile_id,
NULL, NULL);
profile->id = shaper_profile_id;
- rte_memcpy(&profile->params, params, sizeof(profile->params));
+ memcpy(&profile->params, params, sizeof(profile->params));
LIST_INSERT_HEAD(&priv->shaper_profiles, profile, next);
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index b8a32832d7..b1be12c2d5 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -16,7 +16,6 @@
#include <sys/ioctl.h>
#include <rte_ethdev.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include <rte_memzone.h>
#include <rte_devargs.h>
diff --git a/drivers/net/nfp/flower/nfp_conntrack.c b/drivers/net/nfp/flower/nfp_conntrack.c
index f89003be8b..279bf17eb3 100644
--- a/drivers/net/nfp/flower/nfp_conntrack.c
+++ b/drivers/net/nfp/flower/nfp_conntrack.c
@@ -1470,7 +1470,7 @@ nfp_ct_do_flow_merge(struct nfp_ct_zone_entry *ze,
merge_entry->ze = ze;
merge_entry->pre_ct_parent = pre_ct_entry;
merge_entry->post_ct_parent = post_ct_entry;
- rte_memcpy(merge_entry->cookie, new_cookie, sizeof(new_cookie));
+ memcpy(merge_entry->cookie, new_cookie, sizeof(new_cookie));
merge_entry->rule.items_cnt = pre_ct_entry->rule.items_cnt +
post_ct_entry->rule.items_cnt - cnt_same_item - 1;
merge_entry->rule.actions_cnt = pre_ct_entry->rule.actions_cnt +
diff --git a/drivers/net/nfp/flower/nfp_flower_flow.c b/drivers/net/nfp/flower/nfp_flower_flow.c
index 086cc8079a..4a6c587b6b 100644
--- a/drivers/net/nfp/flower/nfp_flower_flow.c
+++ b/drivers/net/nfp/flower/nfp_flower_flow.c
@@ -179,10 +179,10 @@ nfp_mask_id_alloc(struct nfp_flow_priv *priv,
return -ENOENT;
}
- rte_memcpy(&temp_id, &ring->buf[ring->tail], NFP_FLOWER_MASK_ELEMENT_RS);
+ memcpy(&temp_id, &ring->buf[ring->tail], NFP_FLOWER_MASK_ELEMENT_RS);
*mask_id = temp_id;
- rte_memcpy(&ring->buf[ring->tail], &freed_id, NFP_FLOWER_MASK_ELEMENT_RS);
+ memcpy(&ring->buf[ring->tail], &freed_id, NFP_FLOWER_MASK_ELEMENT_RS);
ring->tail = (ring->tail + NFP_FLOWER_MASK_ELEMENT_RS) %
(NFP_FLOWER_MASK_ENTRY_RS * NFP_FLOWER_MASK_ELEMENT_RS);
@@ -201,7 +201,7 @@ nfp_mask_id_free(struct nfp_flow_priv *priv,
if (CIRC_SPACE(ring->head, ring->tail, NFP_FLOWER_MASK_ENTRY_RS) == 0)
return -ENOBUFS;
- rte_memcpy(&ring->buf[ring->head], &mask_id, NFP_FLOWER_MASK_ELEMENT_RS);
+ memcpy(&ring->buf[ring->head], &mask_id, NFP_FLOWER_MASK_ELEMENT_RS);
ring->head = (ring->head + NFP_FLOWER_MASK_ELEMENT_RS) %
(NFP_FLOWER_MASK_ENTRY_RS * NFP_FLOWER_MASK_ELEMENT_RS);
@@ -2255,13 +2255,13 @@ nfp_flow_action_set_mac(char *act_data,
set_mac = action->conf;
if (mac_src_flag) {
- rte_memcpy(&set_eth->eth_addr[RTE_ETHER_ADDR_LEN],
- set_mac->mac_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(&set_eth->eth_addr[RTE_ETHER_ADDR_LEN],
+ set_mac->mac_addr, RTE_ETHER_ADDR_LEN);
for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
set_eth->eth_addr_mask[RTE_ETHER_ADDR_LEN + i] = 0xff;
} else {
- rte_memcpy(&set_eth->eth_addr[0],
- set_mac->mac_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(&set_eth->eth_addr[0], set_mac->mac_addr,
+ RTE_ETHER_ADDR_LEN);
for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
set_eth->eth_addr_mask[i] = 0xff;
}
@@ -2337,7 +2337,7 @@ nfp_flow_action_set_ipv6(char *act_data,
set_ip->reserved = 0;
for (i = 0; i < 4; i++) {
- rte_memcpy(&tmp, &set_ipv6->ipv6_addr[i * 4], 4);
+ memcpy(&tmp, &set_ipv6->ipv6_addr[i * 4], 4);
set_ip->ipv6[i].exact = tmp;
set_ip->ipv6[i].mask = RTE_BE32(0xffffffff);
}
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index b2c55879ca..007e202e26 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -194,7 +194,7 @@ nfp_flower_repr_stats_get(struct rte_eth_dev *ethdev,
struct nfp_flower_representor *repr;
repr = ethdev->data->dev_private;
- rte_memcpy(stats, &repr->repr_stats, sizeof(struct rte_eth_stats));
+ memcpy(stats, &repr->repr_stats, sizeof(struct rte_eth_stats));
return 0;
}
diff --git a/drivers/net/nfp/nfp_mtr.c b/drivers/net/nfp/nfp_mtr.c
index 6abc6dc9bc..d69139d861 100644
--- a/drivers/net/nfp/nfp_mtr.c
+++ b/drivers/net/nfp/nfp_mtr.c
@@ -243,7 +243,7 @@ nfp_mtr_profile_mod(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_profile_conf old_conf;
/* Get the old profile config */
- rte_memcpy(&old_conf, &mtr_profile->conf, sizeof(old_conf));
+ memcpy(&old_conf, &mtr_profile->conf, sizeof(old_conf));
memset(&mtr_profile->conf, 0, sizeof(struct nfp_profile_conf));
@@ -267,7 +267,7 @@ nfp_mtr_profile_mod(struct nfp_app_fw_flower *app_fw_flower,
return 0;
rollback:
- rte_memcpy(&mtr_profile->conf, &old_conf, sizeof(old_conf));
+ memcpy(&mtr_profile->conf, &old_conf, sizeof(old_conf));
return ret;
}
@@ -492,8 +492,8 @@ nfp_mtr_policy_add(struct rte_eth_dev *dev,
}
mtr_policy->policy_id = mtr_policy_id;
- rte_memcpy(&mtr_policy->policy, policy,
- sizeof(struct rte_mtr_meter_policy_params));
+ memcpy(&mtr_policy->policy, policy,
+ sizeof(struct rte_mtr_meter_policy_params));
/* Insert policy into policy list */
LIST_INSERT_HEAD(&priv->policies, mtr_policy, next);
@@ -1028,7 +1028,7 @@ nfp_mtr_stats_read(struct rte_eth_dev *dev,
*stats_mask = mtr->stats_mask;
rte_spinlock_lock(&priv->mtr_stats_lock);
- rte_memcpy(&curr, &mtr->mtr_stats.curr, sizeof(curr));
+ memcpy(&curr, &mtr->mtr_stats.curr, sizeof(curr));
rte_spinlock_unlock(&priv->mtr_stats_lock);
prev = &mtr->mtr_stats.prev;
diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c
index 947ae7fe94..48f578b066 100644
--- a/drivers/net/ngbe/ngbe_pf.c
+++ b/drivers/net/ngbe/ngbe_pf.c
@@ -347,7 +347,7 @@ ngbe_vf_reset(struct rte_eth_dev *eth_dev, uint16_t vf, uint32_t *msgbuf)
/* reply to reset with ack and vf mac address */
msgbuf[0] = NGBE_VF_RESET | NGBE_VT_MSGTYPE_ACK;
- rte_memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
+ memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
/*
* Piggyback the multicast filter type so VF can compute the
* correct vectors
@@ -369,7 +369,7 @@ ngbe_vf_set_mac_addr(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *ea = (struct rte_ether_addr *)new_mac;
if (rte_is_valid_assigned_ether_addr(ea)) {
- rte_memcpy(vfinfo[vf].vf_mac_addresses, new_mac, 6);
+ memcpy(vfinfo[vf].vf_mac_addresses, new_mac, 6);
return hw->mac.set_rar(hw, rar_entry, new_mac, vf, true);
}
return -1;
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 7c46004f1e..fd8b01b2b1 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -465,7 +465,7 @@ eth_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf)
rss_conf->rss_hf & internal->flow_type_rss_offloads;
if (rss_conf->rss_key)
- rte_memcpy(internal->rss_key, rss_conf->rss_key, 40);
+ memcpy(internal->rss_key, rss_conf->rss_key, 40);
rte_spinlock_unlock(&internal->rss_lock);
@@ -482,7 +482,7 @@ eth_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
if (rss_conf->rss_key)
- rte_memcpy(rss_conf->rss_key, internal->rss_key, 40);
+ memcpy(rss_conf->rss_key, internal->rss_key, 40);
rte_spinlock_unlock(&internal->rss_lock);
@@ -577,7 +577,7 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
- rte_memcpy(internals->rss_key, default_rss_key, 40);
+ memcpy(internals->rss_key, default_rss_key, 40);
data = eth_dev->data;
data->nb_rx_queues = (uint16_t)nb_rx_queues;
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index bfec085045..54f0dfffbd 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -1270,7 +1270,7 @@ eth_pcap_update_mac(const char *if_name, struct rte_eth_dev *eth_dev,
return -1;
PMD_LOG(INFO, "Setting phy MAC for %s", if_name);
- rte_memcpy(mac_addrs, mac.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(mac_addrs, mac.addr_bytes, RTE_ETHER_ADDR_LEN);
eth_dev->data->mac_addrs = mac_addrs;
return 0;
}
diff --git a/drivers/net/pcap/pcap_osdep_freebsd.c b/drivers/net/pcap/pcap_osdep_freebsd.c
index 20556b3e92..8811485aef 100644
--- a/drivers/net/pcap/pcap_osdep_freebsd.c
+++ b/drivers/net/pcap/pcap_osdep_freebsd.c
@@ -9,7 +9,6 @@
#include <sys/sysctl.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include "pcap_osdep.h"
@@ -52,7 +51,7 @@ osdep_iface_mac_get(const char *if_name, struct rte_ether_addr *mac)
ifm = (struct if_msghdr *)buf;
sdl = (struct sockaddr_dl *)(ifm + 1);
- rte_memcpy(mac->addr_bytes, LLADDR(sdl), RTE_ETHER_ADDR_LEN);
+ memcpy(mac->addr_bytes, LLADDR(sdl), RTE_ETHER_ADDR_LEN);
rte_free(buf);
return 0;
diff --git a/drivers/net/pcap/pcap_osdep_linux.c b/drivers/net/pcap/pcap_osdep_linux.c
index 97033f57c5..943e947296 100644
--- a/drivers/net/pcap/pcap_osdep_linux.c
+++ b/drivers/net/pcap/pcap_osdep_linux.c
@@ -9,7 +9,6 @@
#include <sys/socket.h>
#include <unistd.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include "pcap_osdep.h"
@@ -35,7 +34,7 @@ osdep_iface_mac_get(const char *if_name, struct rte_ether_addr *mac)
return -1;
}
- rte_memcpy(mac->addr_bytes, ifr.ifr_hwaddr.sa_data, RTE_ETHER_ADDR_LEN);
+ memcpy(mac->addr_bytes, ifr.ifr_hwaddr.sa_data, RTE_ETHER_ADDR_LEN);
close(if_fd);
return 0;
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index fd63262f3a..32fa2016d2 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -180,7 +180,7 @@ static void qed_handle_bulletin_change(struct ecore_hwfn *hwfn)
is_mac_exist = ecore_vf_bulletin_get_forced_mac(hwfn, mac,
&is_mac_forced);
if (is_mac_exist && is_mac_forced)
- rte_memcpy(hwfn->hw_info.hw_mac_addr, mac, ETH_ALEN);
+ memcpy(hwfn->hw_info.hw_mac_addr, mac, ETH_ALEN);
/* Always update link configuration according to bulletin */
qed_link_update(hwfn);
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 48953dd7a0..8d53983965 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -8,7 +8,6 @@
#include <rte_mbuf.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_os_shim.h>
#include <rte_string_fns.h>
#include <bus_vdev_driver.h>
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 2cfff20f47..2fd160e99e 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -850,7 +850,7 @@ sfc_rss_attach(struct sfc_adapter *sa)
efx_ev_fini(sa->nic);
efx_intr_fini(sa->nic);
- rte_memcpy(rss->key, default_rss_key, sizeof(rss->key));
+ memcpy(rss->key, default_rss_key, sizeof(rss->key));
memset(&rss->dummy_ctx, 0, sizeof(rss->dummy_ctx));
rss->dummy_ctx.conf.qid_span = 1;
rss->dummy_ctx.dummy = true;
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 116229382b..0b78a9eacc 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -514,7 +514,7 @@ sfc_ef10_xmit_tso_pkt(struct sfc_ef10_txq * const txq, struct rte_mbuf *m_seg,
first_m_seg->outer_l2_len);
th = (const struct rte_tcp_hdr *)(hdr_addr + tcph_off);
- rte_memcpy(&sent_seq, &th->sent_seq, sizeof(uint32_t));
+ memcpy(&sent_seq, &th->sent_seq, sizeof(uint32_t));
sent_seq = rte_be_to_cpu_32(sent_seq);
sfc_ef10_tx_qdesc_tso2_create(txq, *added, packet_id, outer_packet_id,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 92ca5e7a60..a6f0743f10 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1287,8 +1287,7 @@ sfc_set_mc_addr_list(struct rte_eth_dev *dev,
}
for (i = 0; i < nb_mc_addr; ++i) {
- rte_memcpy(mc_addrs, mc_addr_set[i].addr_bytes,
- EFX_MAC_ADDR_LEN);
+ memcpy(mc_addrs, mc_addr_set[i].addr_bytes, EFX_MAC_ADDR_LEN);
mc_addrs += EFX_MAC_ADDR_LEN;
}
@@ -1672,7 +1671,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
rss_conf->rss_hf = sfc_rx_hf_efx_to_rte(rss, rss->hash_types);
rss_conf->rss_key_len = EFX_RSS_KEY_SIZE;
if (rss_conf->rss_key != NULL)
- rte_memcpy(rss_conf->rss_key, rss->key, EFX_RSS_KEY_SIZE);
+ memcpy(rss_conf->rss_key, rss->key, EFX_RSS_KEY_SIZE);
return 0;
}
@@ -1741,7 +1740,7 @@ sfc_dev_rss_hash_update(struct rte_eth_dev *dev,
}
}
- rte_memcpy(rss->key, rss_conf->rss_key, sizeof(rss->key));
+ memcpy(rss->key, rss_conf->rss_key, sizeof(rss->key));
}
rss->hash_types = efx_hash_types;
@@ -1840,7 +1839,7 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
sfc_adapter_lock(sa);
- rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
+ memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
for (entry = 0; entry < reta_size; entry++) {
int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
@@ -1864,7 +1863,7 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
goto fail_scale_tbl_set;
}
- rte_memcpy(rss->tbl, rss_tbl_new, sizeof(rss->tbl));
+ memcpy(rss->tbl, rss_tbl_new, sizeof(rss->tbl));
fail_scale_tbl_set:
bad_reta_entry:
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 1b50aefe5c..2bb98a4433 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -325,8 +325,8 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
efx_spec->efs_match_flags |= is_ifrm ?
EFX_FILTER_MATCH_IFRM_LOC_MAC :
EFX_FILTER_MATCH_LOC_MAC;
- rte_memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
- EFX_MAC_ADDR_LEN);
+ memcpy(loc_mac, spec->hdr.dst_addr.addr_bytes,
+ EFX_MAC_ADDR_LEN);
} else if (memcmp(mask->hdr.dst_addr.addr_bytes, ig_mask,
EFX_MAC_ADDR_LEN) == 0) {
if (rte_is_unicast_ether_addr(&spec->hdr.dst_addr))
@@ -348,8 +348,8 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
*/
if (rte_is_same_ether_addr(&mask->hdr.src_addr, &supp_mask.hdr.src_addr)) {
efx_spec->efs_match_flags |= EFX_FILTER_MATCH_REM_MAC;
- rte_memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
- EFX_MAC_ADDR_LEN);
+ memcpy(efx_spec->efs_rem_mac, spec->hdr.src_addr.addr_bytes,
+ EFX_MAC_ADDR_LEN);
} else if (!rte_is_zero_ether_addr(&mask->hdr.src_addr)) {
goto fail_bad_mask;
}
@@ -624,8 +624,8 @@ sfc_flow_parse_ipv6(const struct rte_flow_item *item,
RTE_BUILD_BUG_ON(sizeof(efx_spec->efs_rem_host) !=
sizeof(spec->hdr.src_addr));
- rte_memcpy(&efx_spec->efs_rem_host, spec->hdr.src_addr,
- sizeof(efx_spec->efs_rem_host));
+ memcpy(&efx_spec->efs_rem_host, spec->hdr.src_addr,
+ sizeof(efx_spec->efs_rem_host));
} else if (!sfc_flow_is_zero(mask->hdr.src_addr,
sizeof(mask->hdr.src_addr))) {
goto fail_bad_mask;
@@ -637,8 +637,8 @@ sfc_flow_parse_ipv6(const struct rte_flow_item *item,
RTE_BUILD_BUG_ON(sizeof(efx_spec->efs_loc_host) !=
sizeof(spec->hdr.dst_addr));
- rte_memcpy(&efx_spec->efs_loc_host, spec->hdr.dst_addr,
- sizeof(efx_spec->efs_loc_host));
+ memcpy(&efx_spec->efs_loc_host, spec->hdr.dst_addr,
+ sizeof(efx_spec->efs_loc_host));
} else if (!sfc_flow_is_zero(mask->hdr.dst_addr,
sizeof(mask->hdr.dst_addr))) {
goto fail_bad_mask;
@@ -889,8 +889,8 @@ sfc_flow_set_efx_spec_vni_or_vsid(efx_filter_spec_t *efx_spec,
if (memcmp(vni_or_vsid_mask, vni_or_vsid_full_mask,
EFX_VNI_OR_VSID_LEN) == 0) {
efx_spec->efs_match_flags |= EFX_FILTER_MATCH_VNI_OR_VSID;
- rte_memcpy(efx_spec->efs_vni_or_vsid, vni_or_vsid_val,
- EFX_VNI_OR_VSID_LEN);
+ memcpy(efx_spec->efs_vni_or_vsid, vni_or_vsid_val,
+ EFX_VNI_OR_VSID_LEN);
} else if (!sfc_flow_is_zero(vni_or_vsid_mask, EFX_VNI_OR_VSID_LEN)) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM, item,
diff --git a/drivers/net/sfc/sfc_flow_rss.c b/drivers/net/sfc/sfc_flow_rss.c
index e28c943335..a46ce1fa87 100644
--- a/drivers/net/sfc/sfc_flow_rss.c
+++ b/drivers/net/sfc/sfc_flow_rss.c
@@ -119,7 +119,7 @@ sfc_flow_rss_parse_conf(struct sfc_adapter *sa,
key = ethdev_rss->key;
}
- rte_memcpy(out->key, key, sizeof(out->key));
+ memcpy(out->key, key, sizeof(out->key));
switch (in->func) {
case RTE_ETH_HASH_FUNCTION_DEFAULT:
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 60ff6d2181..1f243e798e 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -547,7 +547,7 @@ sfc_mae_mac_addr_add(struct sfc_adapter *sa,
if (mac_addr == NULL)
return ENOMEM;
- rte_memcpy(mac_addr->addr_bytes, addr_bytes, EFX_MAC_ADDR_LEN);
+ memcpy(mac_addr->addr_bytes, addr_bytes, EFX_MAC_ADDR_LEN);
mac_addr->refcnt = 1;
mac_addr->fw_rsrc.mac_id.id = EFX_MAE_RSRC_ID_INVALID;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index a193229265..55aae9ef04 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -1526,7 +1526,7 @@ sfc_rx_process_adv_conf_rss(struct sfc_adapter *sa,
sizeof(rss->key));
return EINVAL;
}
- rte_memcpy(rss->key, conf->rss_key, sizeof(rss->key));
+ memcpy(rss->key, conf->rss_key, sizeof(rss->key));
}
rss->hash_types = efx_hash_types;
diff --git a/drivers/net/sfc/sfc_tso.c b/drivers/net/sfc/sfc_tso.c
index a0827d1c0d..5da2de3c3d 100644
--- a/drivers/net/sfc/sfc_tso.c
+++ b/drivers/net/sfc/sfc_tso.c
@@ -159,7 +159,7 @@ sfc_efx_tso_do(struct sfc_efx_txq *txq, unsigned int idx,
/* Handle TCP header */
th = (const struct rte_tcp_hdr *)(tsoh + tcph_off);
- rte_memcpy(&sent_seq, &th->sent_seq, sizeof(uint32_t));
+ memcpy(&sent_seq, &th->sent_seq, sizeof(uint32_t));
sent_seq = rte_be_to_cpu_32(sent_seq);
efx_tx_qdesc_tso2_create(txq->common, packet_id, 0, sent_seq,
diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h
index 9029ad1590..e914eae77e 100644
--- a/drivers/net/sfc/sfc_tso.h
+++ b/drivers/net/sfc/sfc_tso.h
@@ -35,7 +35,7 @@ sfc_tso_ip4_get_ipid(const uint8_t *pkt_hdrp, size_t ip_hdr_off)
uint16_t ipid;
ip_hdrp = (const struct rte_ipv4_hdr *)(pkt_hdrp + ip_hdr_off);
- rte_memcpy(&ipid, &ip_hdrp->packet_id, sizeof(ipid));
+ memcpy(&ipid, &ip_hdrp->packet_id, sizeof(ipid));
return rte_be_to_cpu_16(ipid);
}
@@ -46,9 +46,8 @@ sfc_tso_outer_udp_fix_len(const struct rte_mbuf *m, uint8_t *tsoh)
rte_be16_t len = rte_cpu_to_be_16(m->l2_len + m->l3_len + m->l4_len +
m->tso_segsz);
- rte_memcpy(tsoh + m->outer_l2_len + m->outer_l3_len +
- offsetof(struct rte_udp_hdr, dgram_len),
- &len, sizeof(len));
+ memcpy(tsoh + m->outer_l2_len + m->outer_l3_len + offsetof(struct rte_udp_hdr, dgram_len),
+ &len, sizeof(len));
}
static inline void
@@ -67,7 +66,7 @@ sfc_tso_innermost_ip_fix_len(const struct rte_mbuf *m, uint8_t *tsoh,
len = rte_cpu_to_be_16(ip_payload_len);
}
- rte_memcpy(tsoh + iph_ofst + field_ofst, &len, sizeof(len));
+ memcpy(tsoh + iph_ofst + field_ofst, &len, sizeof(len));
}
unsigned int sfc_tso_prepare_header(uint8_t *tsoh, size_t header_len,
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 69d9da695b..518619c53b 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1404,11 +1404,11 @@ tap_mac_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
mac_addr))
mode = LOCAL_AND_REMOTE;
ifr.ifr_hwaddr.sa_family = AF_LOCAL;
- rte_memcpy(ifr.ifr_hwaddr.sa_data, mac_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(ifr.ifr_hwaddr.sa_data, mac_addr, RTE_ETHER_ADDR_LEN);
ret = tap_ioctl(pmd, SIOCSIFHWADDR, &ifr, 1, mode);
if (ret < 0)
return ret;
- rte_memcpy(&pmd->eth_addr, mac_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(&pmd->eth_addr, mac_addr, RTE_ETHER_ADDR_LEN);
if (pmd->remote_if_index && !pmd->flow_isolate) {
/* Replace MAC redirection rule after a MAC change */
ret = tap_flow_implicit_destroy(pmd, TAP_REMOTE_LOCAL_MAC);
@@ -2010,7 +2010,7 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
if (rte_is_zero_ether_addr(mac_addr))
rte_eth_random_addr((uint8_t *)&pmd->eth_addr);
else
- rte_memcpy(&pmd->eth_addr, mac_addr, sizeof(*mac_addr));
+ memcpy(&pmd->eth_addr, mac_addr, sizeof(*mac_addr));
}
/*
@@ -2033,8 +2033,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
if (pmd->type == ETH_TUNTAP_TYPE_TAP) {
memset(&ifr, 0, sizeof(struct ifreq));
ifr.ifr_hwaddr.sa_family = AF_LOCAL;
- rte_memcpy(ifr.ifr_hwaddr.sa_data, &pmd->eth_addr,
- RTE_ETHER_ADDR_LEN);
+ memcpy(ifr.ifr_hwaddr.sa_data, &pmd->eth_addr,
+ RTE_ETHER_ADDR_LEN);
if (tap_ioctl(pmd, SIOCSIFHWADDR, &ifr, 0, LOCAL_ONLY) < 0)
goto error_exit;
}
@@ -2091,8 +2091,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
pmd->name, pmd->remote_iface);
goto error_remote;
}
- rte_memcpy(&pmd->eth_addr, ifr.ifr_hwaddr.sa_data,
- RTE_ETHER_ADDR_LEN);
+ memcpy(&pmd->eth_addr, ifr.ifr_hwaddr.sa_data,
+ RTE_ETHER_ADDR_LEN);
/* The desired MAC is already in ifreq after SIOCGIFHWADDR. */
if (tap_ioctl(pmd, SIOCSIFHWADDR, &ifr, 0, LOCAL_ONLY) < 0) {
TAP_LOG(ERR, "%s: failed to get %s MAC address.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b75e8898e2..1c42fd74b4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -4304,9 +4304,8 @@ txgbe_add_del_ntuple_filter(struct rte_eth_dev *dev,
sizeof(struct txgbe_5tuple_filter), 0);
if (filter == NULL)
return -ENOMEM;
- rte_memcpy(&filter->filter_info,
- &filter_5tuple,
- sizeof(struct txgbe_5tuple_filter_info));
+ memcpy(&filter->filter_info, &filter_5tuple,
+ sizeof(struct txgbe_5tuple_filter_info));
filter->queue = ntuple_filter->queue;
ret = txgbe_add_5tuple_filter(dev, filter);
if (ret < 0) {
@@ -5109,9 +5108,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
if (!node)
return -ENOMEM;
- rte_memcpy(&node->key,
- &key,
- sizeof(struct txgbe_l2_tn_key));
+ memcpy(&node->key, &key, sizeof(struct txgbe_l2_tn_key));
node->pool = l2_tunnel->pool;
ret = txgbe_insert_l2_tn_filter(l2_tn_info, node);
if (ret < 0) {
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index a198b6781b..00366ed873 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -42,7 +42,7 @@
else \
ipv6_addr[i] = 0; \
} \
- rte_memcpy((ipaddr), ipv6_addr, sizeof(ipv6_addr));\
+ memcpy((ipaddr), ipv6_addr, sizeof(ipv6_addr));\
} while (0)
/**
@@ -858,8 +858,8 @@ txgbe_fdir_filter_program(struct rte_eth_dev *dev,
sizeof(struct txgbe_fdir_filter), 0);
if (!node)
return -ENOMEM;
- rte_memcpy(&node->input, &rule->input,
- sizeof(struct txgbe_atr_input));
+ memcpy(&node->input, &rule->input,
+ sizeof(struct txgbe_atr_input));
node->fdirflags = rule->fdirflags;
node->fdirhash = fdirhash;
node->queue = queue;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 7ef52d0b0f..c76fc0eed0 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1834,10 +1834,10 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
if (item->spec) {
rule->b_spec = TRUE;
ipv6_spec = item->spec;
- rte_memcpy(rule->input.src_ip,
- ipv6_spec->hdr.src_addr, 16);
- rte_memcpy(rule->input.dst_ip,
- ipv6_spec->hdr.dst_addr, 16);
+ memcpy(rule->input.src_ip, ipv6_spec->hdr.src_addr,
+ 16);
+ memcpy(rule->input.dst_ip, ipv6_spec->hdr.dst_addr,
+ 16);
}
/**
@@ -2756,9 +2756,9 @@ txgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&ntuple_filter_ptr->filter_info,
- &ntuple_filter,
- sizeof(struct rte_eth_ntuple_filter));
+ memcpy(&ntuple_filter_ptr->filter_info,
+ &ntuple_filter,
+ sizeof(struct rte_eth_ntuple_filter));
TAILQ_INSERT_TAIL(&filter_ntuple_list,
ntuple_filter_ptr, entries);
flow->rule = ntuple_filter_ptr;
@@ -2782,9 +2782,9 @@ txgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(ðertype_filter_ptr->filter_info,
- ðertype_filter,
- sizeof(struct rte_eth_ethertype_filter));
+ memcpy(ðertype_filter_ptr->filter_info,
+ ðertype_filter,
+ sizeof(struct rte_eth_ethertype_filter));
TAILQ_INSERT_TAIL(&filter_ethertype_list,
ethertype_filter_ptr, entries);
flow->rule = ethertype_filter_ptr;
@@ -2806,9 +2806,8 @@ txgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&syn_filter_ptr->filter_info,
- &syn_filter,
- sizeof(struct rte_eth_syn_filter));
+ memcpy(&syn_filter_ptr->filter_info, &syn_filter,
+ sizeof(struct rte_eth_syn_filter));
TAILQ_INSERT_TAIL(&filter_syn_list,
syn_filter_ptr,
entries);
@@ -2827,9 +2826,8 @@ txgbe_flow_create(struct rte_eth_dev *dev,
if (fdir_rule.b_mask) {
if (!fdir_info->mask_added) {
/* It's the first time the mask is set. */
- rte_memcpy(&fdir_info->mask,
- &fdir_rule.mask,
- sizeof(struct txgbe_hw_fdir_mask));
+ memcpy(&fdir_info->mask, &fdir_rule.mask,
+ sizeof(struct txgbe_hw_fdir_mask));
fdir_info->flex_bytes_offset =
fdir_rule.flex_bytes_offset;
@@ -2873,9 +2871,9 @@ txgbe_flow_create(struct rte_eth_dev *dev,
"failed to allocate memory");
goto out;
}
- rte_memcpy(&fdir_rule_ptr->filter_info,
- &fdir_rule,
- sizeof(struct txgbe_fdir_rule));
+ memcpy(&fdir_rule_ptr->filter_info,
+ &fdir_rule,
+ sizeof(struct txgbe_fdir_rule));
TAILQ_INSERT_TAIL(&filter_fdir_list,
fdir_rule_ptr, entries);
flow->rule = fdir_rule_ptr;
@@ -2910,9 +2908,8 @@ txgbe_flow_create(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "failed to allocate memory");
goto out;
}
- rte_memcpy(&l2_tn_filter_ptr->filter_info,
- &l2_tn_filter,
- sizeof(struct txgbe_l2_tunnel_conf));
+ memcpy(&l2_tn_filter_ptr->filter_info, &l2_tn_filter,
+ sizeof(struct txgbe_l2_tunnel_conf));
TAILQ_INSERT_TAIL(&filter_l2_tunnel_list,
l2_tn_filter_ptr, entries);
flow->rule = l2_tn_filter_ptr;
@@ -3038,9 +3035,8 @@ txgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_NTUPLE:
ntuple_filter_ptr = (struct txgbe_ntuple_filter_ele *)
pmd_flow->rule;
- rte_memcpy(&ntuple_filter,
- &ntuple_filter_ptr->filter_info,
- sizeof(struct rte_eth_ntuple_filter));
+ memcpy(&ntuple_filter, &ntuple_filter_ptr->filter_info,
+ sizeof(struct rte_eth_ntuple_filter));
ret = txgbe_add_del_ntuple_filter(dev, &ntuple_filter, FALSE);
if (!ret) {
TAILQ_REMOVE(&filter_ntuple_list,
@@ -3051,9 +3047,8 @@ txgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_ETHERTYPE:
ethertype_filter_ptr = (struct txgbe_ethertype_filter_ele *)
pmd_flow->rule;
- rte_memcpy(ðertype_filter,
- ðertype_filter_ptr->filter_info,
- sizeof(struct rte_eth_ethertype_filter));
+ memcpy(ðertype_filter, ðertype_filter_ptr->filter_info,
+ sizeof(struct rte_eth_ethertype_filter));
ret = txgbe_add_del_ethertype_filter(dev,
ðertype_filter, FALSE);
if (!ret) {
@@ -3065,9 +3060,8 @@ txgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_SYN:
syn_filter_ptr = (struct txgbe_eth_syn_filter_ele *)
pmd_flow->rule;
- rte_memcpy(&syn_filter,
- &syn_filter_ptr->filter_info,
- sizeof(struct rte_eth_syn_filter));
+ memcpy(&syn_filter, &syn_filter_ptr->filter_info,
+ sizeof(struct rte_eth_syn_filter));
ret = txgbe_syn_filter_set(dev, &syn_filter, FALSE);
if (!ret) {
TAILQ_REMOVE(&filter_syn_list,
@@ -3077,9 +3071,8 @@ txgbe_flow_destroy(struct rte_eth_dev *dev,
break;
case RTE_ETH_FILTER_FDIR:
fdir_rule_ptr = (struct txgbe_fdir_rule_ele *)pmd_flow->rule;
- rte_memcpy(&fdir_rule,
- &fdir_rule_ptr->filter_info,
- sizeof(struct txgbe_fdir_rule));
+ memcpy(&fdir_rule, &fdir_rule_ptr->filter_info,
+ sizeof(struct txgbe_fdir_rule));
ret = txgbe_fdir_filter_program(dev, &fdir_rule, TRUE, FALSE);
if (!ret) {
TAILQ_REMOVE(&filter_fdir_list,
@@ -3092,8 +3085,8 @@ txgbe_flow_destroy(struct rte_eth_dev *dev,
case RTE_ETH_FILTER_L2_TUNNEL:
l2_tn_filter_ptr = (struct txgbe_eth_l2_tunnel_conf_ele *)
pmd_flow->rule;
- rte_memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
- sizeof(struct txgbe_l2_tunnel_conf));
+ memcpy(&l2_tn_filter, &l2_tn_filter_ptr->filter_info,
+ sizeof(struct txgbe_l2_tunnel_conf));
ret = txgbe_dev_l2_tunnel_filter_del(dev, &l2_tn_filter);
if (!ret) {
TAILQ_REMOVE(&filter_l2_tunnel_list,
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index f9f8108fb8..000dd5ec6d 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -658,10 +658,10 @@ txgbe_crypto_add_ingress_sa_from_flow(const void *sess,
const struct rte_flow_item_ipv6 *ipv6 = ip_spec;
ic_session->src_ip.type = IPv6;
ic_session->dst_ip.type = IPv6;
- rte_memcpy(ic_session->src_ip.ipv6,
- ipv6->hdr.src_addr, 16);
- rte_memcpy(ic_session->dst_ip.ipv6,
- ipv6->hdr.dst_addr, 16);
+ memcpy(ic_session->src_ip.ipv6, ipv6->hdr.src_addr,
+ 16);
+ memcpy(ic_session->dst_ip.ipv6, ipv6->hdr.dst_addr,
+ 16);
} else {
const struct rte_flow_item_ipv4 *ipv4 = ip_spec;
ic_session->src_ip.type = IPv4;
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 176f79005c..1e8668cbc9 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -17,7 +17,6 @@
#include <rte_eal.h>
#include <rte_ether.h>
#include <ethdev_driver.h>
-#include <rte_memcpy.h>
#include <rte_malloc.h>
#include <rte_random.h>
#include <bus_pci_driver.h>
@@ -435,7 +434,7 @@ txgbe_vf_reset(struct rte_eth_dev *eth_dev, uint16_t vf, uint32_t *msgbuf)
/* reply to reset with ack and vf mac address */
msgbuf[0] = TXGBE_VF_RESET | TXGBE_VT_MSGTYPE_ACK;
- rte_memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
+ memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN);
/*
* Piggyback the multicast filter type so VF can compute the
* correct vectors
@@ -457,7 +456,7 @@ txgbe_vf_set_mac_addr(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *ea = (struct rte_ether_addr *)new_mac;
if (rte_is_valid_assigned_ether_addr(ea)) {
- rte_memcpy(vfinfo[vf].vf_mac_addresses, new_mac, 6);
+ memcpy(vfinfo[vf].vf_mac_addresses, new_mac, 6);
return hw->mac.set_rar(hw, rar_entry, new_mac, vf, true);
}
return -1;
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3171be73d0..7d77b01dfe 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -280,8 +280,8 @@ txgbe_shaper_profile_add(struct rte_eth_dev *dev,
if (!shaper_profile)
return -ENOMEM;
shaper_profile->shaper_profile_id = shaper_profile_id;
- rte_memcpy(&shaper_profile->profile, profile,
- sizeof(struct rte_tm_shaper_params));
+ memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
TAILQ_INSERT_TAIL(&tm_conf->shaper_profile_list,
shaper_profile, node);
@@ -625,8 +625,8 @@ txgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->no = 0;
tm_node->parent = NULL;
tm_node->shaper_profile = shaper_profile;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
tm_conf->root = tm_node;
/* increase the reference counter of the shaper profile */
@@ -706,8 +706,7 @@ txgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
- rte_memcpy(&tm_node->params, params,
- sizeof(struct rte_tm_node_params));
+ memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
if (parent_node_type == TXGBE_TM_NODE_TYPE_PORT) {
tm_node->no = parent_node->reference_count;
TAILQ_INSERT_TAIL(&tm_conf->tc_list,
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 21bbb008e0..a88a3a18a0 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -12,7 +12,6 @@
#include <ethdev_driver.h>
#include <ethdev_vdev.h>
#include <rte_malloc.h>
-#include <rte_memcpy.h>
#include <rte_net.h>
#include <bus_vdev_driver.h>
#include <rte_kvargs.h>
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 517585740e..5c727cc4c0 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -10,7 +10,6 @@
#include <unistd.h>
#include <ethdev_driver.h>
-#include <rte_memcpy.h>
#include <rte_string_fns.h>
#include <rte_memzone.h>
#include <rte_malloc.h>
--
2.43.0