From: Zhiyong Yang
To: dev@dpdk.org
Cc: thomas@monjalon.net, ferruh.yigit@intel.com, hemant.agrawal@nxp.com, david.hunt@intel.com, Zhiyong Yang
Date: Sat, 9 Sep 2017 22:47:24 +0800
Message-Id: <20170909144727.46388-2-zhiyong.yang@intel.com>
X-Mailer: git-send-email 2.13.3
In-Reply-To: <20170909144727.46388-1-zhiyong.yang@intel.com>
References: <20170904055734.21354-1-zhiyong.yang@intel.com> <20170909144727.46388-1-zhiyong.yang@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/4] ethdev: increase port_id range

Extend the port_id definition from uint8_t to uint16_t in lib and driver
data structures, specifically rte_eth_dev_data. Update the APIs, drivers
and applications that use port_id at the same time. Also fix some
checkpatch issues in the touched code and remove some unnecessary cast
operations.

Signed-off-by: Zhiyong Yang
---
 app/pdump/main.c | 2 +-
 app/test-pmd/cmdline.c | 6 +-
 app/test-pmd/config.c | 4 +-
 app/test-pmd/ieee1588fwd.c | 26 ++-
 app/test-pmd/parameters.c | 2 +-
 app/test-pmd/rxonly.c | 2 +-
 app/test-pmd/testpmd.c | 19 +-
 app/test-pmd/testpmd.h | 4 +-
 drivers/net/af_packet/rte_eth_af_packet.c | 2 +-
 drivers/net/ark/ark_ethdev.c | 2 +-
 drivers/net/avp/avp_ethdev.c | 2 +-
 drivers/net/bnx2x/bnx2x.c | 4 +-
 drivers/net/bnx2x/bnx2x_rxtx.h | 4 +-
 drivers/net/bnx2x/elink.h | 11 +-
 drivers/net/bnxt/bnxt.h | 2 +-
 drivers/net/bnxt/bnxt_ethdev.c | 8 +-
 drivers/net/bnxt/bnxt_rxq.h | 2 +-
 drivers/net/bnxt/bnxt_txq.h | 2 +-
 drivers/net/bnxt/rte_pmd_bnxt.c | 32 +--
 drivers/net/bnxt/rte_pmd_bnxt.h | 36 ++--
 drivers/net/bonding/rte_eth_bond.h | 42 ++--
 drivers/net/bonding/rte_eth_bond_8023ad.c | 81 ++++----
 drivers/net/bonding/rte_eth_bond_8023ad.h | 44 ++--
 drivers/net/bonding/rte_eth_bond_8023ad_private.h | 12 +-
 drivers/net/bonding/rte_eth_bond_alb.c | 6 +-
 drivers/net/bonding/rte_eth_bond_alb.h | 6 +-
 drivers/net/bonding/rte_eth_bond_api.c | 64 +++---
 drivers/net/bonding/rte_eth_bond_args.c | 2 +-
 drivers/net/bonding/rte_eth_bond_pmd.c | 64 +++---
 drivers/net/bonding/rte_eth_bond_private.h | 49 ++---
 drivers/net/e1000/em_ethdev.c | 2 +-
 drivers/net/e1000/em_rxtx.c | 4 +-
 drivers/net/e1000/igb_rxtx.c | 4 +-
 drivers/net/failsafe/failsafe_ether.c | 4 +-
 drivers/net/failsafe/failsafe_private.h | 4 +-
 drivers/net/fm10k/fm10k.h | 6 +-
 drivers/net/i40e/i40e_ethdev.c | 5 +-
 drivers/net/i40e/i40e_rxtx.h | 4 +-
 drivers/net/i40e/rte_pmd_i40e.c | 50 ++---
 drivers/net/i40e/rte_pmd_i40e.h | 48 ++---
 drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
 drivers/net/ixgbe/ixgbe_rxtx.h | 4 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c | 60 +++---
 drivers/net/ixgbe/rte_pmd_ixgbe.h | 70 ++++---
 drivers/net/mlx4/mlx4.h | 2 +-
 drivers/net/mlx5/mlx5.h | 2 +-
 drivers/net/mlx5/mlx5_ethdev.c | 2 +-
 drivers/net/mlx5/mlx5_rxtx.h | 2 +-
 drivers/net/nfp/nfp_net.c
| 26 +-- drivers/net/nfp/nfp_net_pmd.h | 2 +- drivers/net/null/rte_eth_null.c | 2 +- drivers/net/pcap/rte_eth_pcap.c | 2 +- drivers/net/qede/qede_if.h | 2 +- drivers/net/ring/rte_eth_ring.c | 2 +- drivers/net/szedata2/rte_eth_szedata2.c | 2 +- drivers/net/thunderx/nicvf_struct.h | 2 +- drivers/net/vhost/rte_eth_vhost.c | 8 +- drivers/net/vhost/rte_eth_vhost.h | 6 +- drivers/net/virtio/virtio_pci.h | 2 +- drivers/net/virtio/virtio_rxtx.h | 6 +- drivers/net/virtio/virtio_user/virtio_user_dev.h | 2 +- drivers/net/vmxnet3/vmxnet3_ring.h | 8 +- drivers/net/xenvirt/virtqueue.h | 2 +- lib/librte_bitratestats/rte_bitrate.c | 2 +- lib/librte_bitratestats/rte_bitrate.h | 2 +- lib/librte_ether/rte_ethdev.c | 239 +++++++++++----------- lib/librte_ether/rte_ethdev.h | 238 ++++++++++----------- lib/librte_ether/rte_tm.c | 62 +++--- lib/librte_ether/rte_tm.h | 60 +++--- lib/librte_ether/rte_tm_driver.h | 2 +- lib/librte_kni/rte_kni.h | 6 +- lib/librte_latencystats/rte_latencystats.c | 12 +- lib/librte_pdump/rte_pdump.c | 16 +- lib/librte_pdump/rte_pdump.h | 4 +- lib/librte_port/rte_port_ethdev.c | 39 ++-- lib/librte_port/rte_port_ethdev.h | 6 +- 76 files changed, 793 insertions(+), 789 deletions(-) diff --git a/app/pdump/main.c b/app/pdump/main.c index 3b13753d9..090a50cfc 100644 --- a/app/pdump/main.c +++ b/app/pdump/main.c @@ -623,7 +623,7 @@ static void create_mp_ring_vdev(void) { int i; - uint8_t portid; + uint16_t portid; struct pdump_tuples *pt = NULL; struct rte_mempool *mbuf_pool = NULL; char vdev_args[SIZE]; diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index cd8c35850..168897e99 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -4584,7 +4584,7 @@ struct cmd_show_bonding_config_result { cmdline_fixed_string_t show; cmdline_fixed_string_t bonding; cmdline_fixed_string_t config; - uint8_t port_id; + uint16_t port_id; }; static void cmd_show_bonding_config_parsed(void *parsed_result, @@ -4593,7 +4593,7 @@ static void cmd_show_bonding_config_parsed(void *parsed_result, { struct cmd_show_bonding_config_result *res = parsed_result; int bonding_mode, agg_mode; - uint8_t slaves[RTE_MAX_ETHPORTS]; + uint16_t slaves[RTE_MAX_ETHPORTS]; int num_slaves, num_active_slaves; int primary_id; int i; @@ -11496,7 +11496,7 @@ struct cmd_vf_vlan_stripq_result { cmdline_fixed_string_t vf; cmdline_fixed_string_t vlan; cmdline_fixed_string_t stripq; - uint8_t port_id; + uint16_t port_id; uint16_t vf_id; cmdline_fixed_string_t on_off; }; diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 3ae3e1cd8..155136dd5 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -358,7 +358,7 @@ rx_queue_infos_display(portid_t port_id, uint16_t queue_id) rc = rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo); if (rc != 0) { - printf("Failed to retrieve information for port: %hhu, " + printf("Failed to retrieve information for port: %u, " "RX queue: %hu\nerror desc: %s(%d)\n", port_id, queue_id, strerror(-rc), rc); return; @@ -391,7 +391,7 @@ tx_queue_infos_display(portid_t port_id, uint16_t queue_id) rc = rte_eth_tx_queue_info_get(port_id, queue_id, &qinfo); if (rc != 0) { - printf("Failed to retrieve information for port: %hhu, " + printf("Failed to retrieve information for port: %u, " "TX queue: %hu\nerror desc: %s(%d)\n", port_id, queue_id, strerror(-rc), rc); return; diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c index 51170ee3e..66a3bf11e 100644 --- a/app/test-pmd/ieee1588fwd.c +++ b/app/test-pmd/ieee1588fwd.c @@ -86,12 +86,11 @@ 
port_ieee1588_rx_timestamp_check(portid_t pi, uint32_t index) struct timespec timestamp = {0, 0}; if (rte_eth_timesync_read_rx_timestamp(pi, &timestamp, index) < 0) { - printf("Port %u RX timestamp registers not valid\n", - (unsigned) pi); + printf("Port %u RX timestamp registers not valid\n", pi); return; } printf("Port %u RX timestamp value %lu s %lu ns\n", - (unsigned) pi, timestamp.tv_sec, timestamp.tv_nsec); + pi, timestamp.tv_sec, timestamp.tv_nsec); } #define MAX_TX_TMST_WAIT_MICROSECS 1000 /**< 1 milli-second */ @@ -110,12 +109,12 @@ port_ieee1588_tx_timestamp_check(portid_t pi) if (wait_us >= MAX_TX_TMST_WAIT_MICROSECS) { printf("Port %u TX timestamp registers not valid after " "%u micro-seconds\n", - (unsigned) pi, (unsigned) MAX_TX_TMST_WAIT_MICROSECS); + pi, MAX_TX_TMST_WAIT_MICROSECS); return; } printf("Port %u TX timestamp value %lu s %lu ns validated after " "%u micro-second%s\n", - (unsigned) pi, timestamp.tv_sec, timestamp.tv_nsec, wait_us, + pi, timestamp.tv_sec, timestamp.tv_nsec, wait_us, (wait_us == 1) ? "" : "s"); } @@ -148,11 +147,11 @@ ieee1588_packet_fwd(struct fwd_stream *fs) if (eth_type == ETHER_TYPE_1588) { printf("Port %u Received PTP packet not filtered" " by hardware\n", - (unsigned) fs->rx_port); + fs->rx_port); } else { printf("Port %u Received non PTP packet type=0x%4x " "len=%u\n", - (unsigned) fs->rx_port, eth_type, + fs->rx_port, eth_type, (unsigned) mb->pkt_len); } rte_pktmbuf_free(mb); @@ -161,7 +160,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs) if (eth_type != ETHER_TYPE_1588) { printf("Port %u Received NON PTP packet incorrectly" " detected by hardware\n", - (unsigned) fs->rx_port); + fs->rx_port); rte_pktmbuf_free(mb); return; } @@ -175,19 +174,19 @@ ieee1588_packet_fwd(struct fwd_stream *fs) if (ptp_hdr->version != 0x02) { printf("Port %u Received PTP V2 Ethernet frame with wrong PTP" " protocol version 0x%x (should be 0x02)\n", - (unsigned) fs->rx_port, ptp_hdr->version); + fs->rx_port, ptp_hdr->version); rte_pktmbuf_free(mb); return; } if (ptp_hdr->msg_id != PTP_SYNC_MESSAGE) { printf("Port %u Received PTP V2 Ethernet frame with unexpected" " message ID 0x%x (expected 0x0 - PTP_SYNC_MESSAGE)\n", - (unsigned) fs->rx_port, ptp_hdr->msg_id); + fs->rx_port, ptp_hdr->msg_id); rte_pktmbuf_free(mb); return; } printf("Port %u IEEE1588 PTP V2 SYNC Message filtered by hardware\n", - (unsigned) fs->rx_port); + fs->rx_port); /* * Check that the received PTP packet has been timestamped by the @@ -196,7 +195,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs) if (!
(mb->ol_flags & PKT_RX_IEEE1588_TMST)) { printf("Port %u Received PTP packet not timestamped" " by hardware\n", - (unsigned) fs->rx_port); + fs->rx_port); rte_pktmbuf_free(mb); return; } @@ -216,8 +215,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs) mb->ol_flags |= PKT_TX_IEEE1588_TMST; fs->tx_packets += 1; if (rte_eth_tx_burst(fs->rx_port, fs->tx_queue, &mb, 1) == 0) { - printf("Port %u sent PTP packet dropped\n", - (unsigned) fs->rx_port); + printf("Port %u sent PTP packet dropped\n", fs->rx_port); fs->fwd_dropped += 1; rte_pktmbuf_free(mb); return; diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index 2f7f70fd6..31287d71d 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -734,7 +734,7 @@ launch_args_parse(int argc, char** argv) if (!strcmp(lgopts[opt_idx].name, "nb-ports")) { n = atoi(optarg); if (n > 0 && n <= nb_ports) - nb_fwd_ports = (uint8_t) n; + nb_fwd_ports = n; else rte_exit(EXIT_FAILURE, "Invalid port %d\n", n); diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c index 5ef021905..137246e88 100644 --- a/app/test-pmd/rxonly.c +++ b/app/test-pmd/rxonly.c @@ -122,7 +122,7 @@ pkt_burst_receive(struct fwd_stream *fs) */ if (verbose_level > 0) printf("port %u/queue %u: received %u packets\n", - (unsigned) fs->rx_port, + fs->rx_port, (unsigned) fs->rx_queue, (unsigned) nb_rx); for (i = 0; i < nb_rx; i++) { diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 7d4013941..368d2313d 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -390,7 +390,7 @@ struct gro_status gro_ports[RTE_MAX_ETHPORTS]; /* Forward function declarations */ static void map_port_queue_stats_mapping_registers(uint8_t pi, struct rte_port *port); static void check_all_ports_link_status(uint32_t port_mask); -static int eth_event_callback(uint8_t port_id, +static int eth_event_callback(uint16_t port_id, enum rte_eth_event_type type, void *param, void *ret_param); @@ -673,7 +673,6 @@ init_config(void) fwd_config_setup(); } - void reconfig(portid_t new_port_id, unsigned socket_id) { @@ -1775,7 +1774,8 @@ check_all_ports_link_status(uint32_t port_mask) { #define CHECK_INTERVAL 100 /* 100ms */ #define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */ - uint8_t portid, count, all_ports_up, print_flag = 0; + uint16_t portid; + uint8_t count, all_ports_up, print_flag = 0; struct rte_eth_link link; printf("Checking link statuses...\n"); @@ -1790,14 +1790,13 @@ check_all_ports_link_status(uint32_t port_mask) /* print link status if flag set */ if (print_flag == 1) { if (link.link_status) - printf("Port %d Link Up - speed %u " - "Mbps - %s\n", (uint8_t)portid, - (unsigned)link.link_speed, + printf( + "Port%d Link Up. speed %u Mbps- %s\n", + portid, link.link_speed, (link.link_duplex == ETH_LINK_FULL_DUPLEX) ? 
("full-duplex") : ("half-duplex\n")); else - printf("Port %d Link Down\n", - (uint8_t)portid); + printf("Port %d Link Down\n", portid); continue; } /* clear all_ports_up flag if any link down */ @@ -1844,7 +1843,7 @@ rmv_event_callback(void *arg) /* This function is used by the interrupt thread */ static int -eth_event_callback(uint8_t port_id, enum rte_eth_event_type type, void *param, +eth_event_callback(uint16_t port_id, enum rte_eth_event_type type, void *param, void *ret_param) { static const char * const event_desc[] = { @@ -2287,7 +2286,7 @@ int main(int argc, char** argv) { int diag; - uint8_t port_id; + uint16_t port_id; signal(SIGINT, signal_handler); signal(SIGTERM, signal_handler); diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index c9d7739b8..c045afd64 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -78,7 +78,7 @@ #define UMA_NO_CONFIG 0xFF typedef uint8_t lcoreid_t; -typedef uint8_t portid_t; +typedef uint16_t portid_t; typedef uint16_t queueid_t; typedef uint16_t streamid_t; @@ -283,7 +283,7 @@ enum dcb_mode_enable #define MAX_RX_QUEUE_STATS_MAPPINGS 4096 /* MAX_PORT of 32 @ 128 rx_queues/port */ struct queue_stats_mappings { - uint8_t port_id; + uint16_t port_id; uint16_t queue_id; uint8_t stats_counter_id; } __rte_cache_aligned; diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c index 9a47852ca..483b0c107 100644 --- a/drivers/net/af_packet/rte_eth_af_packet.c +++ b/drivers/net/af_packet/rte_eth_af_packet.c @@ -75,7 +75,7 @@ struct pkt_rx_queue { unsigned int framenum; struct rte_mempool *mb_pool; - uint8_t in_port; + uint16_t in_port; volatile unsigned long rx_pkts; volatile unsigned long err_pkts; diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c index 6db362b04..893284733 100644 --- a/drivers/net/ark/ark_ethdev.c +++ b/drivers/net/ark/ark_ethdev.c @@ -641,7 +641,7 @@ eth_ark_dev_stop(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_tx_queues; i++) { status = eth_ark_tx_queue_stop(dev, i); if (status != 0) { - uint8_t port = dev->data->port_id; + uint16_t port = dev->data->port_id; PMD_DRV_LOG(ERR, "tx_queue stop anomaly" " port %u, queue %u\n", diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c index c746a0e2c..b5cc955f2 100644 --- a/drivers/net/avp/avp_ethdev.c +++ b/drivers/net/avp/avp_ethdev.c @@ -190,7 +190,7 @@ struct avp_dev { struct rte_eth_dev_data *dev_data; /**< Back pointer to ethernet device data */ volatile uint32_t flags; /**< Device operational flags */ - uint8_t port_id; /**< Ethernet port identifier */ + uint16_t port_id; /**< Ethernet port identifier */ struct rte_mempool *pool; /**< pkt mbuf mempool */ unsigned int guest_mbuf_size; /**< local pool mbuf size */ unsigned int host_mbuf_size; /**< host mbuf size */ diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c index 06733d153..0765f62d7 100644 --- a/drivers/net/bnx2x/bnx2x.c +++ b/drivers/net/bnx2x/bnx2x.c @@ -703,7 +703,7 @@ bnx2x_gpio_mult_write(struct bnx2x_softc *sc, uint8_t pins, uint32_t mode) static int bnx2x_gpio_int_write(struct bnx2x_softc *sc, int gpio_num, uint32_t mode, - uint8_t port) + uint8_t port) { /* The GPIO should be swapped if swap register is set and active */ int gpio_port = ((REG_RD(sc, NIG_REG_PORT_SWAP) && @@ -749,7 +749,7 @@ bnx2x_gpio_int_write(struct bnx2x_softc *sc, int gpio_num, uint32_t mode, } uint32_t -elink_cb_gpio_read(struct bnx2x_softc * sc, uint16_t gpio_num, uint8_t port) +elink_cb_gpio_read(struct bnx2x_softc 
*sc, uint16_t gpio_num, uint8_t port) { return bnx2x_gpio_read(sc, gpio_num, port); } diff --git a/drivers/net/bnx2x/bnx2x_rxtx.h b/drivers/net/bnx2x/bnx2x_rxtx.h index 2e38ec26a..48d540476 100644 --- a/drivers/net/bnx2x/bnx2x_rxtx.h +++ b/drivers/net/bnx2x/bnx2x_rxtx.h @@ -41,7 +41,7 @@ struct bnx2x_rx_queue { uint16_t rx_cq_head; /**< Index of current rcq bd. */ uint16_t rx_cq_tail; /**< Index of last rcq bd. */ uint16_t queue_id; /**< RX queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ struct bnx2x_softc *sc; /**< Ptr to dev_private data. */ }; @@ -62,7 +62,7 @@ struct bnx2x_tx_queue { uint16_t nb_tx_avail; /**< Number of TX descriptors available. */ uint16_t nb_tx_pages; /**< number of TX pages */ uint16_t queue_id; /**< TX queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ struct bnx2x_softc *sc; /**< Ptr to dev_private data */ }; diff --git a/drivers/net/bnx2x/elink.h b/drivers/net/bnx2x/elink.h index 9401b7cd5..38f504426 100644 --- a/drivers/net/bnx2x/elink.h +++ b/drivers/net/bnx2x/elink.h @@ -34,16 +34,17 @@ extern void elink_cb_reg_write(struct bnx2x_softc *sc, uint32_t reg_addr, uint32 /* mode - 0( LOW ) /1(HIGH)*/ extern uint8_t elink_cb_gpio_write(struct bnx2x_softc *sc, - uint16_t gpio_num, - uint8_t mode, uint8_t port); + uint16_t gpio_num, + uint8_t mode, uint8_t port); extern uint8_t elink_cb_gpio_mult_write(struct bnx2x_softc *sc, uint8_t pins, uint8_t mode); extern uint32_t elink_cb_gpio_read(struct bnx2x_softc *sc, uint16_t gpio_num, uint8_t port); + extern uint8_t elink_cb_gpio_int_write(struct bnx2x_softc *sc, - uint16_t gpio_num, - uint8_t mode, uint8_t port); + uint16_t gpio_num, + uint8_t mode, uint8_t port); extern uint32_t elink_cb_fw_command(struct bnx2x_softc *sc, uint32_t command, uint32_t param); @@ -500,7 +501,7 @@ elink_status_t elink_phy_probe(struct elink_params *params); /* Checks if fan failure detection is required on one of the phys on board */ uint8_t elink_fan_failure_det_req(struct bnx2x_softc *sc, uint32_t shmem_base, - uint32_t shmem2_base, uint8_t port); + uint32_t shmem2_base, uint8_t port); /* Open / close the gate between the NIG and the BRB */ void elink_set_rx_filter(struct elink_params *params, uint8_t en); diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 405d94deb..26a9018b5 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -126,7 +126,7 @@ struct bnxt_pf_info { #define BNXT_FIRST_VF_FID 128 #define BNXT_PF_RINGS_USED(bp) bnxt_get_num_queues(bp) #define BNXT_PF_RINGS_AVAIL(bp) (bp->pf.max_cp_rings - BNXT_PF_RINGS_USED(bp)) - uint8_t port_id; + uint16_t port_id; uint16_t first_vf_id; uint16_t active_vfs; uint16_t max_vfs; diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index c9d11228b..d8face1bc 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -488,14 +488,14 @@ static void bnxt_print_link_info(struct rte_eth_dev *eth_dev) struct rte_eth_link *link = &eth_dev->data->dev_link; if (link->link_status) - RTE_LOG(INFO, PMD, "Port %d Link Up - speed %u Mbps - %s\n", - (uint8_t)(eth_dev->data->port_id), + RTE_LOG(INFO, PMD, "Port %u Link Up - speed %u Mbps - %s\n", + eth_dev->data->port_id, (uint32_t)link->link_speed, (link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
("full-duplex") : ("half-duplex\n")); else - RTE_LOG(INFO, PMD, "Port %d Link Down\n", - (uint8_t)(eth_dev->data->port_id)); + RTE_LOG(INFO, PMD, "Port %u Link Down\n", + eth_dev->data->port_id); } static int bnxt_dev_lsc_intr_setup(struct rte_eth_dev *eth_dev) diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h index 01aaa007f..cea0785d1 100644 --- a/drivers/net/bnxt/bnxt_rxq.h +++ b/drivers/net/bnxt/bnxt_rxq.h @@ -48,7 +48,7 @@ struct bnxt_rx_queue { uint16_t rx_free_thresh; /* max free RX desc to hold */ uint16_t queue_id; /* RX queue index */ uint16_t reg_idx; /* RX queue register index */ - uint8_t port_id; /* Device port identifier */ + uint16_t port_id; /* Device port identifier */ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */ struct bnxt *bp; diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h index 16f3a0bdd..f753c10f2 100644 --- a/drivers/net/bnxt/bnxt_txq.h +++ b/drivers/net/bnxt/bnxt_txq.h @@ -46,7 +46,7 @@ struct bnxt_tx_queue { uint16_t tx_next_rs; /* next desc to set RS bit */ uint16_t queue_id; /* TX queue index */ uint16_t reg_idx; /* TX queue register index */ - uint8_t port_id; /* Device port identifier */ + uint16_t port_id; /* Device port identifier */ uint8_t pthresh; /* Prefetch threshold register */ uint8_t hthresh; /* Host threshold register */ uint8_t wthresh; /* Write-back threshold reg */ diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c index c343d9033..63fc27911 100644 --- a/drivers/net/bnxt/rte_pmd_bnxt.c +++ b/drivers/net/bnxt/rte_pmd_bnxt.c @@ -67,7 +67,7 @@ int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg) true : false; } -int rte_pmd_bnxt_set_tx_loopback(uint8_t port, uint8_t on) +int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on) { struct rte_eth_dev *eth_dev; struct bnxt *bp; @@ -108,7 +108,7 @@ rte_pmd_bnxt_set_all_queues_drop_en_cb(struct bnxt_vnic_info *vnic, void *onptr) vnic->bd_stall = !(*on); } -int rte_pmd_bnxt_set_all_queues_drop_en(uint8_t port, uint8_t on) +int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on) { struct rte_eth_dev *eth_dev; struct bnxt *bp; @@ -159,7 +159,7 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint8_t port, uint8_t on) return rc; } -int rte_pmd_bnxt_set_vf_mac_addr(uint8_t port, uint16_t vf, +int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf, struct ether_addr *mac_addr) { struct rte_eth_dev *dev; @@ -191,7 +191,7 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint8_t port, uint16_t vf, return rc; } -int rte_pmd_bnxt_set_vf_rate_limit(uint8_t port, uint16_t vf, +int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf, uint16_t tx_rate, uint64_t q_msk) { struct rte_eth_dev *eth_dev; @@ -241,7 +241,7 @@ int rte_pmd_bnxt_set_vf_rate_limit(uint8_t port, uint16_t vf, return rc; } -int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on) +int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on) { struct rte_eth_dev_info dev_info; struct rte_eth_dev *dev; @@ -294,7 +294,7 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on) return rc; } -int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf, uint8_t on) +int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on) { struct rte_eth_dev_info dev_info; struct rte_eth_dev *dev; @@ -350,7 +350,7 @@ rte_pmd_bnxt_set_vf_vlan_stripq_cb(struct bnxt_vnic_info *vnic, void *onptr) } int -rte_pmd_bnxt_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on) 
+rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; @@ -385,7 +385,7 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on) return rc; } -int rte_pmd_bnxt_set_vf_rxmode(uint8_t port, uint16_t vf, +int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf, uint16_t rx_mask, uint8_t on) { struct rte_eth_dev *dev; @@ -477,7 +477,7 @@ static int bnxt_set_vf_table(struct bnxt *bp, uint16_t vf) return rc; } -int rte_pmd_bnxt_set_vf_vlan_filter(uint8_t port, uint16_t vlan, +int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, uint64_t vf_mask, uint8_t vlan_on) { struct bnxt_vlan_table_entry *ve; @@ -570,7 +570,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint8_t port, uint16_t vlan, return rc; } -int rte_pmd_bnxt_get_vf_stats(uint8_t port, +int rte_pmd_bnxt_get_vf_stats(uint16_t port, uint16_t vf_id, struct rte_eth_stats *stats) { @@ -598,7 +598,7 @@ int rte_pmd_bnxt_get_vf_stats(uint8_t port, return bnxt_hwrm_func_qstats(bp, bp->pf.first_vf_id + vf_id, stats); } -int rte_pmd_bnxt_reset_vf_stats(uint8_t port, +int rte_pmd_bnxt_reset_vf_stats(uint16_t port, uint16_t vf_id) { struct rte_eth_dev *dev; @@ -625,7 +625,7 @@ int rte_pmd_bnxt_reset_vf_stats(uint8_t port, return bnxt_hwrm_func_clr_stats(bp, bp->pf.first_vf_id + vf_id); } -int rte_pmd_bnxt_get_vf_rx_status(uint8_t port, uint16_t vf_id) +int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; @@ -651,7 +651,7 @@ int rte_pmd_bnxt_get_vf_rx_status(uint8_t port, uint16_t vf_id) return bnxt_vf_vnic_count(bp, vf_id); } -int rte_pmd_bnxt_get_vf_tx_drop_count(uint8_t port, uint16_t vf_id, +int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id, uint64_t *count) { struct rte_eth_dev *dev; @@ -679,7 +679,7 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint8_t port, uint16_t vf_id, count); } -int rte_pmd_bnxt_mac_addr_add(uint8_t port, struct ether_addr *addr, +int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *addr, uint32_t vf_id) { struct rte_eth_dev *dev; @@ -756,7 +756,7 @@ int rte_pmd_bnxt_mac_addr_add(uint8_t port, struct ether_addr *addr, } int -rte_pmd_bnxt_set_vf_vlan_insert(uint8_t port, uint16_t vf, +rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf, uint16_t vlan_id) { struct rte_eth_dev *dev; @@ -793,7 +793,7 @@ rte_pmd_bnxt_set_vf_vlan_insert(uint8_t port, uint16_t vf, return rc; } -int rte_pmd_bnxt_set_vf_persist_stats(uint8_t port, uint16_t vf, uint8_t on) +int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port, uint16_t vf, uint8_t on) { struct rte_eth_dev_info dev_info; struct rte_eth_dev *dev; diff --git a/drivers/net/bnxt/rte_pmd_bnxt.h b/drivers/net/bnxt/rte_pmd_bnxt.h index c4c4770e3..548e5b3e5 100644 --- a/drivers/net/bnxt/rte_pmd_bnxt.h +++ b/drivers/net/bnxt/rte_pmd_bnxt.h @@ -78,7 +78,7 @@ struct rte_pmd_bnxt_mb_event_param { * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); +int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on); /** * Set the VF MAC address. @@ -94,7 +94,7 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* or *mac_addr* is invalid. 
*/ -int rte_pmd_bnxt_set_vf_mac_addr(uint8_t port, uint16_t vf, +int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf, struct ether_addr *mac_addr); /** @@ -115,7 +115,7 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint8_t port, uint16_t vf, * - (-EINVAL) if bad parameter. */ int -rte_pmd_bnxt_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on); +rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on); /** * Enable/Disable vf vlan insert @@ -134,8 +134,8 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on); * - (-EINVAL) if bad parameter. */ int -rte_pmd_bnxt_set_vf_vlan_insert(uint8_t port, uint16_t vf, - uint16_t vlan_id); +rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf, + uint16_t vlan_id); /** * Enable/Disable hardware VF VLAN filtering by an Ethernet device of @@ -156,7 +156,7 @@ rte_pmd_bnxt_set_vf_vlan_insert(uint8_t port, uint16_t vf, * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_vlan_filter(uint8_t port, uint16_t vlan, +int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, uint64_t vf_mask, uint8_t vlan_on); /** @@ -173,7 +173,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint8_t port, uint16_t vlan, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_tx_loopback(uint8_t port, uint8_t on); +int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on); /** * set all queues drop enable bit @@ -189,7 +189,7 @@ int rte_pmd_bnxt_set_tx_loopback(uint8_t port, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_all_queues_drop_en(uint8_t port, uint8_t on); +int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on); /** * Set the VF rate limit. @@ -207,7 +207,7 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint8_t port, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* or *mac_addr* is invalid. */ -int rte_pmd_bnxt_set_vf_rate_limit(uint8_t port, uint16_t vf, +int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf, uint16_t tx_rate, uint64_t q_msk); /** @@ -226,7 +226,7 @@ int rte_pmd_bnxt_set_vf_rate_limit(uint8_t port, uint16_t vf, * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_get_vf_stats(uint8_t port, +int rte_pmd_bnxt_get_vf_stats(uint16_t port, uint16_t vf_id, struct rte_eth_stats *stats); @@ -242,7 +242,7 @@ int rte_pmd_bnxt_get_vf_stats(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_reset_vf_stats(uint8_t port, +int rte_pmd_bnxt_reset_vf_stats(uint16_t port, uint16_t vf_id); /** @@ -261,7 +261,7 @@ int rte_pmd_bnxt_reset_vf_stats(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); +int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on); /** * Set RX L2 Filtering mode of a VF of an Ethernet device. @@ -280,7 +280,7 @@ int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. 
*/ -int rte_pmd_bnxt_set_vf_rxmode(uint8_t port, uint16_t vf, +int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf, uint16_t rx_mask, uint8_t on); /** @@ -297,7 +297,7 @@ int rte_pmd_bnxt_set_vf_rxmode(uint8_t port, uint16_t vf, * - (-ENOMEM) on an allocation failure * - (-1) firmware interface error */ -int rte_pmd_bnxt_get_vf_rx_status(uint8_t port, uint16_t vf_id); +int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id); /** * Queries the TX drop counter for the function @@ -313,7 +313,7 @@ int rte_pmd_bnxt_get_vf_rx_status(uint8_t port, uint16_t vf_id); * - (-EINVAL) invalid vf_id specified. * - (-ENOTSUP) Ethernet device is not a PF */ -int rte_pmd_bnxt_get_vf_tx_drop_count(uint8_t port, uint16_t vf_id, +int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id, uint64_t *count); /** @@ -331,8 +331,8 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint8_t port, uint16_t vf_id, * - (-ENOTSUP) Ethernet device is not a PF * - (-ENOMEM) on an allocation failure */ -int rte_pmd_bnxt_mac_addr_add(uint8_t port, struct ether_addr *mac_addr, - uint32_t vf_id); +int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *mac_addr, + uint32_t vf_id); /** * Enable/Disable VF statistics retention @@ -350,5 +350,5 @@ int rte_pmd_bnxt_mac_addr_add(uint8_t port, struct ether_addr *mac_addr, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_persist_stats(uint8_t port, uint16_t vf, uint8_t on); +int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port, uint16_t vf, uint8_t on); #endif /* _PMD_BNXT_H_ */ diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h index 8efbf0713..36b4e0643 100644 --- a/drivers/net/bonding/rte_eth_bond.h +++ b/drivers/net/bonding/rte_eth_bond.h @@ -151,7 +151,7 @@ rte_eth_bond_free(const char *name); * 0 on success, negative value otherwise */ int -rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id); +rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id); /** * Remove a slave rte_eth_dev device from the bonded device @@ -163,7 +163,7 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id); * 0 on success, negative value otherwise */ int -rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id); +rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id); /** * Set link bonding mode of bonded device @@ -175,7 +175,7 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id); * 0 on success, negative value otherwise */ int -rte_eth_bond_mode_set(uint8_t bonded_port_id, uint8_t mode); +rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode); /** * Get link bonding mode of bonded device @@ -186,7 +186,7 @@ rte_eth_bond_mode_set(uint8_t bonded_port_id, uint8_t mode); * link bonding mode on success, negative value otherwise */ int -rte_eth_bond_mode_get(uint8_t bonded_port_id); +rte_eth_bond_mode_get(uint16_t bonded_port_id); /** * Set slave rte_eth_dev as primary slave of bonded device @@ -198,7 +198,7 @@ rte_eth_bond_mode_get(uint8_t bonded_port_id); * 0 on success, negative value otherwise */ int -rte_eth_bond_primary_set(uint8_t bonded_port_id, uint8_t slave_port_id); +rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id); /** * Get primary slave of bonded device @@ -209,7 +209,7 @@ rte_eth_bond_primary_set(uint8_t bonded_port_id, uint8_t slave_port_id); * Port Id of primary slave on success, -1 on failure */ int -rte_eth_bond_primary_get(uint8_t bonded_port_id); 
+rte_eth_bond_primary_get(uint16_t bonded_port_id); /** * Populate an array with list of the slaves port id's of the bonded device @@ -223,7 +223,8 @@ rte_eth_bond_primary_get(uint8_t bonded_port_id); * negative value otherwise */ int -rte_eth_bond_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], uint8_t len); +rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[], + uint16_t len); /** * Populate an array with list of the active slaves port id's of the bonded @@ -238,8 +239,8 @@ rte_eth_bond_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], uint8_t len); * negative value otherwise */ int -rte_eth_bond_active_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], - uint8_t len); +rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[], + uint16_t len); /** * Set explicit MAC address to use on bonded device and it's slaves. @@ -252,7 +253,7 @@ rte_eth_bond_active_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], * 0 on success, negative value otherwise */ int -rte_eth_bond_mac_address_set(uint8_t bonded_port_id, +rte_eth_bond_mac_address_set(uint16_t bonded_port_id, struct ether_addr *mac_addr); /** @@ -265,7 +266,7 @@ rte_eth_bond_mac_address_set(uint8_t bonded_port_id, * 0 on success, negative value otherwise */ int -rte_eth_bond_mac_address_reset(uint8_t bonded_port_id); +rte_eth_bond_mac_address_reset(uint16_t bonded_port_id); /** * Set the transmit policy for bonded device to use when it is operating in @@ -279,7 +280,7 @@ rte_eth_bond_mac_address_reset(uint8_t bonded_port_id); * 0 on success, negative value otherwise. */ int -rte_eth_bond_xmit_policy_set(uint8_t bonded_port_id, uint8_t policy); +rte_eth_bond_xmit_policy_set(uint16_t bonded_port_id, uint8_t policy); /** * Get the transmit policy set on bonded device for balance mode operation @@ -290,7 +291,7 @@ rte_eth_bond_xmit_policy_set(uint8_t bonded_port_id, uint8_t policy); * Balance transmit policy on success, negative value otherwise. */ int -rte_eth_bond_xmit_policy_get(uint8_t bonded_port_id); +rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id); /** * Set the link monitoring frequency (in ms) for monitoring the link status of @@ -304,7 +305,7 @@ rte_eth_bond_xmit_policy_get(uint8_t bonded_port_id); */ int -rte_eth_bond_link_monitoring_set(uint8_t bonded_port_id, uint32_t internal_ms); +rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms); /** * Get the current link monitoring frequency (in ms) for monitoring of the link @@ -316,8 +317,7 @@ rte_eth_bond_link_monitoring_set(uint8_t bonded_port_id, uint32_t internal_ms); * Monitoring interval on success, negative value otherwise. */ int -rte_eth_bond_link_monitoring_get(uint8_t bonded_port_id); - +rte_eth_bond_link_monitoring_get(uint16_t bonded_port_id); /** * Set the period in milliseconds for delaying the disabling of a bonded link @@ -330,7 +330,8 @@ rte_eth_bond_link_monitoring_get(uint8_t bonded_port_id); * 0 on success, negative value otherwise. */ int -rte_eth_bond_link_down_prop_delay_set(uint8_t bonded_port_id, uint32_t delay_ms); +rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id, + uint32_t delay_ms); /** * Get the period in milliseconds set for delaying the disabling of a bonded @@ -342,7 +343,7 @@ rte_eth_bond_link_down_prop_delay_set(uint8_t bonded_port_id, uint32_t delay_ms) * Delay period on success, negative value otherwise. 
*/ int -rte_eth_bond_link_down_prop_delay_get(uint8_t bonded_port_id); +rte_eth_bond_link_down_prop_delay_get(uint16_t bonded_port_id); /** * Set the period in milliseconds for delaying the enabling of a bonded link @@ -355,7 +356,8 @@ rte_eth_bond_link_down_prop_delay_get(uint8_t bonded_port_id); * 0 on success, negative value otherwise. */ int -rte_eth_bond_link_up_prop_delay_set(uint8_t bonded_port_id, uint32_t delay_ms); +rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id, + uint32_t delay_ms); /** * Get the period in milliseconds set for delaying the enabling of a bonded @@ -367,7 +369,7 @@ rte_eth_bond_link_up_prop_delay_set(uint8_t bonded_port_id, uint32_t delay_ms); * Delay period on success, negative value otherwise. */ int -rte_eth_bond_link_up_prop_delay_get(uint8_t bonded_port_id); +rte_eth_bond_link_up_prop_delay_get(uint16_t bonded_port_id); #ifdef __cplusplus diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c index 20b5a8961..1ca43b60c 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad.c +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c @@ -209,7 +209,7 @@ set_warning_flags(struct port *port, uint16_t flags) } static void -show_warnings(uint8_t slave_id) +show_warnings(uint16_t slave_id) { struct port *port = &mode_8023ad_ports[slave_id]; uint8_t warnings; @@ -278,7 +278,7 @@ record_default(struct port *port) * @param port Port on which LACPDU was received. */ static void -rx_machine(struct bond_dev_private *internals, uint8_t slave_id, +rx_machine(struct bond_dev_private *internals, uint16_t slave_id, struct lacpdu *lacp) { struct port *agg, *port = &mode_8023ad_ports[slave_id]; @@ -399,7 +399,7 @@ rx_machine(struct bond_dev_private *internals, uint8_t slave_id, * @param port Port to handle state machine. */ static void -periodic_machine(struct bond_dev_private *internals, uint8_t slave_id) +periodic_machine(struct bond_dev_private *internals, uint16_t slave_id) { struct port *port = &mode_8023ad_ports[slave_id]; /* Calculate if either site is LACP enabled */ @@ -461,7 +461,7 @@ periodic_machine(struct bond_dev_private *internals, uint8_t slave_id) * @param port Port to handle state machine. */ static void -mux_machine(struct bond_dev_private *internals, uint8_t slave_id) +mux_machine(struct bond_dev_private *internals, uint16_t slave_id) { struct port *port = &mode_8023ad_ports[slave_id]; @@ -511,7 +511,6 @@ mux_machine(struct bond_dev_private *internals, uint8_t slave_id) ACTOR_STATE_CLR(port, SYNCHRONIZATION); MODE4_DEBUG("Out of sync -> ATTACHED\n"); } - if (!ACTOR_STATE(port, SYNCHRONIZATION)) { /* attach mux to aggregator */ RTE_ASSERT((port->actor_state & (STATE_COLLECTING | @@ -564,7 +563,7 @@ mux_machine(struct bond_dev_private *internals, uint8_t slave_id) * @param port */ static void -tx_machine(struct bond_dev_private *internals, uint8_t slave_id) +tx_machine(struct bond_dev_private *internals, uint16_t slave_id) { struct port *agg, *port = &mode_8023ad_ports[slave_id]; @@ -685,11 +684,11 @@ max_index(uint64_t *a, int n) * @param port_pos Port to assign. 
*/ static void -selection_logic(struct bond_dev_private *internals, uint8_t slave_id) +selection_logic(struct bond_dev_private *internals, uint16_t slave_id) { struct port *agg, *port; - uint8_t slaves_count, new_agg_id, i, j = 0; - uint8_t *slaves; + uint8_t slaves_count, i, j = 0; + uint16_t *slaves, new_agg_id; uint64_t agg_bandwidth[8] = {0}; uint64_t agg_count[8] = {0}; uint8_t default_slave = 0; @@ -923,7 +922,8 @@ bond_mode_8023ad_periodic_cb(void *arg) } void -bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id) +bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, + uint16_t slave_id) { struct bond_dev_private *internals = bond_dev->data->dev_private; @@ -951,7 +951,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id) memcpy(&port->actor, &initial, sizeof(struct port_params)); /* Standard requires that port ID must be grater than 0. * Add 1 do get corresponding port_number */ - port->actor.port_number = rte_cpu_to_be_16((uint16_t)slave_id + 1); + port->actor.port_number = rte_cpu_to_be_16(slave_id + 1); memcpy(&port->partner, &initial, sizeof(struct port_params)); @@ -1022,12 +1022,12 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id) int bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev, - uint8_t slave_id) + uint16_t slave_id) { struct bond_dev_private *internals = bond_dev->data->dev_private; void *pkt = NULL; struct port *port; - uint8_t i; + uint16_t i; /* Given slave must be in active list */ RTE_ASSERT(find_slave_by_id(internals->active_slaves, @@ -1066,7 +1066,7 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev) struct bond_dev_private *internals = bond_dev->data->dev_private; struct ether_addr slave_addr; struct port *slave, *agg_slave; - uint8_t slave_id, i, j; + uint16_t slave_id, i, j; bond_mode_8023ad_stop(bond_dev); @@ -1277,7 +1277,7 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev) void bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals, - uint8_t slave_id, struct rte_mbuf *pkt) + uint16_t slave_id, struct rte_mbuf *pkt) { struct mode8023ad_private *mode4 = &internals->mode4; struct port *port = &mode_8023ad_ports[slave_id]; @@ -1358,7 +1358,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals, } int -rte_eth_bond_8023ad_conf_get_v20(uint8_t port_id, +rte_eth_bond_8023ad_conf_get_v20(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf) { struct rte_eth_dev *bond_dev; @@ -1376,7 +1376,7 @@ rte_eth_bond_8023ad_conf_get_v20(uint8_t port_id, VERSION_SYMBOL(rte_eth_bond_8023ad_conf_get, _v20, 2.0); int -rte_eth_bond_8023ad_conf_get_v1607(uint8_t port_id, +rte_eth_bond_8023ad_conf_get_v1607(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf) { struct rte_eth_dev *bond_dev; @@ -1394,7 +1394,7 @@ rte_eth_bond_8023ad_conf_get_v1607(uint8_t port_id, VERSION_SYMBOL(rte_eth_bond_8023ad_conf_get, _v1607, 16.07); int -rte_eth_bond_8023ad_conf_get_v1708(uint8_t port_id, +rte_eth_bond_8023ad_conf_get_v1708(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf) { struct rte_eth_dev *bond_dev; @@ -1409,13 +1409,13 @@ rte_eth_bond_8023ad_conf_get_v1708(uint8_t port_id, bond_mode_8023ad_conf_get_v1708(bond_dev, conf); return 0; } -MAP_STATIC_SYMBOL(int rte_eth_bond_8023ad_conf_get(uint8_t port_id, +MAP_STATIC_SYMBOL(int rte_eth_bond_8023ad_conf_get(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf), rte_eth_bond_8023ad_conf_get_v1708); BIND_DEFAULT_SYMBOL(rte_eth_bond_8023ad_conf_get, _v1708, 
17.08); int -rte_eth_bond_8023ad_agg_selection_set(uint8_t port_id, +rte_eth_bond_8023ad_agg_selection_set(uint16_t port_id, enum rte_bond_8023ad_agg_selection agg_selection) { struct rte_eth_dev *bond_dev; @@ -1437,7 +1437,7 @@ rte_eth_bond_8023ad_agg_selection_set(uint8_t port_id, return 0; } -int rte_eth_bond_8023ad_agg_selection_get(uint8_t port_id) +int rte_eth_bond_8023ad_agg_selection_get(uint16_t port_id) { struct rte_eth_dev *bond_dev; struct bond_dev_private *internals; @@ -1458,7 +1458,7 @@ int rte_eth_bond_8023ad_agg_selection_get(uint8_t port_id) static int -bond_8023ad_setup_validate(uint8_t port_id, +bond_8023ad_setup_validate(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf) { if (valid_bonded_port_id(port_id) != 0) @@ -1483,7 +1483,7 @@ bond_8023ad_setup_validate(uint8_t port_id, } int -rte_eth_bond_8023ad_setup_v20(uint8_t port_id, +rte_eth_bond_8023ad_setup_v20(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf) { struct rte_eth_dev *bond_dev; @@ -1501,7 +1501,7 @@ rte_eth_bond_8023ad_setup_v20(uint8_t port_id, VERSION_SYMBOL(rte_eth_bond_8023ad_setup, _v20, 2.0); int -rte_eth_bond_8023ad_setup_v1607(uint8_t port_id, +rte_eth_bond_8023ad_setup_v1607(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf) { struct rte_eth_dev *bond_dev; @@ -1520,7 +1520,7 @@ VERSION_SYMBOL(rte_eth_bond_8023ad_setup, _v1607, 16.07); int -rte_eth_bond_8023ad_setup_v1708(uint8_t port_id, +rte_eth_bond_8023ad_setup_v1708(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf) { struct rte_eth_dev *bond_dev; @@ -1536,17 +1536,12 @@ rte_eth_bond_8023ad_setup_v1708(uint8_t port_id, return 0; } BIND_DEFAULT_SYMBOL(rte_eth_bond_8023ad_setup, _v1708, 17.08); -MAP_STATIC_SYMBOL(int rte_eth_bond_8023ad_setup(uint8_t port_id, +MAP_STATIC_SYMBOL(int rte_eth_bond_8023ad_setup(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf), rte_eth_bond_8023ad_setup_v1708); - - - - - int -rte_eth_bond_8023ad_slave_info(uint8_t port_id, uint8_t slave_id, +rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id, struct rte_eth_bond_8023ad_slave_info *info) { struct rte_eth_dev *bond_dev; @@ -1579,7 +1574,7 @@ rte_eth_bond_8023ad_slave_info(uint8_t port_id, uint8_t slave_id, } static int -bond_8023ad_ext_validate(uint8_t port_id, uint8_t slave_id) +bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id) { struct rte_eth_dev *bond_dev; struct bond_dev_private *internals; @@ -1607,7 +1602,8 @@ bond_8023ad_ext_validate(uint8_t port_id, uint8_t slave_id) } int -rte_eth_bond_8023ad_ext_collect(uint8_t port_id, uint8_t slave_id, int enabled) +rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id, + int enabled) { struct port *port; int res; @@ -1622,12 +1618,13 @@ rte_eth_bond_8023ad_ext_collect(uint8_t port_id, uint8_t slave_id, int enabled) ACTOR_STATE_SET(port, COLLECTING); else ACTOR_STATE_CLR(port, COLLECTING); - + printf("enabled port->actor_state = %d \r\n", port->actor_state); return 0; } int -rte_eth_bond_8023ad_ext_distrib(uint8_t port_id, uint8_t slave_id, int enabled) +rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id, + int enabled) { struct port *port; int res; @@ -1647,7 +1644,7 @@ rte_eth_bond_8023ad_ext_distrib(uint8_t port_id, uint8_t slave_id, int enabled) } int -rte_eth_bond_8023ad_ext_distrib_get(uint8_t port_id, uint8_t slave_id) +rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id) { struct port *port; int err; @@ -1661,7 +1658,7 @@ rte_eth_bond_8023ad_ext_distrib_get(uint8_t port_id, uint8_t slave_id) } int 
-rte_eth_bond_8023ad_ext_collect_get(uint8_t port_id, uint8_t slave_id) +rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id) { struct port *port; int err; @@ -1675,8 +1672,8 @@ rte_eth_bond_8023ad_ext_collect_get(uint8_t port_id, uint8_t slave_id) } int -rte_eth_bond_8023ad_ext_slowtx(uint8_t port_id, uint8_t slave_id, - struct rte_mbuf *lacp_pkt) +rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id, + struct rte_mbuf *lacp_pkt) { struct port *port; int res; @@ -1736,7 +1733,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg) } int -rte_eth_bond_8023ad_dedicated_queues_enable(uint8_t port) +rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port) { int retval = 0; struct rte_eth_dev *dev = &rte_eth_devices[port]; @@ -1760,7 +1757,7 @@ rte_eth_bond_8023ad_dedicated_queues_enable(uint8_t port) } int -rte_eth_bond_8023ad_dedicated_queues_disable(uint8_t port) +rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port) { int retval = 0; struct rte_eth_dev *dev = &rte_eth_devices[port]; diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h index 1d353c734..6d36e8300 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad.h +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h @@ -64,7 +64,7 @@ extern "C" { #define MARKER_TLV_TYPE_INFO 0x01 #define MARKER_TLV_TYPE_RESP 0x02 -typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint8_t slave_id, +typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id, struct rte_mbuf *lacp_pkt); enum rte_bond_8023ad_selection { @@ -176,7 +176,7 @@ struct rte_eth_bond_8023ad_slave_info { struct port_params actor; uint8_t partner_state; struct port_params partner; - uint8_t agg_port_id; + uint16_t agg_port_id; }; /** @@ -192,16 +192,16 @@ struct rte_eth_bond_8023ad_slave_info { * -EINVAL if conf is NULL */ int -rte_eth_bond_8023ad_conf_get(uint8_t port_id, +rte_eth_bond_8023ad_conf_get(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); int -rte_eth_bond_8023ad_conf_get_v20(uint8_t port_id, +rte_eth_bond_8023ad_conf_get_v20(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); int -rte_eth_bond_8023ad_conf_get_v1607(uint8_t port_id, +rte_eth_bond_8023ad_conf_get_v1607(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); int -rte_eth_bond_8023ad_conf_get_v1708(uint8_t port_id, +rte_eth_bond_8023ad_conf_get_v1708(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); /** @@ -216,16 +216,16 @@ rte_eth_bond_8023ad_conf_get_v1708(uint8_t port_id, * -EINVAL if configuration is invalid. */ int -rte_eth_bond_8023ad_setup(uint8_t port_id, +rte_eth_bond_8023ad_setup(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); int -rte_eth_bond_8023ad_setup_v20(uint8_t port_id, +rte_eth_bond_8023ad_setup_v20(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); int -rte_eth_bond_8023ad_setup_v1607(uint8_t port_id, +rte_eth_bond_8023ad_setup_v1607(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); int -rte_eth_bond_8023ad_setup_v1708(uint8_t port_id, +rte_eth_bond_8023ad_setup_v1708(uint16_t port_id, struct rte_eth_bond_8023ad_conf *conf); /** @@ -241,7 +241,7 @@ rte_eth_bond_8023ad_setup_v1708(uint8_t port_id, * bonded device or is not inactive). */ int -rte_eth_bond_8023ad_slave_info(uint8_t port_id, uint8_t slave_id, +rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id, struct rte_eth_bond_8023ad_slave_info *conf); #ifdef __cplusplus @@ -259,7 +259,8 @@ rte_eth_bond_8023ad_slave_info(uint8_t port_id, uint8_t slave_id, * -EINVAL if slave is not valid. 
*/ int -rte_eth_bond_8023ad_ext_collect(uint8_t port_id, uint8_t slave_id, int enabled); +rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id, + int enabled); /** * Get COLLECTING flag from slave port actor state. @@ -272,7 +273,7 @@ rte_eth_bond_8023ad_ext_collect(uint8_t port_id, uint8_t slave_id, int enabled); * -EINVAL if slave is not valid. */ int -rte_eth_bond_8023ad_ext_collect_get(uint8_t port_id, uint8_t slave_id); +rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id); /** * Configure a slave port to start distributing. @@ -285,7 +286,8 @@ rte_eth_bond_8023ad_ext_collect_get(uint8_t port_id, uint8_t slave_id); * -EINVAL if slave is not valid. */ int -rte_eth_bond_8023ad_ext_distrib(uint8_t port_id, uint8_t slave_id, int enabled); +rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id, + int enabled); /** * Get DISTRIBUTING flag from slave port actor state. @@ -298,7 +300,7 @@ rte_eth_bond_8023ad_ext_distrib(uint8_t port_id, uint8_t slave_id, int enabled); * -EINVAL if slave is not valid. */ int -rte_eth_bond_8023ad_ext_distrib_get(uint8_t port_id, uint8_t slave_id); +rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id); /** * LACPDU transmit path for external 802.3ad state machine. Caller retains @@ -312,8 +314,8 @@ rte_eth_bond_8023ad_ext_distrib_get(uint8_t port_id, uint8_t slave_id); * 0 on success, negative value otherwise. */ int -rte_eth_bond_8023ad_ext_slowtx(uint8_t port_id, uint8_t slave_id, - struct rte_mbuf *lacp_pkt); +rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id, + struct rte_mbuf *lacp_pkt); /** * Enable dedicated hw queues for 802.3ad control plane traffic on on slaves @@ -338,7 +340,7 @@ rte_eth_bond_8023ad_ext_slowtx(uint8_t port_id, uint8_t slave_id, * 0 on success, negative value otherwise. */ int -rte_eth_bond_8023ad_dedicated_queues_enable(uint8_t port_id); +rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id); /** * Disable slow queue on slaves @@ -355,7 +357,7 @@ rte_eth_bond_8023ad_dedicated_queues_enable(uint8_t port_id); * */ int -rte_eth_bond_8023ad_dedicated_queues_disable(uint8_t port_id); +rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port_id); /* * Get aggregator mode for 8023ad @@ -365,7 +367,7 @@ rte_eth_bond_8023ad_dedicated_queues_disable(uint8_t port_id); * agregator mode on success, negative value otherwise */ int -rte_eth_bond_8023ad_agg_selection_get(uint8_t port_id); +rte_eth_bond_8023ad_agg_selection_get(uint16_t port_id); /** * Set aggregator mode for 8023ad @@ -374,6 +376,6 @@ rte_eth_bond_8023ad_agg_selection_get(uint8_t port_id); * 0 on success, negative value otherwise */ int -rte_eth_bond_8023ad_agg_selection_set(uint8_t port_id, +rte_eth_bond_8023ad_agg_selection_set(uint16_t port_id, enum rte_bond_8023ad_agg_selection agg_selection); #endif /* RTE_ETH_BOND_8023AD_H_ */ diff --git a/drivers/net/bonding/rte_eth_bond_8023ad_private.h b/drivers/net/bonding/rte_eth_bond_8023ad_private.h index d46e44a84..9ee5ca23d 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad_private.h +++ b/drivers/net/bonding/rte_eth_bond_8023ad_private.h @@ -279,7 +279,7 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev); */ void bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals, - uint8_t slave_id, struct rte_mbuf *pkt); + uint16_t slave_id, struct rte_mbuf *pkt); /** * @internal @@ -293,7 +293,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals, * 0 on success, negative value otherwise. 
*/ void -bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint8_t port_id); +bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id); /** * @internal @@ -307,7 +307,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint8_t port_id); * 0 on success, negative value otherwise. */ int -bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint8_t slave_pos); +bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos); /** * Updates state when MAC was changed on bonded device or one of its slaves. @@ -318,12 +318,12 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev); int bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev, - uint8_t slave_port); + uint16_t slave_port); int -bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint8_t slave_port); +bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port); int -bond_8023ad_slow_pkt_hw_filter_supported(uint8_t port_id); +bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id); #endif /* RTE_ETH_BOND_8023AD_H_ */ diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c index d9d37495d..f7efbb78e 100644 --- a/drivers/net/bonding/rte_eth_bond_alb.c +++ b/drivers/net/bonding/rte_eth_bond_alb.c @@ -148,7 +148,7 @@ void bond_mode_alb_arp_recv(struct ether_hdr *eth_h, uint16_t offset, rte_spinlock_unlock(&internals->mode6.lock); } -uint8_t +uint16_t bond_mode_alb_arp_xmit(struct ether_hdr *eth_h, uint16_t offset, struct bond_dev_private *internals) { @@ -220,13 +220,13 @@ bond_mode_alb_arp_xmit(struct ether_hdr *eth_h, uint16_t offset, return internals->current_primary_port; } -uint8_t +uint16_t bond_mode_alb_arp_upd(struct client_data *client_info, struct rte_mbuf *pkt, struct bond_dev_private *internals) { struct ether_hdr *eth_h; struct arp_hdr *arp_h; - uint8_t slave_idx; + uint16_t slave_idx; rte_spinlock_lock(&internals->mode6.lock); eth_h = rte_pktmbuf_mtod(pkt, struct ether_hdr *); diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h index fd7c3aeb4..9f17f7c85 100644 --- a/drivers/net/bonding/rte_eth_bond_alb.h +++ b/drivers/net/bonding/rte_eth_bond_alb.h @@ -51,7 +51,7 @@ struct client_data { uint32_t cli_ip; /**< Client IP address */ - uint8_t slave_idx; + uint16_t slave_idx; /**< Index of slave on which we connect with that client */ uint8_t in_use; /**< Flag indicating if entry in client table is currently used */ @@ -113,7 +113,7 @@ bond_mode_alb_arp_recv(struct ether_hdr *eth_h, uint16_t offset, * @return * Index of slave on which packet should be sent. */ -uint8_t +uint16_t bond_mode_alb_arp_xmit(struct ether_hdr *eth_h, uint16_t offset, struct bond_dev_private *internals); @@ -127,7 +127,7 @@ bond_mode_alb_arp_xmit(struct ether_hdr *eth_h, uint16_t offset, * @return * Index of slawe on which packet should be sent. 
*/ -uint8_t +uint16_t bond_mode_alb_arp_upd(struct client_data *client_info, struct rte_mbuf *pkt, struct bond_dev_private *internals); diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c index de1d9e0db..957390f71 100644 --- a/drivers/net/bonding/rte_eth_bond_api.c +++ b/drivers/net/bonding/rte_eth_bond_api.c @@ -56,14 +56,14 @@ check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev) } int -valid_bonded_port_id(uint8_t port_id) +valid_bonded_port_id(uint16_t port_id) { RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -1); return check_for_bonded_ethdev(&rte_eth_devices[port_id]); } int -valid_slave_port_id(uint8_t port_id, uint8_t mode) +valid_slave_port_id(uint16_t port_id, uint8_t mode) { RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -1); @@ -80,7 +80,7 @@ valid_slave_port_id(uint8_t port_id, uint8_t mode) } void -activate_slave(struct rte_eth_dev *eth_dev, uint8_t port_id) +activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id) { struct bond_dev_private *internals = eth_dev->data->dev_private; uint8_t active_count = internals->active_slave_count; @@ -107,11 +107,11 @@ activate_slave(struct rte_eth_dev *eth_dev, uint8_t port_id) } void -deactivate_slave(struct rte_eth_dev *eth_dev, uint8_t port_id) +deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id) { - uint8_t slave_pos; + uint16_t slave_pos; struct bond_dev_private *internals = eth_dev->data->dev_private; - uint8_t active_count = internals->active_slave_count; + uint16_t active_count = internals->active_slave_count; if (internals->mode == BONDING_MODE_8023AD) { bond_mode_8023ad_stop(eth_dev); @@ -153,7 +153,7 @@ rte_eth_bond_create(const char *name, uint8_t mode, uint8_t socket_id) { struct bond_dev_private *internals; char devargs[52]; - uint8_t port_id; + uint16_t port_id; int ret; if (name == NULL) { @@ -193,7 +193,7 @@ rte_eth_bond_free(const char *name) } static int -slave_vlan_filter_set(uint8_t bonded_port_id, uint8_t slave_port_id) +slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id) { struct rte_eth_dev *bonded_eth_dev; struct bond_dev_private *internals; @@ -233,7 +233,7 @@ slave_vlan_filter_set(uint8_t bonded_port_id, uint8_t slave_port_id) } static int -__eth_bond_slave_add_lock_free(uint8_t bonded_port_id, uint8_t slave_port_id) +__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id) { struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev; struct bond_dev_private *internals; @@ -363,7 +363,7 @@ __eth_bond_slave_add_lock_free(uint8_t bonded_port_id, uint8_t slave_port_id) } int -rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id) +rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id) { struct rte_eth_dev *bonded_eth_dev; struct bond_dev_private *internals; @@ -387,7 +387,8 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id) } static int -__eth_bond_slave_remove_lock_free(uint8_t bonded_port_id, uint8_t slave_port_id) +__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id, + uint16_t slave_port_id) { struct rte_eth_dev *bonded_eth_dev; struct bond_dev_private *internals; @@ -466,7 +467,7 @@ __eth_bond_slave_remove_lock_free(uint8_t bonded_port_id, uint8_t slave_port_id) } int -rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id) +rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id) { struct rte_eth_dev *bonded_eth_dev; struct bond_dev_private *internals; @@ -488,7 +489,7 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t 
slave_port_id) } int -rte_eth_bond_mode_set(uint8_t bonded_port_id, uint8_t mode) +rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode) { if (valid_bonded_port_id(bonded_port_id) != 0) return -1; @@ -497,7 +498,7 @@ rte_eth_bond_mode_set(uint8_t bonded_port_id, uint8_t mode) } int -rte_eth_bond_mode_get(uint8_t bonded_port_id) +rte_eth_bond_mode_get(uint16_t bonded_port_id) { struct bond_dev_private *internals; @@ -510,7 +511,7 @@ rte_eth_bond_mode_get(uint8_t bonded_port_id) } int -rte_eth_bond_primary_set(uint8_t bonded_port_id, uint8_t slave_port_id) +rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id) { struct bond_dev_private *internals; @@ -531,7 +532,7 @@ rte_eth_bond_primary_set(uint8_t bonded_port_id, uint8_t slave_port_id) } int -rte_eth_bond_primary_get(uint8_t bonded_port_id) +rte_eth_bond_primary_get(uint16_t bonded_port_id) { struct bond_dev_private *internals; @@ -547,7 +548,8 @@ rte_eth_bond_primary_get(uint8_t bonded_port_id) } int -rte_eth_bond_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], uint8_t len) +rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[], + uint16_t len) { struct bond_dev_private *internals; uint8_t i; @@ -570,8 +572,8 @@ rte_eth_bond_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], uint8_t len) } int -rte_eth_bond_active_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], - uint8_t len) +rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[], + uint16_t len) { struct bond_dev_private *internals; @@ -586,13 +588,14 @@ rte_eth_bond_active_slaves_get(uint8_t bonded_port_id, uint8_t slaves[], if (internals->active_slave_count > len) return -1; - memcpy(slaves, internals->active_slaves, internals->active_slave_count); + memcpy(slaves, internals->active_slaves, + internals->active_slave_count * sizeof(internals->active_slaves[0])); return internals->active_slave_count; } int -rte_eth_bond_mac_address_set(uint8_t bonded_port_id, +rte_eth_bond_mac_address_set(uint16_t bonded_port_id, struct ether_addr *mac_addr) { struct rte_eth_dev *bonded_eth_dev; @@ -618,7 +621,7 @@ rte_eth_bond_mac_address_set(uint8_t bonded_port_id, } int -rte_eth_bond_mac_address_reset(uint8_t bonded_port_id) +rte_eth_bond_mac_address_reset(uint16_t bonded_port_id) { struct rte_eth_dev *bonded_eth_dev; struct bond_dev_private *internals; @@ -647,7 +650,7 @@ rte_eth_bond_mac_address_reset(uint8_t bonded_port_id) } int -rte_eth_bond_xmit_policy_set(uint8_t bonded_port_id, uint8_t policy) +rte_eth_bond_xmit_policy_set(uint16_t bonded_port_id, uint8_t policy) { struct bond_dev_private *internals; @@ -677,7 +680,7 @@ rte_eth_bond_xmit_policy_set(uint8_t bonded_port_id, uint8_t policy) } int -rte_eth_bond_xmit_policy_get(uint8_t bonded_port_id) +rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id) { struct bond_dev_private *internals; @@ -690,7 +693,7 @@ rte_eth_bond_xmit_policy_get(uint8_t bonded_port_id) } int -rte_eth_bond_link_monitoring_set(uint8_t bonded_port_id, uint32_t internal_ms) +rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms) { struct bond_dev_private *internals; @@ -704,7 +707,7 @@ rte_eth_bond_link_monitoring_set(uint8_t bonded_port_id, uint32_t internal_ms) } int -rte_eth_bond_link_monitoring_get(uint8_t bonded_port_id) +rte_eth_bond_link_monitoring_get(uint16_t bonded_port_id) { struct bond_dev_private *internals; @@ -717,7 +720,8 @@ rte_eth_bond_link_monitoring_get(uint8_t bonded_port_id) } int -rte_eth_bond_link_down_prop_delay_set(uint8_t bonded_port_id, uint32_t 
delay_ms) +rte_eth_bond_link_down_prop_delay_set(uint16_t bonded_port_id, + uint32_t delay_ms) { struct bond_dev_private *internals; @@ -732,7 +736,7 @@ rte_eth_bond_link_down_prop_delay_set(uint8_t bonded_port_id, uint32_t delay_ms) } int -rte_eth_bond_link_down_prop_delay_get(uint8_t bonded_port_id) +rte_eth_bond_link_down_prop_delay_get(uint16_t bonded_port_id) { struct bond_dev_private *internals; @@ -745,7 +749,7 @@ rte_eth_bond_link_down_prop_delay_get(uint8_t bonded_port_id) } int -rte_eth_bond_link_up_prop_delay_set(uint8_t bonded_port_id, uint32_t delay_ms) +rte_eth_bond_link_up_prop_delay_set(uint16_t bonded_port_id, uint32_t delay_ms) { struct bond_dev_private *internals; @@ -760,7 +764,7 @@ rte_eth_bond_link_up_prop_delay_set(uint8_t bonded_port_id, uint32_t delay_ms) } int -rte_eth_bond_link_up_prop_delay_get(uint8_t bonded_port_id) +rte_eth_bond_link_up_prop_delay_get(uint16_t bonded_port_id) { struct bond_dev_private *internals; diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c index bb634c62e..04d1f4e8f 100644 --- a/drivers/net/bonding/rte_eth_bond_args.c +++ b/drivers/net/bonding/rte_eth_bond_args.c @@ -153,7 +153,7 @@ bond_ethdev_parse_slave_port_kvarg(const char *key, return -1; } else slave_ports->slaves[slave_ports->slave_count++] = - (uint8_t)port_id; + port_id; } return 0; } diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index 3ee70baa0..2fea9423f 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -174,7 +174,7 @@ const struct rte_flow_attr flow_attr_8023ad = { int bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev, - uint8_t slave_port) { + uint16_t slave_port) { struct rte_flow_error error; struct bond_dev_private *internals = (struct bond_dev_private *) (bond_dev->data->dev_private); @@ -202,12 +202,12 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev, } int -bond_8023ad_slow_pkt_hw_filter_supported(uint8_t port_id) { +bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) { struct rte_eth_dev *bond_dev = &rte_eth_devices[port_id]; struct bond_dev_private *internals = (struct bond_dev_private *) (bond_dev->data->dev_private); struct rte_eth_dev_info bond_info, slave_info; - uint8_t idx; + uint16_t idx; /* Verify if all slaves in bonding supports flow director and */ if (internals->slave_count > 0) { @@ -230,7 +230,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint8_t port_id) { } int -bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint8_t slave_port) { +bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) { struct rte_flow_error error; struct bond_dev_private *internals = (struct bond_dev_private *) @@ -270,10 +270,10 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs, struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue; struct bond_dev_private *internals = bd_rx_q->dev_private; uint16_t num_rx_total = 0; /* Total number of received packets */ - uint8_t slaves[RTE_MAX_ETHPORTS]; - uint8_t slave_count; + uint16_t slaves[RTE_MAX_ETHPORTS]; + uint16_t slave_count; - uint8_t i, idx; + uint16_t i, idx; /* Copy slave list to protect against slave up/down changes during tx * bursting */ @@ -302,8 +302,8 @@ bond_ethdev_tx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs, struct bond_dev_private *internals; struct bond_tx_queue *bd_tx_q; - uint8_t num_of_slaves; - uint8_t slaves[RTE_MAX_ETHPORTS]; + uint16_t num_of_slaves; + 
uint16_t slaves[RTE_MAX_ETHPORTS]; /* positions in slaves, not ID */ uint8_t distributing_offsets[RTE_MAX_ETHPORTS]; uint8_t distributing_count; @@ -394,8 +394,8 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW); uint16_t num_rx_total = 0; /* Total number of received packets */ - uint8_t slaves[RTE_MAX_ETHPORTS]; - uint8_t slave_count, idx; + uint16_t slaves[RTE_MAX_ETHPORTS]; + uint16_t slave_count, idx; uint8_t collecting; /* current slave collecting status */ const uint8_t promisc = internals->promiscuous_en; @@ -673,8 +673,8 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs, struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts]; uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 }; - uint8_t num_of_slaves; - uint8_t slaves[RTE_MAX_ETHPORTS]; + uint16_t num_of_slaves; + uint16_t slaves[RTE_MAX_ETHPORTS]; uint16_t num_tx_total = 0, num_tx_slave; @@ -904,7 +904,7 @@ bandwidth_cmp(const void *a, const void *b) } static void -bandwidth_left(uint8_t port_id, uint64_t load, uint8_t update_idx, +bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx, struct bwg_slave *bwg_slave) { struct rte_eth_link link_status; @@ -970,10 +970,10 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_eth_dev *primary_port = &rte_eth_devices[internals->primary_port]; uint16_t num_tx_total = 0; - uint8_t i, j; + uint16_t i, j; - uint8_t num_of_slaves = internals->active_slave_count; - uint8_t slaves[RTE_MAX_ETHPORTS]; + uint16_t num_of_slaves = internals->active_slave_count; + uint16_t slaves[RTE_MAX_ETHPORTS]; struct ether_hdr *ether_hdr; struct ether_addr primary_slave_addr; @@ -1059,7 +1059,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) uint16_t num_send, num_not_send = 0; uint16_t num_tx_total = 0; - uint8_t slave_idx; + uint16_t slave_idx; int i, j; @@ -1178,8 +1178,8 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs, struct bond_dev_private *internals; struct bond_tx_queue *bd_tx_q; - uint8_t num_of_slaves; - uint8_t slaves[RTE_MAX_ETHPORTS]; + uint16_t num_of_slaves; + uint16_t slaves[RTE_MAX_ETHPORTS]; uint16_t num_tx_total = 0, num_tx_slave = 0, tx_fail_total = 0; @@ -1239,8 +1239,8 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, struct bond_dev_private *internals; struct bond_tx_queue *bd_tx_q; - uint8_t num_of_slaves; - uint8_t slaves[RTE_MAX_ETHPORTS]; + uint16_t num_of_slaves; + uint16_t slaves[RTE_MAX_ETHPORTS]; /* positions in slaves, not ID */ uint8_t distributing_offsets[RTE_MAX_ETHPORTS]; uint8_t distributing_count; @@ -1333,7 +1333,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs, struct bond_tx_queue *bd_tx_q; uint8_t tx_failed_flag = 0, num_of_slaves; - uint8_t slaves[RTE_MAX_ETHPORTS]; + uint16_t slaves[RTE_MAX_ETHPORTS]; uint16_t max_nb_of_tx_pkts = 0; @@ -1861,7 +1861,7 @@ slave_add(struct bond_dev_private *internals, void bond_ethdev_primary_set(struct bond_dev_private *internals, - uint8_t slave_port_id) + uint16_t slave_port_id) { int i; @@ -2125,7 +2125,7 @@ static int bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) { int res; - uint8_t i; + uint16_t i; struct bond_dev_private *internals = dev->data->dev_private; /* don't do this while a slave is being added */ @@ -2137,7 +2137,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) 
rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id); for (i = 0; i < internals->slave_count; i++) { - uint8_t port_id = internals->slaves[i].port_id; + uint16_t port_id = internals->slaves[i].port_id; res = rte_eth_dev_vlan_filter(port_id, vlan_id, on); if (res == ENOTSUP) @@ -2277,7 +2277,7 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg) static int bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete) { - void (*link_update)(uint8_t port_id, struct rte_eth_link *eth_link); + void (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link); struct bond_dev_private *bond_ctx; struct rte_eth_link slave_link; @@ -2466,8 +2466,8 @@ bond_ethdev_delayed_lsc_propagation(void *arg) } int -bond_ethdev_lsc_event_callback(uint8_t port_id, enum rte_eth_event_type type, - void *param, void *ret_param __rte_unused) +bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type, + void *param, void *ret_param __rte_unused) { struct rte_eth_dev *bonded_eth_dev; struct bond_dev_private *internals; @@ -2951,7 +2951,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev) struct bond_dev_private *internals = dev->data->dev_private; struct rte_kvargs *kvlist = internals->kvlist; int arg_count; - uint8_t port_id = dev - rte_eth_devices; + uint16_t port_id = dev - rte_eth_devices; uint8_t agg_mode; static const uint8_t default_rss_key[40] = { @@ -3086,7 +3086,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev) /* Parse/set primary slave port id*/ arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG); if (arg_count == 1) { - uint8_t primary_slave_port_id; + uint16_t primary_slave_port_id; if (rte_kvargs_process(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG, @@ -3099,7 +3099,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev) } /* Set balance mode transmit policy*/ - if (rte_eth_bond_primary_set(port_id, (uint8_t)primary_slave_port_id) + if (rte_eth_bond_primary_set(port_id, primary_slave_port_id) != 0) { RTE_LOG(ERR, EAL, "Failed to set primary slave port %d on bonded device %s\n", diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h index 1fe6ff880..ec604875e 100644 --- a/drivers/net/bonding/rte_eth_bond_private.h +++ b/drivers/net/bonding/rte_eth_bond_private.h @@ -93,12 +93,12 @@ struct bond_tx_queue { /** Bonded slave devices structure */ struct bond_ethdev_slave_ports { - uint8_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */ - uint8_t slave_count; /**< Number of slaves */ + uint16_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */ + uint16_t slave_count; /**< Number of slaves */ }; struct bond_slave_details { - uint8_t port_id; + uint16_t port_id; uint8_t link_status_poll_enabled; uint8_t link_status_wait_to_complete; @@ -114,14 +114,14 @@ typedef uint16_t (*xmit_hash_t)(const struct rte_mbuf *buf, uint8_t slave_count) /** Link Bonding PMD device private configuration Structure */ struct bond_dev_private { - uint8_t port_id; /**< Port Id of Bonded Port */ - uint8_t mode; /**< Link Bonding Mode */ + uint16_t port_id; /**< Port Id of Bonded Port */ + uint8_t mode; /**< Link Bonding Mode */ rte_spinlock_t lock; - uint8_t primary_port; /**< Primary Slave Port */ - uint8_t current_primary_port; /**< Primary Slave Port */ - uint8_t user_defined_primary_port; + uint16_t primary_port; /**< Primary Slave Port */ + uint16_t current_primary_port; /**< Primary Slave Port */ + uint16_t user_defined_primary_port; /**< Flag for whether primary port is user defined or not */ uint8_t balance_xmit_policy; @@ 
-144,16 +144,17 @@ struct bond_dev_private { uint16_t nb_rx_queues; /**< Total number of rx queues */ uint16_t nb_tx_queues; /**< Total number of tx queues*/ - uint8_t active_slave; /**< Next active_slave to poll */ - uint8_t active_slave_count; /**< Number of active slaves */ - uint8_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */ + uint16_t active_slave; /**< Next active_slave to poll */ + uint16_t active_slave_count; /**< Number of active slaves */ + uint16_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */ - uint8_t slave_count; /**< Number of bonded slaves */ + uint16_t slave_count; /**< Number of bonded slaves */ struct bond_slave_details slaves[RTE_MAX_ETHPORTS]; /**< Arary of bonded slaves details */ struct mode8023ad_private mode4; - uint8_t tlb_slaves_order[RTE_MAX_ETHPORTS]; /* TLB active slaves send order */ + /**< TLB active slaves send order */ + uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS]; struct mode_alb_private mode6; uint32_t rx_offload_capa; /** Rx offload capability */ @@ -186,10 +187,10 @@ check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev); /* Search given slave array to find position of given id. * Return slave pos or slaves_count if not found. */ -static inline uint8_t -find_slave_by_id(uint8_t *slaves, uint8_t slaves_count, uint8_t slave_id) { +static inline uint16_t +find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) { - uint8_t pos; + uint16_t pos; for (pos = 0; pos < slaves_count; pos++) { if (slave_id == slaves[pos]) break; @@ -199,19 +200,19 @@ find_slave_by_id(uint8_t *slaves, uint8_t slaves_count, uint8_t slave_id) { } int -valid_port_id(uint8_t port_id); +valid_port_id(uint16_t port_id); int -valid_bonded_port_id(uint8_t port_id); +valid_bonded_port_id(uint16_t port_id); int -valid_slave_port_id(uint8_t port_id, uint8_t mode); +valid_slave_port_id(uint16_t port_id, uint8_t mode); void -deactivate_slave(struct rte_eth_dev *eth_dev, uint8_t port_id); +deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id); void -activate_slave(struct rte_eth_dev *eth_dev, uint8_t port_id); +activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id); void link_properties_set(struct rte_eth_dev *bonded_eth_dev, @@ -255,11 +256,11 @@ xmit_l34_hash(const struct rte_mbuf *buf, uint8_t slave_count); void bond_ethdev_primary_set(struct bond_dev_private *internals, - uint8_t slave_port_id); + uint16_t slave_port_id); int -bond_ethdev_lsc_event_callback(uint8_t port_id, enum rte_eth_event_type type, - void *param, void *ret_param); +bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type, + void *param, void *ret_param); int bond_ethdev_parse_slave_port_kvarg(const char *key, diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 3d4ab9368..a59947d78 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -1624,7 +1624,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev, rte_em_dev_atomic_read_link_status(dev, &link); if (link.link_status) { PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s", - dev->data->port_id, (unsigned)link.link_speed, + dev->data->port_id, link.link_speed, link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full-duplex" : "half-duplex"); } else { diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c index 31819c5bd..06ba68e39 100644 --- a/drivers/net/e1000/em_rxtx.c +++ b/drivers/net/e1000/em_rxtx.c @@ -119,7 +119,7 @@ struct em_rx_queue { uint16_t nb_rx_hold; /**< number of held free RX desc. 
*/ uint16_t rx_free_thresh; /**< max free RX desc to hold. */ uint16_t queue_id; /**< RX queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ uint8_t pthresh; /**< Prefetch threshold register. */ uint8_t hthresh; /**< Host threshold register. */ uint8_t wthresh; /**< Write-back threshold register. */ @@ -186,7 +186,7 @@ struct em_tx_queue { /** Total number of TX descriptors ready to be allocated. */ uint16_t nb_tx_free; uint16_t queue_id; /**< TX queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ uint8_t pthresh; /**< Prefetch threshold register. */ uint8_t hthresh; /**< Host threshold register. */ uint8_t wthresh; /**< Write-back threshold register. */ diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c index 1c80a2a1b..a800d9c2b 100644 --- a/drivers/net/e1000/igb_rxtx.c +++ b/drivers/net/e1000/igb_rxtx.c @@ -122,7 +122,7 @@ struct igb_rx_queue { uint16_t rx_free_thresh; /**< max free RX desc to hold. */ uint16_t queue_id; /**< RX queue index. */ uint16_t reg_idx; /**< RX queue register index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ uint8_t pthresh; /**< Prefetch threshold register. */ uint8_t hthresh; /**< Host threshold register. */ uint8_t wthresh; /**< Write-back threshold register. */ @@ -191,7 +191,7 @@ struct igb_tx_queue { /**< Index of first used TX descriptor. */ uint16_t queue_id; /**< TX queue index. */ uint16_t reg_idx; /**< TX queue register index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ uint8_t pthresh; /**< Prefetch threshold register. */ uint8_t hthresh; /**< Host threshold register. */ uint8_t wthresh; /**< Write-back threshold register. 
*/ diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c index a3a8cce95..1c8a9337e 100644 --- a/drivers/net/failsafe/failsafe_ether.c +++ b/drivers/net/failsafe/failsafe_ether.c @@ -400,7 +400,7 @@ failsafe_eth_dev_state_sync(struct rte_eth_dev *dev) } int -failsafe_eth_rmv_event_callback(uint8_t port_id __rte_unused, +failsafe_eth_rmv_event_callback(uint16_t port_id __rte_unused, enum rte_eth_event_type event __rte_unused, void *cb_arg, void *out __rte_unused) { @@ -419,7 +419,7 @@ failsafe_eth_rmv_event_callback(uint8_t port_id __rte_unused, } int -failsafe_eth_lsc_event_callback(uint8_t port_id __rte_unused, +failsafe_eth_lsc_event_callback(uint16_t port_id __rte_unused, enum rte_eth_event_type event __rte_unused, void *cb_arg, void *out __rte_unused) { diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h index 0361cf434..4ae6e6c5f 100644 --- a/drivers/net/failsafe/failsafe_private.h +++ b/drivers/net/failsafe/failsafe_private.h @@ -180,10 +180,10 @@ int failsafe_eal_uninit(struct rte_eth_dev *dev); int failsafe_eth_dev_state_sync(struct rte_eth_dev *dev); void failsafe_dev_remove(struct rte_eth_dev *dev); -int failsafe_eth_rmv_event_callback(uint8_t port_id, +int failsafe_eth_rmv_event_callback(uint16_t port_id, enum rte_eth_event_type type, void *arg, void *out); -int failsafe_eth_lsc_event_callback(uint8_t port_id, +int failsafe_eth_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type event, void *cb_arg, void *out); diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h index 8e1a95062..060982b10 100644 --- a/drivers/net/fm10k/fm10k.h +++ b/drivers/net/fm10k/fm10k.h @@ -204,7 +204,7 @@ struct fm10k_rx_queue { uint16_t rxrearm_nb; /* number of remaining to be re-armed */ uint16_t rxrearm_start; /* the idx we start the re-arming from */ uint16_t rx_using_sse; /* indicates that vector RX is in use */ - uint8_t port_id; + uint16_t port_id; uint8_t drop_en; uint8_t rx_deferred_start; /* don't start this queue in dev start. */ uint16_t rx_ftag_en; /* indicates FTAG RX supported */ @@ -241,7 +241,7 @@ struct fm10k_tx_queue { volatile uint32_t *tail_ptr; uint32_t txq_flags; /* Holds flags for this TXq */ uint16_t nb_desc; - uint8_t port_id; + uint16_t port_id; uint8_t tx_deferred_start; /** don't start this queue in dev start. 
*/ uint16_t queue_id; uint16_t tx_ftag_en; /* indicates FTAG TX supported */ @@ -289,7 +289,7 @@ static inline uint16_t fifo_remove(struct fifo *fifo) } static inline void -fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint8_t in_port) +fm10k_pktmbuf_reset(struct rte_mbuf *mb, uint16_t in_port) { rte_mbuf_refcnt_set(mb, 1); mb->next = NULL; diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 5f26e24a3..d17d7497f 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -1918,8 +1918,9 @@ i40e_dev_start(struct rte_eth_dev *dev) hw->adapter_stopped = 0; if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) { - PMD_INIT_LOG(ERR, "Invalid link_speeds for port %hhu; autonegotiation disabled", - dev->data->port_id); + PMD_INIT_LOG(ERR, + "Invalid link_speeds for port %u, autonegotiation disabled", + dev->data->port_id); return -EINVAL; } diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index 20084d649..ff2ab8575 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ -121,7 +121,7 @@ struct i40e_rx_queue { uint16_t rxrearm_start; /**< the idx we start the re-arming from */ uint64_t mbuf_initializer; /**< value to init mbufs */ - uint8_t port_id; /**< device port ID */ + uint16_t port_id; /**< device port ID */ uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise */ uint16_t queue_id; /**< RX queue index */ uint16_t reg_idx; /**< RX queue register index */ @@ -167,7 +167,7 @@ struct i40e_tx_queue { uint8_t pthresh; /**< Prefetch threshold register. */ uint8_t hthresh; /**< Host threshold register. */ uint8_t wthresh; /**< Write-back threshold reg. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ uint16_t queue_id; /**< TX queue index. 
*/ uint16_t reg_idx; uint32_t txq_flags; diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c index f12b7f4a1..3728d39b9 100644 --- a/drivers/net/i40e/rte_pmd_i40e.c +++ b/drivers/net/i40e/rte_pmd_i40e.c @@ -41,7 +41,7 @@ #include "rte_pmd_i40e.h" int -rte_pmd_i40e_ping_vfs(uint8_t port, uint16_t vf) +rte_pmd_i40e_ping_vfs(uint16_t port, uint16_t vf) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -66,7 +66,7 @@ rte_pmd_i40e_ping_vfs(uint8_t port, uint16_t vf) } int -rte_pmd_i40e_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf_id, uint8_t on) +rte_pmd_i40e_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -170,7 +170,7 @@ i40e_add_rm_all_vlan_filter(struct i40e_vsi *vsi, uint8_t add) } int -rte_pmd_i40e_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf_id, uint8_t on) +rte_pmd_i40e_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -430,7 +430,7 @@ i40e_vsi_set_tx_loopback(struct i40e_vsi *vsi, uint8_t on) } int -rte_pmd_i40e_set_tx_loopback(uint8_t port, uint8_t on) +rte_pmd_i40e_set_tx_loopback(uint16_t port, uint8_t on) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -473,7 +473,7 @@ rte_pmd_i40e_set_tx_loopback(uint8_t port, uint8_t on) } int -rte_pmd_i40e_set_vf_unicast_promisc(uint8_t port, uint16_t vf_id, uint8_t on) +rte_pmd_i40e_set_vf_unicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -514,7 +514,7 @@ rte_pmd_i40e_set_vf_unicast_promisc(uint8_t port, uint16_t vf_id, uint8_t on) } int -rte_pmd_i40e_set_vf_multicast_promisc(uint8_t port, uint16_t vf_id, uint8_t on) +rte_pmd_i40e_set_vf_multicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -555,7 +555,7 @@ rte_pmd_i40e_set_vf_multicast_promisc(uint8_t port, uint16_t vf_id, uint8_t on) } int -rte_pmd_i40e_set_vf_mac_addr(uint8_t port, uint16_t vf_id, +rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id, struct ether_addr *mac_addr) { struct i40e_mac_filter *f; @@ -598,7 +598,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint8_t port, uint16_t vf_id, /* Set vlan strip on/off for specific VF from host */ int -rte_pmd_i40e_set_vf_vlan_stripq(uint8_t port, uint16_t vf_id, uint8_t on) +rte_pmd_i40e_set_vf_vlan_stripq(uint16_t port, uint16_t vf_id, uint8_t on) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -633,7 +633,7 @@ rte_pmd_i40e_set_vf_vlan_stripq(uint8_t port, uint16_t vf_id, uint8_t on) return ret; } -int rte_pmd_i40e_set_vf_vlan_insert(uint8_t port, uint16_t vf_id, +int rte_pmd_i40e_set_vf_vlan_insert(uint16_t port, uint16_t vf_id, uint16_t vlan_id) { struct rte_eth_dev *dev; @@ -698,7 +698,7 @@ int rte_pmd_i40e_set_vf_vlan_insert(uint8_t port, uint16_t vf_id, return ret; } -int rte_pmd_i40e_set_vf_broadcast(uint8_t port, uint16_t vf_id, +int rte_pmd_i40e_set_vf_broadcast(uint16_t port, uint16_t vf_id, uint8_t on) { struct rte_eth_dev *dev; @@ -764,7 +764,7 @@ int rte_pmd_i40e_set_vf_broadcast(uint8_t port, uint16_t vf_id, return ret; } -int rte_pmd_i40e_set_vf_vlan_tag(uint8_t port, uint16_t vf_id, uint8_t on) +int rte_pmd_i40e_set_vf_vlan_tag(uint16_t port, uint16_t vf_id, uint8_t on) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -858,7 +858,7 @@ i40e_vlan_filter_count(struct i40e_vsi *vsi) return count; } -int rte_pmd_i40e_set_vf_vlan_filter(uint8_t port, uint16_t vlan_id, +int rte_pmd_i40e_set_vf_vlan_filter(uint16_t port, uint16_t vlan_id, uint64_t 
vf_mask, uint8_t on) { struct rte_eth_dev *dev; @@ -941,7 +941,7 @@ int rte_pmd_i40e_set_vf_vlan_filter(uint8_t port, uint16_t vlan_id, } int -rte_pmd_i40e_get_vf_stats(uint8_t port, +rte_pmd_i40e_get_vf_stats(uint16_t port, uint16_t vf_id, struct rte_eth_stats *stats) { @@ -986,7 +986,7 @@ rte_pmd_i40e_get_vf_stats(uint8_t port, } int -rte_pmd_i40e_reset_vf_stats(uint8_t port, +rte_pmd_i40e_reset_vf_stats(uint16_t port, uint16_t vf_id) { struct rte_eth_dev *dev; @@ -1020,7 +1020,7 @@ rte_pmd_i40e_reset_vf_stats(uint8_t port, } int -rte_pmd_i40e_set_vf_max_bw(uint8_t port, uint16_t vf_id, uint32_t bw) +rte_pmd_i40e_set_vf_max_bw(uint16_t port, uint16_t vf_id, uint32_t bw) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -1109,7 +1109,7 @@ rte_pmd_i40e_set_vf_max_bw(uint8_t port, uint16_t vf_id, uint32_t bw) } int -rte_pmd_i40e_set_vf_tc_bw_alloc(uint8_t port, uint16_t vf_id, +rte_pmd_i40e_set_vf_tc_bw_alloc(uint16_t port, uint16_t vf_id, uint8_t tc_num, uint8_t *bw_weight) { struct rte_eth_dev *dev; @@ -1223,7 +1223,7 @@ rte_pmd_i40e_set_vf_tc_bw_alloc(uint8_t port, uint16_t vf_id, } int -rte_pmd_i40e_set_vf_tc_max_bw(uint8_t port, uint16_t vf_id, +rte_pmd_i40e_set_vf_tc_max_bw(uint16_t port, uint16_t vf_id, uint8_t tc_no, uint32_t bw) { struct rte_eth_dev *dev; @@ -1341,7 +1341,7 @@ rte_pmd_i40e_set_vf_tc_max_bw(uint8_t port, uint16_t vf_id, } int -rte_pmd_i40e_set_tc_strict_prio(uint8_t port, uint8_t tc_map) +rte_pmd_i40e_set_tc_strict_prio(uint16_t port, uint8_t tc_map) { struct rte_eth_dev *dev; struct i40e_pf *pf; @@ -1513,7 +1513,7 @@ i40e_add_rm_profile_info(struct i40e_hw *hw, uint8_t *profile_info_sec) /* Check if the profile info exists */ static int -i40e_check_profile_info(uint8_t port, uint8_t *profile_info_sec) +i40e_check_profile_info(uint16_t port, uint8_t *profile_info_sec) { struct rte_eth_dev *dev = &rte_eth_devices[port]; struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1557,7 +1557,7 @@ i40e_check_profile_info(uint8_t port, uint8_t *profile_info_sec) } int -rte_pmd_i40e_process_ddp_package(uint8_t port, uint8_t *buff, +rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff, uint32_t size, enum rte_pmd_i40e_package_op op) { @@ -1863,7 +1863,7 @@ int rte_pmd_i40e_get_ddp_info(uint8_t *pkg_buff, uint32_t pkg_size, } int -rte_pmd_i40e_get_ddp_list(uint8_t port, uint8_t *buff, uint32_t size) +rte_pmd_i40e_get_ddp_list(uint16_t port, uint8_t *buff, uint32_t size) { struct rte_eth_dev *dev; struct i40e_hw *hw; @@ -1991,7 +1991,7 @@ static int check_invalid_ptype_mapping( int rte_pmd_i40e_ptype_mapping_update( - uint8_t port, + uint16_t port, struct rte_pmd_i40e_ptype_mapping *mapping_items, uint16_t count, uint8_t exclusive) @@ -2027,7 +2027,7 @@ rte_pmd_i40e_ptype_mapping_update( return 0; } -int rte_pmd_i40e_ptype_mapping_reset(uint8_t port) +int rte_pmd_i40e_ptype_mapping_reset(uint16_t port) { struct rte_eth_dev *dev; @@ -2044,7 +2044,7 @@ int rte_pmd_i40e_ptype_mapping_reset(uint8_t port) } int rte_pmd_i40e_ptype_mapping_get( - uint8_t port, + uint16_t port, struct rte_pmd_i40e_ptype_mapping *mapping_items, uint16_t size, uint16_t *count, @@ -2078,7 +2078,7 @@ int rte_pmd_i40e_ptype_mapping_get( return 0; } -int rte_pmd_i40e_ptype_mapping_replace(uint8_t port, +int rte_pmd_i40e_ptype_mapping_replace(uint16_t port, uint32_t target, uint8_t mask, uint32_t pkt_type) diff --git a/drivers/net/i40e/rte_pmd_i40e.h b/drivers/net/i40e/rte_pmd_i40e.h index 356fa89d7..7f32a59b1 100644 --- a/drivers/net/i40e/rte_pmd_i40e.h +++ 
b/drivers/net/i40e/rte_pmd_i40e.h @@ -157,7 +157,7 @@ struct rte_pmd_i40e_ptype_mapping { * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* invalid. */ -int rte_pmd_i40e_ping_vfs(uint8_t port, uint16_t vf); +int rte_pmd_i40e_ping_vfs(uint16_t port, uint16_t vf); /** * Enable/Disable VF MAC anti spoofing. @@ -174,7 +174,7 @@ int rte_pmd_i40e_ping_vfs(uint8_t port, uint16_t vf); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_vf_mac_anti_spoof(uint8_t port, +int rte_pmd_i40e_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on); @@ -193,7 +193,7 @@ int rte_pmd_i40e_set_vf_mac_anti_spoof(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_vf_vlan_anti_spoof(uint8_t port, +int rte_pmd_i40e_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf_id, uint8_t on); @@ -210,7 +210,7 @@ int rte_pmd_i40e_set_vf_vlan_anti_spoof(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_tx_loopback(uint8_t port, +int rte_pmd_i40e_set_tx_loopback(uint16_t port, uint8_t on); /** @@ -228,7 +228,7 @@ int rte_pmd_i40e_set_tx_loopback(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_vf_unicast_promisc(uint8_t port, +int rte_pmd_i40e_set_vf_unicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on); @@ -247,7 +247,7 @@ int rte_pmd_i40e_set_vf_unicast_promisc(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_vf_multicast_promisc(uint8_t port, +int rte_pmd_i40e_set_vf_multicast_promisc(uint16_t port, uint16_t vf_id, uint8_t on); @@ -271,7 +271,7 @@ int rte_pmd_i40e_set_vf_multicast_promisc(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* or *mac_addr* is invalid. */ -int rte_pmd_i40e_set_vf_mac_addr(uint8_t port, uint16_t vf_id, +int rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id, struct ether_addr *mac_addr); /** @@ -291,7 +291,7 @@ int rte_pmd_i40e_set_vf_mac_addr(uint8_t port, uint16_t vf_id, * - (-EINVAL) if bad parameter. */ int -rte_pmd_i40e_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on); +rte_pmd_i40e_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on); /** * Enable/Disable vf vlan insert @@ -309,7 +309,7 @@ rte_pmd_i40e_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_vf_vlan_insert(uint8_t port, uint16_t vf_id, +int rte_pmd_i40e_set_vf_vlan_insert(uint16_t port, uint16_t vf_id, uint16_t vlan_id); /** @@ -328,7 +328,7 @@ int rte_pmd_i40e_set_vf_vlan_insert(uint8_t port, uint16_t vf_id, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_vf_broadcast(uint8_t port, uint16_t vf_id, +int rte_pmd_i40e_set_vf_broadcast(uint16_t port, uint16_t vf_id, uint8_t on); /** @@ -347,7 +347,7 @@ int rte_pmd_i40e_set_vf_broadcast(uint8_t port, uint16_t vf_id, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_set_vf_vlan_tag(uint8_t port, uint16_t vf_id, uint8_t on); +int rte_pmd_i40e_set_vf_vlan_tag(uint16_t port, uint16_t vf_id, uint8_t on); /** * Enable/Disable VF VLAN filter @@ -368,7 +368,7 @@ int rte_pmd_i40e_set_vf_vlan_tag(uint8_t port, uint16_t vf_id, uint8_t on); * - (-EINVAL) if bad parameter. * - (-ENOTSUP) not supported by firmware. 
*/ -int rte_pmd_i40e_set_vf_vlan_filter(uint8_t port, uint16_t vlan_id, +int rte_pmd_i40e_set_vf_vlan_filter(uint16_t port, uint16_t vlan_id, uint64_t vf_mask, uint8_t on); /** @@ -393,7 +393,7 @@ int rte_pmd_i40e_set_vf_vlan_filter(uint8_t port, uint16_t vlan_id, * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_get_vf_stats(uint8_t port, +int rte_pmd_i40e_get_vf_stats(uint16_t port, uint16_t vf_id, struct rte_eth_stats *stats); @@ -409,7 +409,7 @@ int rte_pmd_i40e_get_vf_stats(uint8_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_reset_vf_stats(uint8_t port, +int rte_pmd_i40e_reset_vf_stats(uint16_t port, uint16_t vf_id); /** @@ -434,7 +434,7 @@ int rte_pmd_i40e_reset_vf_stats(uint8_t port, * - (-EINVAL) if bad parameter. * - (-ENOTSUP) not supported by firmware. */ -int rte_pmd_i40e_set_vf_max_bw(uint8_t port, +int rte_pmd_i40e_set_vf_max_bw(uint16_t port, uint16_t vf_id, uint32_t bw); @@ -459,7 +459,7 @@ int rte_pmd_i40e_set_vf_max_bw(uint8_t port, * - (-EINVAL) if bad parameter. * - (-ENOTSUP) not supported by firmware. */ -int rte_pmd_i40e_set_vf_tc_bw_alloc(uint8_t port, +int rte_pmd_i40e_set_vf_tc_bw_alloc(uint16_t port, uint16_t vf_id, uint8_t tc_num, uint8_t *bw_weight); @@ -484,7 +484,7 @@ int rte_pmd_i40e_set_vf_tc_bw_alloc(uint8_t port, * - (-EINVAL) if bad parameter. * - (-ENOTSUP) not supported by firmware. */ -int rte_pmd_i40e_set_vf_tc_max_bw(uint8_t port, +int rte_pmd_i40e_set_vf_tc_max_bw(uint16_t port, uint16_t vf_id, uint8_t tc_no, uint32_t bw); @@ -502,7 +502,7 @@ int rte_pmd_i40e_set_vf_tc_max_bw(uint8_t port, * - (-EINVAL) if bad parameter. * - (-ENOTSUP) not supported by firmware. */ -int rte_pmd_i40e_set_tc_strict_prio(uint8_t port, uint8_t tc_map); +int rte_pmd_i40e_set_tc_strict_prio(uint16_t port, uint8_t tc_map); /** * Load/Unload a ddp package @@ -523,7 +523,7 @@ int rte_pmd_i40e_set_tc_strict_prio(uint8_t port, uint8_t tc_map); * - (-EACCES) if profile does not exist. * - (-ENOTSUP) if operation not supported. */ -int rte_pmd_i40e_process_ddp_package(uint8_t port, uint8_t *buff, +int rte_pmd_i40e_process_ddp_package(uint16_t port, uint8_t *buff, uint32_t size, enum rte_pmd_i40e_package_op op); @@ -561,7 +561,7 @@ int rte_pmd_i40e_get_ddp_info(uint8_t *pkg, uint32_t pkg_size, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_i40e_get_ddp_list(uint8_t port, uint8_t *buff, uint32_t size); +int rte_pmd_i40e_get_ddp_list(uint16_t port, uint8_t *buff, uint32_t size); /** * Update hardware defined ptype to software defined packet type @@ -581,7 +581,7 @@ int rte_pmd_i40e_get_ddp_list(uint8_t port, uint8_t *buff, uint32_t size); * set other PTYPEs maps to PTYPE_UNKNOWN. */ int rte_pmd_i40e_ptype_mapping_update( - uint8_t port, + uint16_t port, struct rte_pmd_i40e_ptype_mapping *mapping_items, uint16_t count, uint8_t exclusive); @@ -593,7 +593,7 @@ int rte_pmd_i40e_ptype_mapping_update( * @param port * pointer to port identifier of the device */ -int rte_pmd_i40e_ptype_mapping_reset(uint8_t port); +int rte_pmd_i40e_ptype_mapping_reset(uint16_t port); /** * Get hardware defined ptype to software defined ptype @@ -612,7 +612,7 @@ int rte_pmd_i40e_ptype_mapping_reset(uint8_t port); * -(!0) only return mapping items which packet_type != RTE_PTYPE_UNKNOWN. 
*/ int rte_pmd_i40e_ptype_mapping_get( - uint8_t port, + uint16_t port, struct rte_pmd_i40e_ptype_mapping *mapping_items, uint16_t size, uint16_t *count, @@ -632,7 +632,7 @@ int rte_pmd_i40e_ptype_mapping_get( * @param pkt_type * the new packet type to overwrite */ -int rte_pmd_i40e_ptype_mapping_replace(uint8_t port, +int rte_pmd_i40e_ptype_mapping_replace(uint16_t port, uint32_t target, uint8_t mask, uint32_t pkt_type); diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index 22171d866..c5e039886 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -2504,8 +2504,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev) * - fixed speed: TODO implement */ if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) { - PMD_INIT_LOG(ERR, "Invalid link_speeds for port %hhu; fix speed not supported", - dev->data->port_id); + PMD_INIT_LOG(ERR, + "Invalid link_speeds for port %u, fix speed not supported", + dev->data->port_id); return -EINVAL; } diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 85feb0bdc..176c9d4a2 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -148,7 +148,7 @@ struct ixgbe_rx_queue { uint16_t queue_id; /**< RX queue index. */ uint16_t reg_idx; /**< RX queue register index. */ uint16_t pkt_type_mask; /**< Packet type mask for different NICs. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. */ uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */ uint8_t rx_deferred_start; /**< not in global dev start. */ @@ -237,7 +237,7 @@ struct ixgbe_tx_queue { uint16_t tx_next_rs; /**< next desc to set RS bit */ uint16_t queue_id; /**< TX queue index. */ uint16_t reg_idx; /**< TX queue register index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ uint8_t pthresh; /**< Prefetch threshold register. */ uint8_t hthresh; /**< Host threshold register. */ uint8_t wthresh; /**< Write-back threshold reg. 
*/ diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c index 79897ff64..f12737857 100644 --- a/drivers/net/ixgbe/rte_pmd_ixgbe.c +++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c @@ -38,7 +38,7 @@ #include "rte_pmd_ixgbe.h" int -rte_pmd_ixgbe_set_vf_mac_addr(uint8_t port, uint16_t vf, +rte_pmd_ixgbe_set_vf_mac_addr(uint16_t port, uint16_t vf, struct ether_addr *mac_addr) { struct ixgbe_hw *hw; @@ -73,7 +73,7 @@ rte_pmd_ixgbe_set_vf_mac_addr(uint8_t port, uint16_t vf, } int -rte_pmd_ixgbe_ping_vf(uint8_t port, uint16_t vf) +rte_pmd_ixgbe_ping_vf(uint16_t port, uint16_t vf) { struct ixgbe_hw *hw; struct ixgbe_vf_info *vfinfo; @@ -105,7 +105,7 @@ rte_pmd_ixgbe_ping_vf(uint8_t port, uint16_t vf) } int -rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf, uint8_t on) +rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on) { struct ixgbe_hw *hw; struct ixgbe_mac_info *mac; @@ -135,7 +135,7 @@ rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf, uint8_t on) } int -rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on) +rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on) { struct ixgbe_hw *hw; struct ixgbe_mac_info *mac; @@ -164,7 +164,7 @@ rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on) } int -rte_pmd_ixgbe_set_vf_vlan_insert(uint8_t port, uint16_t vf, uint16_t vlan_id) +rte_pmd_ixgbe_set_vf_vlan_insert(uint16_t port, uint16_t vf, uint16_t vlan_id) { struct ixgbe_hw *hw; uint32_t ctrl; @@ -200,7 +200,7 @@ rte_pmd_ixgbe_set_vf_vlan_insert(uint8_t port, uint16_t vf, uint16_t vlan_id) } int -rte_pmd_ixgbe_set_tx_loopback(uint8_t port, uint8_t on) +rte_pmd_ixgbe_set_tx_loopback(uint16_t port, uint8_t on) { struct ixgbe_hw *hw; uint32_t ctrl; @@ -230,7 +230,7 @@ rte_pmd_ixgbe_set_tx_loopback(uint8_t port, uint8_t on) } int -rte_pmd_ixgbe_set_all_queues_drop_en(uint8_t port, uint8_t on) +rte_pmd_ixgbe_set_all_queues_drop_en(uint16_t port, uint8_t on) { struct ixgbe_hw *hw; uint32_t reg_value; @@ -260,7 +260,7 @@ rte_pmd_ixgbe_set_all_queues_drop_en(uint8_t port, uint8_t on) } int -rte_pmd_ixgbe_set_vf_split_drop_en(uint8_t port, uint16_t vf, uint8_t on) +rte_pmd_ixgbe_set_vf_split_drop_en(uint16_t port, uint16_t vf, uint8_t on) { struct ixgbe_hw *hw; uint32_t reg_value; @@ -295,7 +295,7 @@ rte_pmd_ixgbe_set_vf_split_drop_en(uint8_t port, uint16_t vf, uint8_t on) } int -rte_pmd_ixgbe_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on) +rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on) { struct rte_eth_dev *dev; struct rte_pci_device *pci_dev; @@ -342,7 +342,7 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on) } int -rte_pmd_ixgbe_set_vf_rxmode(uint8_t port, uint16_t vf, +rte_pmd_ixgbe_set_vf_rxmode(uint16_t port, uint16_t vf, uint16_t rx_mask, uint8_t on) { int val = 0; @@ -389,7 +389,7 @@ rte_pmd_ixgbe_set_vf_rxmode(uint8_t port, uint16_t vf, } int -rte_pmd_ixgbe_set_vf_rx(uint8_t port, uint16_t vf, uint8_t on) +rte_pmd_ixgbe_set_vf_rx(uint16_t port, uint16_t vf, uint8_t on) { struct rte_eth_dev *dev; struct rte_pci_device *pci_dev; @@ -439,7 +439,7 @@ rte_pmd_ixgbe_set_vf_rx(uint8_t port, uint16_t vf, uint8_t on) } int -rte_pmd_ixgbe_set_vf_tx(uint8_t port, uint16_t vf, uint8_t on) +rte_pmd_ixgbe_set_vf_tx(uint16_t port, uint16_t vf, uint8_t on) { struct rte_eth_dev *dev; struct rte_pci_device *pci_dev; @@ -489,7 +489,7 @@ rte_pmd_ixgbe_set_vf_tx(uint8_t port, uint16_t vf, uint8_t on) } int 
-rte_pmd_ixgbe_set_vf_vlan_filter(uint8_t port, uint16_t vlan, +rte_pmd_ixgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan, uint64_t vf_mask, uint8_t vlan_on) { struct rte_eth_dev *dev; @@ -524,7 +524,7 @@ rte_pmd_ixgbe_set_vf_vlan_filter(uint8_t port, uint16_t vlan, } int -rte_pmd_ixgbe_set_vf_rate_limit(uint8_t port, uint16_t vf, +rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf, uint16_t tx_rate, uint64_t q_msk) { struct rte_eth_dev *dev; @@ -540,7 +540,7 @@ rte_pmd_ixgbe_set_vf_rate_limit(uint8_t port, uint16_t vf, } int -rte_pmd_ixgbe_macsec_enable(uint8_t port, uint8_t en, uint8_t rp) +rte_pmd_ixgbe_macsec_enable(uint16_t port, uint8_t en, uint8_t rp) { struct ixgbe_hw *hw; struct rte_eth_dev *dev; @@ -623,7 +623,7 @@ rte_pmd_ixgbe_macsec_enable(uint8_t port, uint8_t en, uint8_t rp) } int -rte_pmd_ixgbe_macsec_disable(uint8_t port) +rte_pmd_ixgbe_macsec_disable(uint16_t port) { struct ixgbe_hw *hw; struct rte_eth_dev *dev; @@ -687,7 +687,7 @@ rte_pmd_ixgbe_macsec_disable(uint8_t port) } int -rte_pmd_ixgbe_macsec_config_txsc(uint8_t port, uint8_t *mac) +rte_pmd_ixgbe_macsec_config_txsc(uint16_t port, uint8_t *mac) { struct ixgbe_hw *hw; struct rte_eth_dev *dev; @@ -712,7 +712,7 @@ rte_pmd_ixgbe_macsec_config_txsc(uint8_t port, uint8_t *mac) } int -rte_pmd_ixgbe_macsec_config_rxsc(uint8_t port, uint8_t *mac, uint16_t pi) +rte_pmd_ixgbe_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi) { struct ixgbe_hw *hw; struct rte_eth_dev *dev; @@ -738,7 +738,7 @@ rte_pmd_ixgbe_macsec_config_rxsc(uint8_t port, uint8_t *mac, uint16_t pi) } int -rte_pmd_ixgbe_macsec_select_txsa(uint8_t port, uint8_t idx, uint8_t an, +rte_pmd_ixgbe_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an, uint32_t pn, uint8_t *key) { struct ixgbe_hw *hw; @@ -794,7 +794,7 @@ rte_pmd_ixgbe_macsec_select_txsa(uint8_t port, uint8_t idx, uint8_t an, } int -rte_pmd_ixgbe_macsec_select_rxsa(uint8_t port, uint8_t idx, uint8_t an, +rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an, uint32_t pn, uint8_t *key) { struct ixgbe_hw *hw; @@ -837,7 +837,7 @@ rte_pmd_ixgbe_macsec_select_rxsa(uint8_t port, uint8_t idx, uint8_t an, } int -rte_pmd_ixgbe_set_tc_bw_alloc(uint8_t port, +rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port, uint8_t tc_num, uint8_t *bw_weight) { @@ -911,7 +911,7 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint8_t port, #ifdef RTE_LIBRTE_IXGBE_BYPASS int -rte_pmd_ixgbe_bypass_init(uint8_t port_id) +rte_pmd_ixgbe_bypass_init(uint16_t port_id) { struct rte_eth_dev *dev; @@ -926,7 +926,7 @@ rte_pmd_ixgbe_bypass_init(uint8_t port_id) } int -rte_pmd_ixgbe_bypass_state_show(uint8_t port_id, uint32_t *state) +rte_pmd_ixgbe_bypass_state_show(uint16_t port_id, uint32_t *state) { struct rte_eth_dev *dev; @@ -940,7 +940,7 @@ rte_pmd_ixgbe_bypass_state_show(uint8_t port_id, uint32_t *state) } int -rte_pmd_ixgbe_bypass_state_set(uint8_t port_id, uint32_t *new_state) +rte_pmd_ixgbe_bypass_state_set(uint16_t port_id, uint32_t *new_state) { struct rte_eth_dev *dev; @@ -954,7 +954,7 @@ rte_pmd_ixgbe_bypass_state_set(uint8_t port_id, uint32_t *new_state) } int -rte_pmd_ixgbe_bypass_event_show(uint8_t port_id, +rte_pmd_ixgbe_bypass_event_show(uint16_t port_id, uint32_t event, uint32_t *state) { @@ -970,7 +970,7 @@ rte_pmd_ixgbe_bypass_event_show(uint8_t port_id, } int -rte_pmd_ixgbe_bypass_event_store(uint8_t port_id, +rte_pmd_ixgbe_bypass_event_store(uint16_t port_id, uint32_t event, uint32_t state) { @@ -986,7 +986,7 @@ rte_pmd_ixgbe_bypass_event_store(uint8_t port_id, } int 
-rte_pmd_ixgbe_bypass_wd_timeout_store(uint8_t port_id, uint32_t timeout) +rte_pmd_ixgbe_bypass_wd_timeout_store(uint16_t port_id, uint32_t timeout) { struct rte_eth_dev *dev; @@ -1000,7 +1000,7 @@ rte_pmd_ixgbe_bypass_wd_timeout_store(uint8_t port_id, uint32_t timeout) } int -rte_pmd_ixgbe_bypass_ver_show(uint8_t port_id, uint32_t *ver) +rte_pmd_ixgbe_bypass_ver_show(uint16_t port_id, uint32_t *ver) { struct rte_eth_dev *dev; @@ -1014,7 +1014,7 @@ rte_pmd_ixgbe_bypass_ver_show(uint8_t port_id, uint32_t *ver) } int -rte_pmd_ixgbe_bypass_wd_timeout_show(uint8_t port_id, uint32_t *wd_timeout) +rte_pmd_ixgbe_bypass_wd_timeout_show(uint16_t port_id, uint32_t *wd_timeout) { struct rte_eth_dev *dev; @@ -1028,7 +1028,7 @@ rte_pmd_ixgbe_bypass_wd_timeout_show(uint8_t port_id, uint32_t *wd_timeout) } int -rte_pmd_ixgbe_bypass_wd_reset(uint8_t port_id) +rte_pmd_ixgbe_bypass_wd_reset(uint16_t port_id) { struct rte_eth_dev *dev; diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h index d33c285db..3d55ab20e 100644 --- a/drivers/net/ixgbe/rte_pmd_ixgbe.h +++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h @@ -53,7 +53,7 @@ * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* invalid. */ -int rte_pmd_ixgbe_ping_vf(uint8_t port, uint16_t vf); +int rte_pmd_ixgbe_ping_vf(uint16_t port, uint16_t vf); /** * Set the VF MAC address. @@ -69,8 +69,8 @@ int rte_pmd_ixgbe_ping_vf(uint8_t port, uint16_t vf); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* or *mac_addr* is invalid. */ -int rte_pmd_ixgbe_set_vf_mac_addr(uint8_t port, uint16_t vf, - struct ether_addr *mac_addr); +int rte_pmd_ixgbe_set_vf_mac_addr(uint16_t port, uint16_t vf, + struct ether_addr *mac_addr); /** * Enable/Disable VF VLAN anti spoofing. @@ -87,7 +87,8 @@ int rte_pmd_ixgbe_set_vf_mac_addr(uint8_t port, uint16_t vf, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); +int rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, + uint8_t on); /** * Enable/Disable VF MAC anti spoofing. @@ -104,7 +105,7 @@ int rte_pmd_ixgbe_set_vf_vlan_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); +int rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on); /** * Enable/Disable vf vlan insert @@ -122,7 +123,7 @@ int rte_pmd_ixgbe_set_vf_mac_anti_spoof(uint8_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_set_vf_vlan_insert(uint8_t port, uint16_t vf, +int rte_pmd_ixgbe_set_vf_vlan_insert(uint16_t port, uint16_t vf, uint16_t vlan_id); /** @@ -139,7 +140,7 @@ int rte_pmd_ixgbe_set_vf_vlan_insert(uint8_t port, uint16_t vf, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_set_tx_loopback(uint8_t port, uint8_t on); +int rte_pmd_ixgbe_set_tx_loopback(uint16_t port, uint8_t on); /** * set all queues drop enable bit @@ -155,7 +156,7 @@ int rte_pmd_ixgbe_set_tx_loopback(uint8_t port, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. 
*/ -int rte_pmd_ixgbe_set_all_queues_drop_en(uint8_t port, uint8_t on); +int rte_pmd_ixgbe_set_all_queues_drop_en(uint16_t port, uint8_t on); /** * set drop enable bit in the VF split rx control register @@ -174,7 +175,7 @@ int rte_pmd_ixgbe_set_all_queues_drop_en(uint8_t port, uint8_t on); * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_set_vf_split_drop_en(uint8_t port, uint16_t vf, uint8_t on); +int rte_pmd_ixgbe_set_vf_split_drop_en(uint16_t port, uint16_t vf, uint8_t on); /** * Enable/Disable vf vlan strip for all queues in a pool @@ -194,7 +195,7 @@ int rte_pmd_ixgbe_set_vf_split_drop_en(uint8_t port, uint16_t vf, uint8_t on); * - (-EINVAL) if bad parameter. */ int -rte_pmd_ixgbe_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on); +rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on); /** * Enable MACsec offload. @@ -212,7 +213,7 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint8_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-ENOTSUP) if hardware doesn't support this feature. */ -int rte_pmd_ixgbe_macsec_enable(uint8_t port, uint8_t en, uint8_t rp); +int rte_pmd_ixgbe_macsec_enable(uint16_t port, uint8_t en, uint8_t rp); /** * Disable MACsec offload. @@ -224,7 +225,7 @@ int rte_pmd_ixgbe_macsec_enable(uint8_t port, uint8_t en, uint8_t rp); * - (-ENODEV) if *port* invalid. * - (-ENOTSUP) if hardware doesn't support this feature. */ -int rte_pmd_ixgbe_macsec_disable(uint8_t port); +int rte_pmd_ixgbe_macsec_disable(uint16_t port); /** * Configure Tx SC (Secure Connection). @@ -238,7 +239,7 @@ int rte_pmd_ixgbe_macsec_disable(uint8_t port); * - (-ENODEV) if *port* invalid. * - (-ENOTSUP) if hardware doesn't support this feature. */ -int rte_pmd_ixgbe_macsec_config_txsc(uint8_t port, uint8_t *mac); +int rte_pmd_ixgbe_macsec_config_txsc(uint16_t port, uint8_t *mac); /** * Configure Rx SC (Secure Connection). @@ -254,7 +255,7 @@ int rte_pmd_ixgbe_macsec_config_txsc(uint8_t port, uint8_t *mac); * - (-ENODEV) if *port* invalid. * - (-ENOTSUP) if hardware doesn't support this feature. */ -int rte_pmd_ixgbe_macsec_config_rxsc(uint8_t port, uint8_t *mac, uint16_t pi); +int rte_pmd_ixgbe_macsec_config_rxsc(uint16_t port, uint8_t *mac, uint16_t pi); /** * Enable Tx SA (Secure Association). @@ -275,8 +276,8 @@ int rte_pmd_ixgbe_macsec_config_rxsc(uint8_t port, uint8_t *mac, uint16_t pi); * - (-ENOTSUP) if hardware doesn't support this feature. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_macsec_select_txsa(uint8_t port, uint8_t idx, uint8_t an, - uint32_t pn, uint8_t *key); +int rte_pmd_ixgbe_macsec_select_txsa(uint16_t port, uint8_t idx, uint8_t an, + uint32_t pn, uint8_t *key); /** * Enable Rx SA (Secure Association). @@ -297,8 +298,8 @@ int rte_pmd_ixgbe_macsec_select_txsa(uint8_t port, uint8_t idx, uint8_t an, * - (-ENOTSUP) if hardware doesn't support this feature. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_macsec_select_rxsa(uint8_t port, uint8_t idx, uint8_t an, - uint32_t pn, uint8_t *key); +int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an, + uint32_t pn, uint8_t *key); /** * Set RX L2 Filtering mode of a VF of an Ethernet device. @@ -323,7 +324,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint8_t port, uint8_t idx, uint8_t an, * - (-EINVAL) if bad parameter. 
*/ int -rte_pmd_ixgbe_set_vf_rxmode(uint8_t port, uint16_t vf, uint16_t rx_mask, uint8_t on); +rte_pmd_ixgbe_set_vf_rxmode(uint16_t port, uint16_t vf, uint16_t rx_mask, + uint8_t on); /** * Enable or disable a VF traffic receive of an Ethernet device. @@ -342,7 +344,7 @@ rte_pmd_ixgbe_set_vf_rxmode(uint8_t port, uint16_t vf, uint16_t rx_mask, uint8_t * - (-EINVAL) if bad parameter. */ int -rte_pmd_ixgbe_set_vf_rx(uint8_t port, uint16_t vf, uint8_t on); +rte_pmd_ixgbe_set_vf_rx(uint16_t port, uint16_t vf, uint8_t on); /** * Enable or disable a VF traffic transmit of the Ethernet device. @@ -361,7 +363,7 @@ rte_pmd_ixgbe_set_vf_rx(uint8_t port, uint16_t vf, uint8_t on); * - (-EINVAL) if bad parameter. */ int -rte_pmd_ixgbe_set_vf_tx(uint8_t port, uint16_t vf, uint8_t on); +rte_pmd_ixgbe_set_vf_tx(uint16_t port, uint16_t vf, uint8_t on); /** * Enable/Disable hardware VF VLAN filtering by an Ethernet device of @@ -383,7 +385,8 @@ rte_pmd_ixgbe_set_vf_tx(uint8_t port, uint16_t vf, uint8_t on); * - (-EINVAL) if bad parameter. */ int -rte_pmd_ixgbe_set_vf_vlan_filter(uint8_t port, uint16_t vlan, uint64_t vf_mask, uint8_t vlan_on); +rte_pmd_ixgbe_set_vf_vlan_filter(uint16_t port, uint16_t vlan, + uint64_t vf_mask, uint8_t vlan_on); /** * Set the rate limitation for a vf on an Ethernet device. @@ -402,7 +405,8 @@ rte_pmd_ixgbe_set_vf_vlan_filter(uint8_t port, uint16_t vlan, uint64_t vf_mask, * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_set_vf_rate_limit(uint8_t port, uint16_t vf, uint16_t tx_rate, uint64_t q_msk); +int rte_pmd_ixgbe_set_vf_rate_limit(uint16_t port, uint16_t vf, + uint16_t tx_rate, uint64_t q_msk); /** * Set all the TCs' bandwidth weight. @@ -423,7 +427,7 @@ int rte_pmd_ixgbe_set_vf_rate_limit(uint8_t port, uint16_t vf, uint16_t tx_rate, * - (-EINVAL) if bad parameter. * - (-ENOTSUP) not supported by firmware. */ -int rte_pmd_ixgbe_set_tc_bw_alloc(uint8_t port, +int rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port, uint8_t tc_num, uint8_t *bw_weight); @@ -439,7 +443,7 @@ int rte_pmd_ixgbe_set_tc_bw_alloc(uint8_t port, * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_init(uint8_t port); +int rte_pmd_ixgbe_bypass_init(uint16_t port); /** * Return bypass state. @@ -456,7 +460,7 @@ int rte_pmd_ixgbe_bypass_init(uint8_t port); * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_state_show(uint8_t port, uint32_t *state); +int rte_pmd_ixgbe_bypass_state_show(uint16_t port, uint32_t *state); /** * Set bypass state @@ -473,7 +477,7 @@ int rte_pmd_ixgbe_bypass_state_show(uint8_t port, uint32_t *state); * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_state_set(uint8_t port, uint32_t *new_state); +int rte_pmd_ixgbe_bypass_state_set(uint16_t port, uint32_t *new_state); /** * Return bypass state when given event occurs. @@ -497,7 +501,7 @@ int rte_pmd_ixgbe_bypass_state_set(uint8_t port, uint32_t *new_state); * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_event_show(uint8_t port, +int rte_pmd_ixgbe_bypass_event_show(uint16_t port, uint32_t event, uint32_t *state); @@ -523,7 +527,7 @@ int rte_pmd_ixgbe_bypass_event_show(uint8_t port, * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. 
*/ -int rte_pmd_ixgbe_bypass_event_store(uint8_t port, +int rte_pmd_ixgbe_bypass_event_store(uint16_t port, uint32_t event, uint32_t state); @@ -547,7 +551,7 @@ int rte_pmd_ixgbe_bypass_event_store(uint8_t port, * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_wd_timeout_store(uint8_t port, uint32_t timeout); +int rte_pmd_ixgbe_bypass_wd_timeout_store(uint16_t port, uint32_t timeout); /** * Get bypass firmware version. @@ -561,7 +565,7 @@ int rte_pmd_ixgbe_bypass_wd_timeout_store(uint8_t port, uint32_t timeout); * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_ver_show(uint8_t port, uint32_t *ver); +int rte_pmd_ixgbe_bypass_ver_show(uint16_t port, uint32_t *ver); /** * Return bypass watchdog timeout in seconds @@ -583,7 +587,7 @@ int rte_pmd_ixgbe_bypass_ver_show(uint8_t port, uint32_t *ver); * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_wd_timeout_show(uint8_t port, uint32_t *wd_timeout); +int rte_pmd_ixgbe_bypass_wd_timeout_show(uint16_t port, uint32_t *wd_timeout); /** * Reset bypass watchdog timer @@ -595,7 +599,7 @@ int rte_pmd_ixgbe_bypass_wd_timeout_show(uint8_t port, uint32_t *wd_timeout); * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_pmd_ixgbe_bypass_wd_reset(uint8_t port); +int rte_pmd_ixgbe_bypass_wd_reset(uint16_t port); /** diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h index c0ade4f1a..fe911c3a7 100644 --- a/drivers/net/mlx4/mlx4.h +++ b/drivers/net/mlx4/mlx4.h @@ -334,7 +334,7 @@ struct priv { } vlan_filter[MLX4_MAX_VLAN_IDS]; /* VLAN filters table. */ /* Device properties. */ uint16_t mtu; /* Configured MTU. */ - uint8_t port; /* Physical port number. */ + uint16_t port; /* Physical port number. */ unsigned int started:1; /* Device started, flows enabled. */ unsigned int promisc:1; /* Device in promiscuous mode. */ unsigned int allmulti:1; /* Device receives all multicast packets. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 43c538419..54a2e8a54 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -114,7 +114,7 @@ struct priv { unsigned int vlan_filter_n; /* Number of configured VLAN filters. */ /* Device properties. */ uint16_t mtu; /* Configured MTU. */ - uint8_t port; /* Physical port number. */ + uint16_t port; /* Physical port number. */ unsigned int started:1; /* Device started, flows enabled. */ unsigned int promisc_req:1; /* Promiscuous mode requested. */ unsigned int allmulti_req:1; /* All multicast mode requested. */ diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index b0eb3cdfc..58e7cf571 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -75,7 +75,7 @@ struct ethtool_link_settings { uint32_t cmd; uint32_t speed; uint8_t duplex; - uint8_t port; + uint16_t port; uint8_t phy_address; uint8_t autoneg; uint8_t mdio_support; diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h index 7de1d1086..c47e54fe0 100644 --- a/drivers/net/mlx5/mlx5_rxtx.h +++ b/drivers/net/mlx5/mlx5_rxtx.h @@ -112,7 +112,6 @@ struct rxq { unsigned int sges_n:2; /* Log 2 of SGEs (max buffers per packet). */ unsigned int cqe_n:4; /* Log 2 of CQ elements. */ unsigned int elts_n:4; /* Log 2 of Mbufs. */ - unsigned int port_id:8; unsigned int rss_hash:1; /* RSS hash result is enabled. */ unsigned int mark:1; /* Marked flow available on the queue. 
*/ unsigned int pending_err:1; /* CQE error needs to be handled. */ @@ -120,6 +119,7 @@ struct rxq { unsigned int :6; /* Remaining bits. */ volatile uint32_t *rq_db; volatile uint32_t *cq_db; + uint16_t port_id; uint16_t rq_ci; uint16_t rq_pi; uint16_t cq_ci; diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index 92b03c4cb..2d794f841 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -1239,13 +1239,13 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev) memset(&link, 0, sizeof(link)); nfp_net_dev_atomic_read_link_status(dev, &link); if (link.link_status) - RTE_LOG(INFO, PMD, "Port %d: Link Up - speed %u Mbps - %s\n", - (int)(dev->data->port_id), (unsigned)link.link_speed, + RTE_LOG(INFO, PMD, "Port %u: Link Up - speed %u Mbps - %s\n", + dev->data->port_id, link.link_speed, link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full-duplex" : "half-duplex"); else - RTE_LOG(INFO, PMD, " Port %d: Link Down\n", - (int)(dev->data->port_id)); + RTE_LOG(INFO, PMD, " Port %u: Link Down\n", + dev->data->port_id); RTE_LOG(INFO, PMD, "PCI Address: %04d:%02d:%02d:%d\n", pci_dev->addr.domain, pci_dev->addr.bus, @@ -1547,9 +1547,9 @@ nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, if (tx_free_thresh > (nb_desc)) { RTE_LOG(ERR, PMD, "tx_free_thresh must be less than the number of TX " - "descriptors. (tx_free_thresh=%u port=%d " + "descriptors. (tx_free_thresh=%u port=%u " "queue=%d)\n", (unsigned int)tx_free_thresh, - (int)dev->data->port_id, (int)queue_idx); + dev->data->port_id, (int)queue_idx); return -(EINVAL); } @@ -1847,9 +1847,9 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) */ new_mb = rte_pktmbuf_alloc(rxq->mem_pool); if (unlikely(new_mb == NULL)) { - RTE_LOG_DP(DEBUG, PMD, "RX mbuf alloc failed port_id=%u " - "queue_id=%u\n", (unsigned)rxq->port_id, - (unsigned)rxq->qidx); + RTE_LOG_DP(DEBUG, PMD, + "RX mbuf alloc failed port_id=%u queue_id=%u\n", + rxq->port_id, rxq->qidx); nfp_net_mbuf_alloc_failed(rxq); break; } @@ -1932,8 +1932,8 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (nb_hold == 0) return nb_hold; - PMD_RX_LOG(DEBUG, "RX port_id=%u queue_id=%u, %d packets received\n", - (unsigned)rxq->port_id, (unsigned)rxq->qidx, nb_hold); + PMD_RX_LOG(DEBUG, "RX port_id=%u queue_id=%u, %d packets received\n", + rxq->port_id, (unsigned int)rxq->qidx, nb_hold); nb_hold += rxq->nb_rx_hold; @@ -1944,7 +1944,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rte_wmb(); if (nb_hold > rxq->rx_free_thresh) { PMD_RX_LOG(DEBUG, "port=%u queue=%u nb_hold=%u avail=%u\n", - (unsigned)rxq->port_id, (unsigned)rxq->qidx, + rxq->port_id, (unsigned int)rxq->qidx, (unsigned)nb_hold, (unsigned)avail); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold); nb_hold = 0; @@ -2547,7 +2547,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) ether_addr_copy((struct ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); - PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " + PMD_INIT_LOG(INFO, "port %u VendorID=0x%x DeviceID=0x%x " "mac=%02x:%02x:%02x:%02x:%02x:%02x", eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id, diff --git a/drivers/net/nfp/nfp_net_pmd.h b/drivers/net/nfp/nfp_net_pmd.h index eec56bc1c..7fc76aa76 100644 --- a/drivers/net/nfp/nfp_net_pmd.h +++ b/drivers/net/nfp/nfp_net_pmd.h @@ -250,7 +250,7 @@ struct nfp_net_txq { uint32_t tx_hthresh; /* not used by now. Future? */ uint32_t tx_wthresh; /* not used by now. Future? 
*/ uint32_t txq_flags; /* not used by now. Future? */ - uint8_t port_id; + uint16_t port_id; int qidx; int tx_qcidx; __le64 dma; diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c index 5aef0591e..fa9313dec 100644 --- a/drivers/net/null/rte_eth_null.c +++ b/drivers/net/null/rte_eth_null.c @@ -68,7 +68,7 @@ struct null_queue { struct pmd_internals { unsigned packet_size; unsigned packet_copy; - uint8_t port_id; + uint16_t port_id; struct null_queue rx_null_queues[RTE_MAX_QUEUES_PER_PORT]; struct null_queue tx_null_queues[RTE_MAX_QUEUES_PER_PORT]; diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c index defb3b419..b51f16cbd 100644 --- a/drivers/net/pcap/rte_eth_pcap.c +++ b/drivers/net/pcap/rte_eth_pcap.c @@ -75,7 +75,7 @@ struct queue_stat { struct pcap_rx_queue { pcap_t *pcap; - uint8_t in_port; + uint16_t in_port; struct rte_mempool *mb_pool; struct queue_stat rx_stat; char name[PATH_MAX]; diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h index 9864bb448..96f0d351d 100644 --- a/drivers/net/qede/qede_if.h +++ b/drivers/net/qede/qede_if.h @@ -97,7 +97,7 @@ struct qed_link_output { uint32_t speed; /* In Mb/s */ uint32_t adv_speed; /* Speed mask */ uint8_t duplex; /* In DUPLEX defs */ - uint8_t port; /* In PORT defs */ + uint16_t port; /* In PORT defs */ bool autoneg; uint32_t pause_config; }; diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c index 464d3d384..e3fa7b0e2 100644 --- a/drivers/net/ring/rte_eth_ring.c +++ b/drivers/net/ring/rte_eth_ring.c @@ -394,7 +394,7 @@ rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[], }; char args_str[32] = { 0 }; char ring_name[32] = { 0 }; - uint8_t port_id = RTE_MAX_ETHPORTS; + uint16_t port_id = RTE_MAX_ETHPORTS; int ret; /* do some parameter checking */ diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c index 9c0d57cc1..d141acf0e 100644 --- a/drivers/net/szedata2/rte_eth_szedata2.c +++ b/drivers/net/szedata2/rte_eth_szedata2.c @@ -71,7 +71,7 @@ struct szedata2_rx_queue { struct szedata *sze; uint8_t rx_channel; - uint8_t in_port; + uint16_t in_port; struct rte_mempool *mb_pool; volatile uint64_t rx_pkts; volatile uint64_t rx_bytes; diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h index 4ee6c3bb0..e54a96f8e 100644 --- a/drivers/net/thunderx/nicvf_struct.h +++ b/drivers/net/thunderx/nicvf_struct.h @@ -100,7 +100,7 @@ struct nicvf_rxq { uint16_t queue_id; uint16_t precharge_cnt; uint8_t rx_drop_en; - uint8_t port_id; + uint16_t port_id; uint8_t rbptr_offset; } __rte_cache_aligned; diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c index 0dac5e60e..93310bdfb 100644 --- a/drivers/net/vhost/rte_eth_vhost.c +++ b/drivers/net/vhost/rte_eth_vhost.c @@ -105,7 +105,7 @@ struct vhost_queue { rte_atomic32_t while_queuing; struct pmd_internal *internal; struct rte_mempool *mb_pool; - uint8_t port; + uint16_t port; uint16_t virtqueue_id; struct vhost_stats stats; }; @@ -705,8 +705,8 @@ static struct vhost_device_ops vhost_ops = { }; int -rte_eth_vhost_get_queue_event(uint8_t port_id, - struct rte_eth_vhost_queue_event *event) +rte_eth_vhost_get_queue_event(uint16_t port_id, + struct rte_eth_vhost_queue_event *event) { struct rte_vhost_vring_state *state; unsigned int i; @@ -742,7 +742,7 @@ rte_eth_vhost_get_queue_event(uint8_t port_id, } int -rte_eth_vhost_get_vid_from_port_id(uint8_t port_id) 
+rte_eth_vhost_get_vid_from_port_id(uint16_t port_id) { struct internal_list *list; struct rte_eth_dev *eth_dev; diff --git a/drivers/net/vhost/rte_eth_vhost.h b/drivers/net/vhost/rte_eth_vhost.h index 39ca77197..0528c6aee 100644 --- a/drivers/net/vhost/rte_eth_vhost.h +++ b/drivers/net/vhost/rte_eth_vhost.h @@ -69,8 +69,8 @@ struct rte_eth_vhost_queue_event { * - On success, zero. * - On failure, a negative value. */ -int rte_eth_vhost_get_queue_event(uint8_t port_id, - struct rte_eth_vhost_queue_event *event); +int rte_eth_vhost_get_queue_event(uint16_t port_id, + struct rte_eth_vhost_queue_event *event); /** * Get the 'vid' value associated with the specified port. @@ -79,7 +79,7 @@ int rte_eth_vhost_get_queue_event(uint8_t port_id, * - On success, the 'vid' associated with 'port_id'. * - On failure, a negative value. */ -int rte_eth_vhost_get_vid_from_port_id(uint8_t port_id); +int rte_eth_vhost_get_vid_from_port_id(uint16_t port_id); #ifdef __cplusplus } diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h index 18caebdd7..330ee94be 100644 --- a/drivers/net/virtio/virtio_pci.h +++ b/drivers/net/virtio/virtio_pci.h @@ -260,7 +260,7 @@ struct virtio_hw { uint8_t use_msix; uint8_t modern; uint8_t use_simple_rxtx; - uint8_t port_id; + uint16_t port_id; uint8_t mac_addr[ETHER_ADDR_LEN]; uint32_t notify_off_multiplier; uint8_t *isr; diff --git a/drivers/net/virtio/virtio_rxtx.h b/drivers/net/virtio/virtio_rxtx.h index 28f82d6a8..198b2d8fb 100644 --- a/drivers/net/virtio/virtio_rxtx.h +++ b/drivers/net/virtio/virtio_rxtx.h @@ -54,7 +54,7 @@ struct virtnet_rx { struct rte_mempool *mpool; /**< mempool for mbuf allocation */ uint16_t queue_id; /**< DPDK queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ /* Statistics */ struct virtnet_stats stats; @@ -69,7 +69,7 @@ struct virtnet_tx { phys_addr_t virtio_net_hdr_mem; /**< hdr for each xmit packet */ uint16_t queue_id; /**< DPDK queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ /* Statistics */ struct virtnet_stats stats; @@ -82,7 +82,7 @@ struct virtnet_ctl { /**< memzone to populate hdr. */ const struct rte_memzone *virtio_net_hdr_mz; phys_addr_t virtio_net_hdr_mem; /**< hdr for each xmit packet */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ const struct rte_memzone *mz; /**< mem zone to populate RX ring. */ }; diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h index 8361b6bdd..de6302cbf 100644 --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h @@ -60,7 +60,7 @@ struct virtio_user_dev { */ uint64_t device_features; /* supported features by device */ uint8_t status; - uint8_t port_id; + uint16_t port_id; uint8_t mac_addr[ETHER_ADDR_LEN]; char path[PATH_MAX]; struct vring vrings[VIRTIO_MAX_VIRTQUEUES]; diff --git a/drivers/net/vmxnet3/vmxnet3_ring.h b/drivers/net/vmxnet3/vmxnet3_ring.h index d2e8323ba..2f2ff3976 100644 --- a/drivers/net/vmxnet3/vmxnet3_ring.h +++ b/drivers/net/vmxnet3/vmxnet3_ring.h @@ -143,8 +143,8 @@ typedef struct vmxnet3_tx_queue { struct vmxnet3_txq_stats stats; const struct rte_memzone *mz; bool stopped; - uint16_t queue_id; /**< Device TX queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t queue_id; /**< Device TX queue index. 
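A small sketch, assuming the usual rte_eth_vhost.h event semantics, of how an application drains queue-state events once the port argument is 16 bits; drain_vhost_events is a hypothetical helper name:

#include <stdio.h>
#include <stdbool.h>
#include <rte_eth_vhost.h>

/* Illustrative only: pull pending queue events from a vhost port and
 * print the backing vid; both calls now take a uint16_t port id. */
static void drain_vhost_events(uint16_t port_id)
{
        struct rte_eth_vhost_queue_event ev;

        while (rte_eth_vhost_get_queue_event(port_id, &ev) == 0)
                printf("vid %d: %s queue %u %s\n",
                       rte_eth_vhost_get_vid_from_port_id(port_id),
                       ev.rx ? "rx" : "tx", ev.queue_id,
                       ev.enable ? "enabled" : "disabled");
}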
*/ + uint16_t port_id; /**< Device port identifier. */ uint16_t txdata_desc_size; } vmxnet3_tx_queue_t; @@ -178,8 +178,8 @@ typedef struct vmxnet3_rx_queue { struct vmxnet3_rxq_stats stats; const struct rte_memzone *mz; bool stopped; - uint16_t queue_id; /**< Device RX queue index. */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t queue_id; /**< Device RX queue index. */ + uint16_t port_id; /**< Device port identifier. */ } vmxnet3_rx_queue_t; #endif /* _VMXNET3_RING_H_ */ diff --git a/drivers/net/xenvirt/virtqueue.h b/drivers/net/xenvirt/virtqueue.h index 1bb6877cd..1374d9193 100644 --- a/drivers/net/xenvirt/virtqueue.h +++ b/drivers/net/xenvirt/virtqueue.h @@ -74,7 +74,7 @@ struct virtqueue { struct rte_mempool *mpool; /**< mempool for mbuf allocation */ uint16_t queue_id; /**< DPDK queue index. */ uint16_t vq_queue_index; /**< PCI queue index */ - uint8_t port_id; /**< Device port identifier. */ + uint16_t port_id; /**< Device port identifier. */ void *vq_ring_virt_mem; /**< virtual address of vring*/ int vq_alignment; diff --git a/lib/librte_bitratestats/rte_bitrate.c b/lib/librte_bitratestats/rte_bitrate.c index 3ceb35166..f373697a7 100644 --- a/lib/librte_bitratestats/rte_bitrate.c +++ b/lib/librte_bitratestats/rte_bitrate.c @@ -84,7 +84,7 @@ rte_stats_bitrate_reg(struct rte_stats_bitrates *bitrate_data) int rte_stats_bitrate_calc(struct rte_stats_bitrates *bitrate_data, - uint8_t port_id) + uint16_t port_id) { struct rte_stats_bitrate *port_data; struct rte_eth_stats eth_stats; diff --git a/lib/librte_bitratestats/rte_bitrate.h b/lib/librte_bitratestats/rte_bitrate.h index 15fc270a3..2b40cda03 100644 --- a/lib/librte_bitratestats/rte_bitrate.h +++ b/lib/librte_bitratestats/rte_bitrate.h @@ -85,7 +85,7 @@ int rte_stats_bitrate_reg(struct rte_stats_bitrates *bitrate_data); * - Negative value on error */ int rte_stats_bitrate_calc(struct rte_stats_bitrates *bitrate_data, - uint8_t port_id); + uint16_t port_id); #ifdef __cplusplus } diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index 0597641ee..f1731238b 100644 --- a/lib/librte_ether/rte_ethdev.c +++ b/lib/librte_ether/rte_ethdev.c @@ -138,8 +138,8 @@ enum { STAT_QMAP_RX }; -uint8_t -rte_eth_find_next(uint8_t port_id) +uint16_t +rte_eth_find_next(uint16_t port_id) { while (port_id < RTE_MAX_ETHPORTS && rte_eth_devices[port_id].state != RTE_ETH_DEV_ATTACHED) @@ -187,7 +187,7 @@ rte_eth_dev_allocated(const char *name) return NULL; } -static uint8_t +static uint16_t rte_eth_dev_find_free_port(void) { unsigned i; @@ -200,7 +200,7 @@ rte_eth_dev_find_free_port(void) } static struct rte_eth_dev * -eth_dev_get(uint8_t port_id) +eth_dev_get(uint16_t port_id) { struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id]; @@ -216,7 +216,7 @@ eth_dev_get(uint8_t port_id) struct rte_eth_dev * rte_eth_dev_allocate(const char *name) { - uint8_t port_id; + uint16_t port_id; struct rte_eth_dev *eth_dev; port_id = rte_eth_dev_find_free_port(); @@ -251,7 +251,7 @@ rte_eth_dev_allocate(const char *name) struct rte_eth_dev * rte_eth_dev_attach_secondary(const char *name) { - uint8_t i; + uint16_t i; struct rte_eth_dev *eth_dev; if (rte_eth_dev_data == NULL) @@ -285,7 +285,7 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev) } int -rte_eth_dev_is_valid_port(uint8_t port_id) +rte_eth_dev_is_valid_port(uint16_t port_id) { if (port_id >= RTE_MAX_ETHPORTS || (rte_eth_devices[port_id].state != RTE_ETH_DEV_ATTACHED && @@ -296,17 +296,17 @@ rte_eth_dev_is_valid_port(uint8_t port_id) } int -rte_eth_dev_socket_id(uint8_t 
port_id) +rte_eth_dev_socket_id(uint16_t port_id) { RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -1); return rte_eth_devices[port_id].data->numa_node; } -uint8_t +uint16_t rte_eth_dev_count(void) { - uint8_t p; - uint8_t count; + uint16_t p; + uint16_t count; count = 0; @@ -317,7 +317,7 @@ rte_eth_dev_count(void) } int -rte_eth_dev_get_name_by_port(uint8_t port_id, char *name) +rte_eth_dev_get_name_by_port(uint16_t port_id, char *name) { const char *tmp; @@ -336,7 +336,7 @@ rte_eth_dev_get_name_by_port(uint8_t port_id, char *name) } int -rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id) +rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) { int ret; int i; @@ -361,7 +361,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id) } static int -rte_eth_dev_is_detachable(uint8_t port_id) +rte_eth_dev_is_detachable(uint16_t port_id) { uint32_t dev_flags; @@ -377,7 +377,7 @@ rte_eth_dev_is_detachable(uint8_t port_id) /* attach the new device, then store port_id of the device */ int -rte_eth_dev_attach(const char *devargs, uint8_t *port_id) +rte_eth_dev_attach(const char *devargs, uint16_t *port_id) { int ret = -1; int current = rte_eth_dev_count(); @@ -423,7 +423,7 @@ rte_eth_dev_attach(const char *devargs, uint8_t *port_id) /* detach the device, then store the name of the device */ int -rte_eth_dev_detach(uint8_t port_id, char *name) +rte_eth_dev_detach(uint16_t port_id, char *name) { int ret = -1; @@ -501,7 +501,7 @@ rte_eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues) } int -rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id) +rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) { struct rte_eth_dev *dev; @@ -527,7 +527,7 @@ rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id) } int -rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id) +rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id) { struct rte_eth_dev *dev; @@ -553,7 +553,7 @@ rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id) } int -rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id) +rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) { struct rte_eth_dev *dev; @@ -579,7 +579,7 @@ rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id) } int -rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id) +rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id) { struct rte_eth_dev *dev; @@ -688,7 +688,7 @@ rte_eth_speed_bitflag(uint32_t speed, int duplex) } int -rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, +rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, const struct rte_eth_conf *dev_conf) { struct rte_eth_dev *dev; @@ -839,7 +839,7 @@ _rte_eth_dev_reset(struct rte_eth_dev *dev) } static void -rte_eth_dev_config_restore(uint8_t port_id) +rte_eth_dev_config_restore(uint16_t port_id) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; @@ -894,7 +894,7 @@ rte_eth_dev_config_restore(uint8_t port_id) } int -rte_eth_dev_start(uint8_t port_id) +rte_eth_dev_start(uint16_t port_id) { struct rte_eth_dev *dev; int diag; @@ -906,7 +906,7 @@ rte_eth_dev_start(uint8_t port_id) RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP); if (dev->data->dev_started != 0) { - RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8 + RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu16 " already started\n", port_id); return 0; @@ -928,7 +928,7 @@ rte_eth_dev_start(uint8_t port_id) } void 
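The motivation for widening the iterator helpers can be seen from a caller-side sketch (illustrative only): with uint8_t a loop counter can never reach a sentinel above 255, so the counter and the return types of rte_eth_find_next() and rte_eth_dev_count() have to grow together:

#include <stdio.h>
#include <rte_ethdev.h>

/* Caller-side sketch, not part of the patch: a uint8_t loop variable
 * could never reach RTE_MAX_ETHPORTS once more than 255 ports exist. */
static void list_ports(void)
{
        uint16_t pid;

        printf("%u usable ports\n", rte_eth_dev_count());
        for (pid = rte_eth_find_next(0); pid < RTE_MAX_ETHPORTS;
             pid = rte_eth_find_next(pid + 1))
                printf("port %u: socket %d\n", pid,
                       rte_eth_dev_socket_id(pid));
}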
-rte_eth_dev_stop(uint8_t port_id) +rte_eth_dev_stop(uint16_t port_id) { struct rte_eth_dev *dev; @@ -938,7 +938,7 @@ rte_eth_dev_stop(uint8_t port_id) RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop); if (dev->data->dev_started == 0) { - RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu8 + RTE_PMD_DEBUG_TRACE("Device with port_id=%" PRIu16 " already stopped\n", port_id); return; @@ -949,7 +949,7 @@ rte_eth_dev_stop(uint8_t port_id) } int -rte_eth_dev_set_link_up(uint8_t port_id) +rte_eth_dev_set_link_up(uint16_t port_id) { struct rte_eth_dev *dev; @@ -962,7 +962,7 @@ rte_eth_dev_set_link_up(uint8_t port_id) } int -rte_eth_dev_set_link_down(uint8_t port_id) +rte_eth_dev_set_link_down(uint16_t port_id) { struct rte_eth_dev *dev; @@ -975,7 +975,7 @@ rte_eth_dev_set_link_down(uint8_t port_id) } void -rte_eth_dev_close(uint8_t port_id) +rte_eth_dev_close(uint16_t port_id) { struct rte_eth_dev *dev; @@ -995,7 +995,7 @@ rte_eth_dev_close(uint8_t port_id) } int -rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id, +rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, uint16_t nb_rx_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) @@ -1086,7 +1086,7 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id, } int -rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id, +rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { @@ -1190,7 +1190,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size) } int -rte_eth_tx_done_cleanup(uint8_t port_id, uint16_t queue_id, uint32_t free_cnt) +rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -1204,7 +1204,7 @@ rte_eth_tx_done_cleanup(uint8_t port_id, uint16_t queue_id, uint32_t free_cnt) } void -rte_eth_promiscuous_enable(uint8_t port_id) +rte_eth_promiscuous_enable(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1217,7 +1217,7 @@ rte_eth_promiscuous_enable(uint8_t port_id) } void -rte_eth_promiscuous_disable(uint8_t port_id) +rte_eth_promiscuous_disable(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1230,7 +1230,7 @@ rte_eth_promiscuous_disable(uint8_t port_id) } int -rte_eth_promiscuous_get(uint8_t port_id) +rte_eth_promiscuous_get(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1241,7 +1241,7 @@ rte_eth_promiscuous_get(uint8_t port_id) } void -rte_eth_allmulticast_enable(uint8_t port_id) +rte_eth_allmulticast_enable(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1254,7 +1254,7 @@ rte_eth_allmulticast_enable(uint8_t port_id) } void -rte_eth_allmulticast_disable(uint8_t port_id) +rte_eth_allmulticast_disable(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1267,7 +1267,7 @@ rte_eth_allmulticast_disable(uint8_t port_id) } int -rte_eth_allmulticast_get(uint8_t port_id) +rte_eth_allmulticast_get(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1292,7 +1292,7 @@ rte_eth_dev_atomic_read_link_status(struct rte_eth_dev *dev, } void -rte_eth_link_get(uint8_t port_id, struct rte_eth_link *eth_link) +rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link) { struct rte_eth_dev *dev; @@ -1309,7 +1309,7 @@ rte_eth_link_get(uint8_t port_id, struct rte_eth_link *eth_link) } void -rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link) +rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link) { struct rte_eth_dev *dev; @@ -1326,7 +1326,7 @@ 
rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *eth_link) } int -rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats) +rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats) { struct rte_eth_dev *dev; @@ -1342,7 +1342,7 @@ rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats) } void -rte_eth_stats_reset(uint8_t port_id) +rte_eth_stats_reset(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1355,7 +1355,7 @@ rte_eth_stats_reset(uint8_t port_id) } static int -get_xstats_count(uint8_t port_id) +get_xstats_count(uint16_t port_id) { struct rte_eth_dev *dev; int count; @@ -1384,7 +1384,7 @@ get_xstats_count(uint8_t port_id) } int -rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name, +rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, uint64_t *id) { int cnt_xstats, idx_xstat; @@ -1428,7 +1428,7 @@ rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name, } int -rte_eth_xstats_get_names_by_id(uint8_t port_id, +rte_eth_xstats_get_names_by_id(uint16_t port_id, struct rte_eth_xstat_name *xstats_names, unsigned int size, uint64_t *ids) { @@ -1545,7 +1545,7 @@ rte_eth_xstats_get_names_by_id(uint8_t port_id, } int -rte_eth_xstats_get_names(uint8_t port_id, +rte_eth_xstats_get_names(uint16_t port_id, struct rte_eth_xstat_name *xstats_names, unsigned int size) { @@ -1611,8 +1611,8 @@ rte_eth_xstats_get_names(uint8_t port_id, /* retrieve ethdev extended statistics */ int -rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids, uint64_t *values, - unsigned int n) +rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, + uint64_t *values, unsigned int n) { /* If need all xstats */ if (!ids) { @@ -1737,7 +1737,7 @@ rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids, uint64_t *values, } int -rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats, +rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats, unsigned int n) { struct rte_eth_stats eth_stats; @@ -1819,7 +1819,7 @@ rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats, /* reset ethdev extended statistics */ void -rte_eth_xstats_reset(uint8_t port_id) +rte_eth_xstats_reset(uint16_t port_id) { struct rte_eth_dev *dev; @@ -1837,7 +1837,7 @@ rte_eth_xstats_reset(uint8_t port_id) } static int -set_queue_stats_mapping(uint8_t port_id, uint16_t queue_id, uint8_t stat_idx, +set_queue_stats_mapping(uint16_t port_id, uint16_t queue_id, uint8_t stat_idx, uint8_t is_rx) { struct rte_eth_dev *dev; @@ -1853,7 +1853,7 @@ set_queue_stats_mapping(uint8_t port_id, uint16_t queue_id, uint8_t stat_idx, int -rte_eth_dev_set_tx_queue_stats_mapping(uint8_t port_id, uint16_t tx_queue_id, +rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, uint16_t tx_queue_id, uint8_t stat_idx) { return set_queue_stats_mapping(port_id, tx_queue_id, stat_idx, @@ -1862,7 +1862,7 @@ rte_eth_dev_set_tx_queue_stats_mapping(uint8_t port_id, uint16_t tx_queue_id, int -rte_eth_dev_set_rx_queue_stats_mapping(uint8_t port_id, uint16_t rx_queue_id, +rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t rx_queue_id, uint8_t stat_idx) { return set_queue_stats_mapping(port_id, rx_queue_id, stat_idx, @@ -1870,7 +1870,7 @@ rte_eth_dev_set_rx_queue_stats_mapping(uint8_t port_id, uint16_t rx_queue_id, } int -rte_eth_dev_fw_version_get(uint8_t port_id, char *fw_version, size_t fw_size) +rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size) { struct rte_eth_dev *dev; @@ -1882,7 +1882,7 @@ 
rte_eth_dev_fw_version_get(uint8_t port_id, char *fw_version, size_t fw_size) } void -rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info) +rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info) { struct rte_eth_dev *dev; const struct rte_eth_desc_lim lim = { @@ -1906,7 +1906,7 @@ rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info) } int -rte_eth_dev_get_supported_ptypes(uint8_t port_id, uint32_t ptype_mask, +rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, uint32_t *ptypes, int num) { int i, j; @@ -1932,7 +1932,7 @@ rte_eth_dev_get_supported_ptypes(uint8_t port_id, uint32_t ptype_mask, } void -rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr) +rte_eth_macaddr_get(uint16_t port_id, struct ether_addr *mac_addr) { struct rte_eth_dev *dev; @@ -1943,7 +1943,7 @@ rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr) int -rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu) +rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu) { struct rte_eth_dev *dev; @@ -1955,7 +1955,7 @@ rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu) } int -rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu) +rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) { int ret; struct rte_eth_dev *dev; @@ -1972,7 +1972,7 @@ rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu) } int -rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on) +rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on) { struct rte_eth_dev *dev; int ret; @@ -2011,7 +2011,8 @@ rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on) } int -rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, int on) +rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id, + int on) { struct rte_eth_dev *dev; @@ -2029,7 +2030,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, int o } int -rte_eth_dev_set_vlan_ether_type(uint8_t port_id, +rte_eth_dev_set_vlan_ether_type(uint16_t port_id, enum rte_vlan_type vlan_type, uint16_t tpid) { @@ -2043,7 +2044,7 @@ rte_eth_dev_set_vlan_ether_type(uint8_t port_id, } int -rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask) +rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask) { struct rte_eth_dev *dev; int ret = 0; @@ -2086,7 +2087,7 @@ rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask) } int -rte_eth_dev_get_vlan_offload(uint8_t port_id) +rte_eth_dev_get_vlan_offload(uint16_t port_id) { struct rte_eth_dev *dev; int ret = 0; @@ -2107,7 +2108,7 @@ rte_eth_dev_get_vlan_offload(uint8_t port_id) } int -rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on) +rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on) { struct rte_eth_dev *dev; @@ -2120,7 +2121,7 @@ rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on) } int -rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf) +rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) { struct rte_eth_dev *dev; @@ -2132,7 +2133,7 @@ rte_eth_dev_flow_ctrl_get(uint8_t port_id, struct rte_eth_fc_conf *fc_conf) } int -rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf) +rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) { struct rte_eth_dev *dev; @@ -2148,7 +2149,8 @@ rte_eth_dev_flow_ctrl_set(uint8_t port_id, struct rte_eth_fc_conf *fc_conf) } int -rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, struct rte_eth_pfc_conf 
*pfc_conf) +rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, + struct rte_eth_pfc_conf *pfc_conf) { struct rte_eth_dev *dev; @@ -2214,7 +2216,7 @@ rte_eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, } int -rte_eth_dev_rss_reta_update(uint8_t port_id, +rte_eth_dev_rss_reta_update(uint16_t port_id, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { @@ -2240,7 +2242,7 @@ rte_eth_dev_rss_reta_update(uint8_t port_id, } int -rte_eth_dev_rss_reta_query(uint8_t port_id, +rte_eth_dev_rss_reta_query(uint16_t port_id, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { @@ -2260,7 +2262,8 @@ rte_eth_dev_rss_reta_query(uint8_t port_id, } int -rte_eth_dev_rss_hash_update(uint8_t port_id, struct rte_eth_rss_conf *rss_conf) +rte_eth_dev_rss_hash_update(uint16_t port_id, + struct rte_eth_rss_conf *rss_conf) { struct rte_eth_dev *dev; uint16_t rss_hash_protos; @@ -2279,7 +2282,7 @@ rte_eth_dev_rss_hash_update(uint8_t port_id, struct rte_eth_rss_conf *rss_conf) } int -rte_eth_dev_rss_hash_conf_get(uint8_t port_id, +rte_eth_dev_rss_hash_conf_get(uint16_t port_id, struct rte_eth_rss_conf *rss_conf) { struct rte_eth_dev *dev; @@ -2291,7 +2294,7 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id, } int -rte_eth_dev_udp_tunnel_port_add(uint8_t port_id, +rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, struct rte_eth_udp_tunnel *udp_tunnel) { struct rte_eth_dev *dev; @@ -2313,7 +2316,7 @@ rte_eth_dev_udp_tunnel_port_add(uint8_t port_id, } int -rte_eth_dev_udp_tunnel_port_delete(uint8_t port_id, +rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, struct rte_eth_udp_tunnel *udp_tunnel) { struct rte_eth_dev *dev; @@ -2336,7 +2339,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint8_t port_id, } int -rte_eth_led_on(uint8_t port_id) +rte_eth_led_on(uint16_t port_id) { struct rte_eth_dev *dev; @@ -2347,7 +2350,7 @@ rte_eth_led_on(uint8_t port_id) } int -rte_eth_led_off(uint8_t port_id) +rte_eth_led_off(uint16_t port_id) { struct rte_eth_dev *dev; @@ -2362,7 +2365,7 @@ rte_eth_led_off(uint8_t port_id) * an empty spot. */ static int -get_mac_addr_index(uint8_t port_id, const struct ether_addr *addr) +get_mac_addr_index(uint16_t port_id, const struct ether_addr *addr) { struct rte_eth_dev_info dev_info; struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -2381,7 +2384,7 @@ get_mac_addr_index(uint8_t port_id, const struct ether_addr *addr) static const struct ether_addr null_mac_addr; int -rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr, +rte_eth_dev_mac_addr_add(uint16_t port_id, struct ether_addr *addr, uint32_t pool) { struct rte_eth_dev *dev; @@ -2434,7 +2437,7 @@ rte_eth_dev_mac_addr_add(uint8_t port_id, struct ether_addr *addr, } int -rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr) +rte_eth_dev_mac_addr_remove(uint16_t port_id, struct ether_addr *addr) { struct rte_eth_dev *dev; int index; @@ -2463,7 +2466,7 @@ rte_eth_dev_mac_addr_remove(uint8_t port_id, struct ether_addr *addr) } int -rte_eth_dev_default_mac_addr_set(uint8_t port_id, struct ether_addr *addr) +rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct ether_addr *addr) { struct rte_eth_dev *dev; @@ -2489,7 +2492,7 @@ rte_eth_dev_default_mac_addr_set(uint8_t port_id, struct ether_addr *addr) * an empty spot. 
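As an illustration of a control-path user of the widened port id, a hedged sketch of programming the RSS redirection table; the two-queue spread and the 8-group bound are arbitrary example choices, not part of this patch:

#include <string.h>
#include <errno.h>
#include <rte_ethdev.h>

/* Sketch only: spread traffic over queues 0 and 1 using the table size
 * the device reports; assumes reta_size is a multiple of
 * RTE_RETA_GROUP_SIZE, which holds for common PMDs. */
static int rss_spread_two_queues(uint16_t port_id)
{
        struct rte_eth_rss_reta_entry64 reta[8];
        struct rte_eth_dev_info info;
        uint16_t i;

        rte_eth_dev_info_get(port_id, &info);
        if (info.reta_size > 8 * RTE_RETA_GROUP_SIZE)
                return -EINVAL;
        memset(reta, 0, sizeof(reta));
        for (i = 0; i < info.reta_size; i++) {
                reta[i / RTE_RETA_GROUP_SIZE].mask |=
                        1ULL << (i % RTE_RETA_GROUP_SIZE);
                reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                        i % 2;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
}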
*/ static int -get_hash_mac_addr_index(uint8_t port_id, const struct ether_addr *addr) +get_hash_mac_addr_index(uint16_t port_id, const struct ether_addr *addr) { struct rte_eth_dev_info dev_info; struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -2508,7 +2511,7 @@ get_hash_mac_addr_index(uint8_t port_id, const struct ether_addr *addr) } int -rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr, +rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct ether_addr *addr, uint8_t on) { int index; @@ -2560,7 +2563,7 @@ rte_eth_dev_uc_hash_table_set(uint8_t port_id, struct ether_addr *addr, } int -rte_eth_dev_uc_all_hash_table_set(uint8_t port_id, uint8_t on) +rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on) { struct rte_eth_dev *dev; @@ -2572,7 +2575,7 @@ rte_eth_dev_uc_all_hash_table_set(uint8_t port_id, uint8_t on) return (*dev->dev_ops->uc_all_hash_table_set)(dev, on); } -int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx, +int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, uint16_t tx_rate) { struct rte_eth_dev *dev; @@ -2603,7 +2606,7 @@ int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx, } int -rte_eth_mirror_rule_set(uint8_t port_id, +rte_eth_mirror_rule_set(uint16_t port_id, struct rte_eth_mirror_conf *mirror_conf, uint8_t rule_id, uint8_t on) { @@ -2641,7 +2644,7 @@ rte_eth_mirror_rule_set(uint8_t port_id, } int -rte_eth_mirror_rule_reset(uint8_t port_id, uint8_t rule_id) +rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id) { struct rte_eth_dev *dev; @@ -2654,7 +2657,7 @@ rte_eth_mirror_rule_reset(uint8_t port_id, uint8_t rule_id) } int -rte_eth_dev_callback_register(uint8_t port_id, +rte_eth_dev_callback_register(uint16_t port_id, enum rte_eth_event_type event, rte_eth_dev_cb_fn cb_fn, void *cb_arg) { @@ -2694,7 +2697,7 @@ rte_eth_dev_callback_register(uint8_t port_id, } int -rte_eth_dev_callback_unregister(uint8_t port_id, +rte_eth_dev_callback_unregister(uint16_t port_id, enum rte_eth_event_type event, rte_eth_dev_cb_fn cb_fn, void *cb_arg) { @@ -2766,7 +2769,7 @@ _rte_eth_dev_callback_process(struct rte_eth_dev *dev, } int -rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data) +rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) { uint32_t vec; struct rte_eth_dev *dev; @@ -2827,7 +2830,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, } int -rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id, +rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, int epfd, int op, void *data) { uint32_t vec; @@ -2867,7 +2870,7 @@ rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id, } int -rte_eth_dev_rx_intr_enable(uint8_t port_id, +rte_eth_dev_rx_intr_enable(uint16_t port_id, uint16_t queue_id) { struct rte_eth_dev *dev; @@ -2881,7 +2884,7 @@ rte_eth_dev_rx_intr_enable(uint8_t port_id, } int -rte_eth_dev_rx_intr_disable(uint8_t port_id, +rte_eth_dev_rx_intr_disable(uint16_t port_id, uint16_t queue_id) { struct rte_eth_dev *dev; @@ -2896,7 +2899,8 @@ rte_eth_dev_rx_intr_disable(uint8_t port_id, int -rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type) +rte_eth_dev_filter_supported(uint16_t port_id, + enum rte_filter_type filter_type) { struct rte_eth_dev *dev; @@ -2909,7 +2913,7 @@ rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type) } int -rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type, 
+rte_eth_dev_filter_ctrl(uint16_t port_id, enum rte_filter_type filter_type, enum rte_filter_op filter_op, void *arg) { struct rte_eth_dev *dev; @@ -2922,7 +2926,7 @@ rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type, } void * -rte_eth_add_rx_callback(uint8_t port_id, uint16_t queue_id, +rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id, rte_rx_callback_fn fn, void *user_param) { #ifndef RTE_ETHDEV_RXTX_CALLBACKS @@ -2964,7 +2968,7 @@ rte_eth_add_rx_callback(uint8_t port_id, uint16_t queue_id, } void * -rte_eth_add_first_rx_callback(uint8_t port_id, uint16_t queue_id, +rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id, rte_rx_callback_fn fn, void *user_param) { #ifndef RTE_ETHDEV_RXTX_CALLBACKS @@ -2999,7 +3003,7 @@ rte_eth_add_first_rx_callback(uint8_t port_id, uint16_t queue_id, } void * -rte_eth_add_tx_callback(uint8_t port_id, uint16_t queue_id, +rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id, rte_tx_callback_fn fn, void *user_param) { #ifndef RTE_ETHDEV_RXTX_CALLBACKS @@ -3042,7 +3046,7 @@ rte_eth_add_tx_callback(uint8_t port_id, uint16_t queue_id, } int -rte_eth_remove_rx_callback(uint8_t port_id, uint16_t queue_id, +rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id, struct rte_eth_rxtx_callback *user_cb) { #ifndef RTE_ETHDEV_RXTX_CALLBACKS @@ -3076,7 +3080,7 @@ rte_eth_remove_rx_callback(uint8_t port_id, uint16_t queue_id, } int -rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id, +rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id, struct rte_eth_rxtx_callback *user_cb) { #ifndef RTE_ETHDEV_RXTX_CALLBACKS @@ -3110,7 +3114,7 @@ rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id, } int -rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id, +rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id, struct rte_eth_rxq_info *qinfo) { struct rte_eth_dev *dev; @@ -3134,7 +3138,7 @@ rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id, } int -rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id, +rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, struct rte_eth_txq_info *qinfo) { struct rte_eth_dev *dev; @@ -3158,7 +3162,7 @@ rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id, } int -rte_eth_dev_set_mc_addr_list(uint8_t port_id, +rte_eth_dev_set_mc_addr_list(uint16_t port_id, struct ether_addr *mc_addr_set, uint32_t nb_mc_addr) { @@ -3172,7 +3176,7 @@ rte_eth_dev_set_mc_addr_list(uint8_t port_id, } int -rte_eth_timesync_enable(uint8_t port_id) +rte_eth_timesync_enable(uint16_t port_id) { struct rte_eth_dev *dev; @@ -3184,7 +3188,7 @@ rte_eth_timesync_enable(uint8_t port_id) } int -rte_eth_timesync_disable(uint8_t port_id) +rte_eth_timesync_disable(uint16_t port_id) { struct rte_eth_dev *dev; @@ -3196,7 +3200,7 @@ rte_eth_timesync_disable(uint8_t port_id) } int -rte_eth_timesync_read_rx_timestamp(uint8_t port_id, struct timespec *timestamp, +rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, uint32_t flags) { struct rte_eth_dev *dev; @@ -3209,7 +3213,8 @@ rte_eth_timesync_read_rx_timestamp(uint8_t port_id, struct timespec *timestamp, } int -rte_eth_timesync_read_tx_timestamp(uint8_t port_id, struct timespec *timestamp) +rte_eth_timesync_read_tx_timestamp(uint16_t port_id, + struct timespec *timestamp) { struct rte_eth_dev *dev; @@ -3221,7 +3226,7 @@ rte_eth_timesync_read_tx_timestamp(uint8_t port_id, struct timespec *timestamp) } int -rte_eth_timesync_adjust_time(uint8_t port_id, int64_t delta) 
+rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta) { struct rte_eth_dev *dev; @@ -3233,7 +3238,7 @@ rte_eth_timesync_adjust_time(uint8_t port_id, int64_t delta) } int -rte_eth_timesync_read_time(uint8_t port_id, struct timespec *timestamp) +rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp) { struct rte_eth_dev *dev; @@ -3245,7 +3250,7 @@ rte_eth_timesync_read_time(uint8_t port_id, struct timespec *timestamp) } int -rte_eth_timesync_write_time(uint8_t port_id, const struct timespec *timestamp) +rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp) { struct rte_eth_dev *dev; @@ -3257,7 +3262,7 @@ rte_eth_timesync_write_time(uint8_t port_id, const struct timespec *timestamp) } int -rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info) +rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info) { struct rte_eth_dev *dev; @@ -3269,7 +3274,7 @@ rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info) } int -rte_eth_dev_get_eeprom_length(uint8_t port_id) +rte_eth_dev_get_eeprom_length(uint16_t port_id) { struct rte_eth_dev *dev; @@ -3281,7 +3286,7 @@ rte_eth_dev_get_eeprom_length(uint8_t port_id) } int -rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info) +rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) { struct rte_eth_dev *dev; @@ -3293,7 +3298,7 @@ rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info) } int -rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info) +rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) { struct rte_eth_dev *dev; @@ -3305,7 +3310,7 @@ rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info) } int -rte_eth_dev_get_dcb_info(uint8_t port_id, +rte_eth_dev_get_dcb_info(uint16_t port_id, struct rte_eth_dcb_info *dcb_info) { struct rte_eth_dev *dev; @@ -3320,7 +3325,7 @@ rte_eth_dev_get_dcb_info(uint8_t port_id, } int -rte_eth_dev_l2_tunnel_eth_type_conf(uint8_t port_id, +rte_eth_dev_l2_tunnel_eth_type_conf(uint16_t port_id, struct rte_eth_l2_tunnel_conf *l2_tunnel) { struct rte_eth_dev *dev; @@ -3343,7 +3348,7 @@ rte_eth_dev_l2_tunnel_eth_type_conf(uint8_t port_id, } int -rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id, +rte_eth_dev_l2_tunnel_offload_set(uint16_t port_id, struct rte_eth_l2_tunnel_conf *l2_tunnel, uint32_t mask, uint8_t en) @@ -3387,7 +3392,7 @@ rte_eth_dev_adjust_nb_desc(uint16_t *nb_desc, } int -rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, +rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id, uint16_t *nb_rx_desc, uint16_t *nb_tx_desc) { diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 0adf3274a..ca75b199c 100644 --- a/lib/librte_ether/rte_ethdev.h +++ b/lib/librte_ether/rte_ethdev.h @@ -1568,7 +1568,7 @@ struct eth_dev_ops { * @return * The number of packets returned to the user. */ -typedef uint16_t (*rte_rx_callback_fn)(uint8_t port, uint16_t queue, +typedef uint16_t (*rte_rx_callback_fn)(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t max_pkts, void *user_param); @@ -1592,7 +1592,7 @@ typedef uint16_t (*rte_rx_callback_fn)(uint8_t port, uint16_t queue, * @return * The number of packets to be written to the NIC. 
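A hypothetical TX callback that matches the widened rte_tx_callback_fn prototype; count_tx_cb and the counter argument are example names only:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Counts packets handed to the PMD and must return nb_pkts unchanged.
 * Register it with
 * rte_eth_add_tx_callback(port, queue, count_tx_cb, &counter). */
static uint16_t
count_tx_cb(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[],
            uint16_t nb_pkts, void *user_param)
{
        uint64_t *counter = user_param;

        (void)port;
        (void)queue;
        (void)pkts;
        *counter += nb_pkts;
        return nb_pkts;
}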
*/ -typedef uint16_t (*rte_tx_callback_fn)(uint8_t port, uint16_t queue, +typedef uint16_t (*rte_tx_callback_fn)(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[], uint16_t nb_pkts, void *user_param); /** @@ -1695,7 +1695,7 @@ struct rte_eth_dev_data { /** bitmap array of associating Ethernet MAC addresses to pools */ struct ether_addr* hash_mac_addrs; /** Device Ethernet MAC addresses of hash filtering. */ - uint8_t port_id; /**< Device [external] port identifier. */ + uint16_t port_id; /**< Device [external] port identifier. */ __extension__ uint8_t promiscuous : 1, /**< RX promiscuous mode ON(1) / OFF(0). */ scattered_rx : 1, /**< RX of scattered packets is ON(1) / OFF(0) */ @@ -1737,7 +1737,7 @@ extern struct rte_eth_dev rte_eth_devices[]; * @return * Next valid port id, RTE_MAX_ETHPORTS if there is none. */ -uint8_t rte_eth_find_next(uint8_t port_id); +uint16_t rte_eth_find_next(uint16_t port_id); /** * Macro to iterate over all enabled ethdev ports. @@ -1760,7 +1760,7 @@ uint8_t rte_eth_find_next(uint8_t port_id); * @return * - The total number of usable Ethernet devices. */ -uint8_t rte_eth_dev_count(void); +uint16_t rte_eth_dev_count(void); /** * @internal @@ -1821,7 +1821,7 @@ int rte_eth_dev_release_port(struct rte_eth_dev *eth_dev); * @return * 0 on success and port_id is filled, negative on error */ -int rte_eth_dev_attach(const char *devargs, uint8_t *port_id); +int rte_eth_dev_attach(const char *devargs, uint16_t *port_id); /** * Detach a Ethernet device specified by port identifier. @@ -1836,7 +1836,7 @@ int rte_eth_dev_attach(const char *devargs, uint8_t *port_id); * @return * 0 on success and devname is filled, negative on error */ -int rte_eth_dev_detach(uint8_t port_id, char *devname); +int rte_eth_dev_detach(uint16_t port_id, char *devname); /** * Convert a numerical speed in Mbps to a bitmap flag that can be used in @@ -1880,7 +1880,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex); * - 0: Success, device configured. * - <0: Error code returned by the driver configuration function. */ -int rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_queue, +int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue, uint16_t nb_tx_queue, const struct rte_eth_conf *eth_conf); /** @@ -1935,7 +1935,7 @@ void _rte_eth_dev_reset(struct rte_eth_dev *dev); * allocate network memory buffers from the memory pool when * initializing receive descriptors. */ -int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id, +int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, uint16_t nb_rx_desc, unsigned int socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool); @@ -1983,7 +1983,7 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id, * - 0: Success, the transmit queue is correctly set up. * - -ENOMEM: Unable to allocate the transmit ring descriptors. */ -int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id, +int rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); @@ -1997,7 +1997,7 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id, * a default of zero if the socket could not be determined. * -1 is returned is the port_id value is out of range. 
*/ -int rte_eth_dev_socket_id(uint8_t port_id); +int rte_eth_dev_socket_id(uint16_t port_id); /** * Check if port_id of device is attached @@ -2008,7 +2008,7 @@ int rte_eth_dev_socket_id(uint8_t port_id); * - 0 if port is out of range or not attached * - 1 if device is attached */ -int rte_eth_dev_is_valid_port(uint8_t port_id); +int rte_eth_dev_is_valid_port(uint16_t port_id); /** * Start specified RX queue of a port. It is used when rx_deferred_start @@ -2025,7 +2025,7 @@ int rte_eth_dev_is_valid_port(uint8_t port_id); * - -EINVAL: The port_id or the queue_id out of range. * - -ENOTSUP: The function not supported in PMD driver. */ -int rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id); +int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id); /** * Stop specified RX queue of a port @@ -2041,7 +2041,7 @@ int rte_eth_dev_rx_queue_start(uint8_t port_id, uint16_t rx_queue_id); * - -EINVAL: The port_id or the queue_id out of range. * - -ENOTSUP: The function not supported in PMD driver. */ -int rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id); +int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id); /** * Start TX for specified queue of a port. It is used when tx_deferred_start @@ -2058,7 +2058,7 @@ int rte_eth_dev_rx_queue_stop(uint8_t port_id, uint16_t rx_queue_id); * - -EINVAL: The port_id or the queue_id out of range. * - -ENOTSUP: The function not supported in PMD driver. */ -int rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id); +int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id); /** * Stop specified TX queue of a port @@ -2074,9 +2074,7 @@ int rte_eth_dev_tx_queue_start(uint8_t port_id, uint16_t tx_queue_id); * - -EINVAL: The port_id or the queue_id out of range. * - -ENOTSUP: The function not supported in PMD driver. */ -int rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id); - - +int rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id); /** * Start an Ethernet device. @@ -2093,7 +2091,7 @@ int rte_eth_dev_tx_queue_stop(uint8_t port_id, uint16_t tx_queue_id); * - 0: Success, Ethernet device started. * - <0: Error code of the driver device start function. */ -int rte_eth_dev_start(uint8_t port_id); +int rte_eth_dev_start(uint16_t port_id); /** * Stop an Ethernet device. The device can be restarted with a call to @@ -2102,7 +2100,7 @@ int rte_eth_dev_start(uint8_t port_id); * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_dev_stop(uint8_t port_id); +void rte_eth_dev_stop(uint16_t port_id); /** @@ -2117,7 +2115,7 @@ void rte_eth_dev_stop(uint8_t port_id); * - 0: Success, Ethernet device linked up. * - <0: Error code of the driver device link up function. */ -int rte_eth_dev_set_link_up(uint8_t port_id); +int rte_eth_dev_set_link_up(uint16_t port_id); /** * Link down an Ethernet device. @@ -2128,7 +2126,7 @@ int rte_eth_dev_set_link_up(uint8_t port_id); * @param port_id * The port identifier of the Ethernet device. */ -int rte_eth_dev_set_link_down(uint8_t port_id); +int rte_eth_dev_set_link_down(uint16_t port_id); /** * Close a stopped Ethernet device. The device cannot be restarted! @@ -2138,7 +2136,7 @@ int rte_eth_dev_set_link_down(uint8_t port_id); * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_dev_close(uint8_t port_id); +void rte_eth_dev_close(uint16_t port_id); /** * Enable receipt in promiscuous mode for an Ethernet device. 
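For reference, a minimal bring-up sketch built on the prototypes above (one RX and one TX queue, default configuration); port_init, the descriptor counts and the zeroed rte_eth_conf are illustrative choices, not mandated by the patch:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch only: configure, set up one queue pair and start the port.
 * mp is an existing mbuf pool owned by the caller. */
static int port_init(uint16_t port_id, struct rte_mempool *mp)
{
        struct rte_eth_conf conf;
        int ret;

        memset(&conf, 0, sizeof(conf));
        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret < 0)
                return ret;
        ret = rte_eth_rx_queue_setup(port_id, 0, 128,
                                     rte_eth_dev_socket_id(port_id), NULL, mp);
        if (ret < 0)
                return ret;
        ret = rte_eth_tx_queue_setup(port_id, 0, 512,
                                     rte_eth_dev_socket_id(port_id), NULL);
        if (ret < 0)
                return ret;
        return rte_eth_dev_start(port_id);
}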
@@ -2146,7 +2144,7 @@ void rte_eth_dev_close(uint8_t port_id); * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_promiscuous_enable(uint8_t port_id); +void rte_eth_promiscuous_enable(uint16_t port_id); /** * Disable receipt in promiscuous mode for an Ethernet device. @@ -2154,7 +2152,7 @@ void rte_eth_promiscuous_enable(uint8_t port_id); * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_promiscuous_disable(uint8_t port_id); +void rte_eth_promiscuous_disable(uint16_t port_id); /** * Return the value of promiscuous mode for an Ethernet device. @@ -2166,7 +2164,7 @@ void rte_eth_promiscuous_disable(uint8_t port_id); * - (0) if promiscuous is disabled. * - (-1) on error */ -int rte_eth_promiscuous_get(uint8_t port_id); +int rte_eth_promiscuous_get(uint16_t port_id); /** * Enable the receipt of any multicast frame by an Ethernet device. @@ -2174,7 +2172,7 @@ int rte_eth_promiscuous_get(uint8_t port_id); * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_allmulticast_enable(uint8_t port_id); +void rte_eth_allmulticast_enable(uint16_t port_id); /** * Disable the receipt of all multicast frames by an Ethernet device. @@ -2182,7 +2180,7 @@ void rte_eth_allmulticast_enable(uint8_t port_id); * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_allmulticast_disable(uint8_t port_id); +void rte_eth_allmulticast_disable(uint16_t port_id); /** * Return the value of allmulticast mode for an Ethernet device. @@ -2194,7 +2192,7 @@ void rte_eth_allmulticast_disable(uint8_t port_id); * - (0) if allmulticast is disabled. * - (-1) on error */ -int rte_eth_allmulticast_get(uint8_t port_id); +int rte_eth_allmulticast_get(uint16_t port_id); /** * Retrieve the status (ON/OFF), the speed (in Mbps) and the mode (HALF-DUPLEX @@ -2207,7 +2205,7 @@ int rte_eth_allmulticast_get(uint8_t port_id); * A pointer to an *rte_eth_link* structure to be filled with * the status, the speed and the mode of the Ethernet device link. */ -void rte_eth_link_get(uint8_t port_id, struct rte_eth_link *link); +void rte_eth_link_get(uint16_t port_id, struct rte_eth_link *link); /** * Retrieve the status (ON/OFF), the speed (in Mbps) and the mode (HALF-DUPLEX @@ -2220,7 +2218,7 @@ void rte_eth_link_get(uint8_t port_id, struct rte_eth_link *link); * A pointer to an *rte_eth_link* structure to be filled with * the status, the speed and the mode of the Ethernet device link. */ -void rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *link); +void rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *link); /** * Retrieve the general I/O statistics of an Ethernet device. @@ -2239,7 +2237,7 @@ void rte_eth_link_get_nowait(uint8_t port_id, struct rte_eth_link *link); * @return * Zero if successful. Non-zero otherwise. */ -int rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats); +int rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats); /** * Reset the general I/O statistics of an Ethernet device. @@ -2247,7 +2245,7 @@ int rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats); * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_stats_reset(uint8_t port_id); +void rte_eth_stats_reset(uint16_t port_id); /** * Retrieve names of extended statistics of an Ethernet device. @@ -2269,7 +2267,7 @@ void rte_eth_stats_reset(uint8_t port_id); * shall not be used by the caller. * - A negative value on error (invalid port id). 
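A short, illustrative xstats dump built on the prototypes above; dump_xstats is a hypothetical helper and the NULL-buffer sizing call follows the documented behaviour of rte_eth_xstats_get():

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Sketch: a first call with a NULL buffer returns the number of
 * entries to allocate, then names and values are fetched and printed. */
static void dump_xstats(uint16_t port_id)
{
        struct rte_eth_xstat *xstats = NULL;
        struct rte_eth_xstat_name *names = NULL;
        int n, i;

        n = rte_eth_xstats_get(port_id, NULL, 0);
        if (n <= 0)
                return;
        xstats = calloc(n, sizeof(*xstats));
        names = calloc(n, sizeof(*names));
        if (xstats == NULL || names == NULL)
                goto out;
        if (rte_eth_xstats_get(port_id, xstats, n) < 0 ||
            rte_eth_xstats_get_names(port_id, names, n) < 0)
                goto out;
        for (i = 0; i < n; i++)
                printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
out:
        free(xstats);
        free(names);
}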
*/ -int rte_eth_xstats_get_names(uint8_t port_id, +int rte_eth_xstats_get_names(uint16_t port_id, struct rte_eth_xstat_name *xstats_names, unsigned int size); @@ -2295,7 +2293,7 @@ int rte_eth_xstats_get_names(uint8_t port_id, * shall not be used by the caller. * - A negative value on error (invalid port id). */ -int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats, +int rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats, unsigned int n); /** @@ -2321,7 +2319,7 @@ int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats, * - A negative value on error (invalid port id). */ int -rte_eth_xstats_get_names_by_id(uint8_t port_id, +rte_eth_xstats_get_names_by_id(uint16_t port_id, struct rte_eth_xstat_name *xstats_names, unsigned int size, uint64_t *ids); @@ -2348,7 +2346,7 @@ rte_eth_xstats_get_names_by_id(uint8_t port_id, * shall not be used by the caller. * - A negative value on error (invalid port id). */ -int rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids, +int rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, uint64_t *values, unsigned int n); /** @@ -2368,7 +2366,7 @@ int rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids, * -ENODEV for invalid port_id, * -EINVAL if the xstat_name doesn't exist in port_id */ -int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name, +int rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, uint64_t *id); /** @@ -2377,7 +2375,7 @@ int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name, * @param port_id * The port identifier of the Ethernet device. */ -void rte_eth_xstats_reset(uint8_t port_id); +void rte_eth_xstats_reset(uint16_t port_id); /** * Set a mapping for the specified transmit queue to the specified per-queue @@ -2396,7 +2394,7 @@ void rte_eth_xstats_reset(uint8_t port_id); * @return * Zero if successful. Non-zero otherwise. */ -int rte_eth_dev_set_tx_queue_stats_mapping(uint8_t port_id, +int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, uint16_t tx_queue_id, uint8_t stat_idx); /** @@ -2416,7 +2414,7 @@ int rte_eth_dev_set_tx_queue_stats_mapping(uint8_t port_id, * @return * Zero if successful. Non-zero otherwise. */ -int rte_eth_dev_set_rx_queue_stats_mapping(uint8_t port_id, +int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, uint16_t rx_queue_id, uint8_t stat_idx); @@ -2429,7 +2427,7 @@ int rte_eth_dev_set_rx_queue_stats_mapping(uint8_t port_id, * A pointer to a structure of type *ether_addr* to be filled with * the Ethernet address of the Ethernet device. */ -void rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr); +void rte_eth_macaddr_get(uint16_t port_id, struct ether_addr *mac_addr); /** * Retrieve the contextual information of an Ethernet device. @@ -2440,7 +2438,7 @@ void rte_eth_macaddr_get(uint8_t port_id, struct ether_addr *mac_addr); * A pointer to a structure of type *rte_eth_dev_info* to be filled with * the contextual information of the Ethernet device. */ -void rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info); +void rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info); /** * Retrieve the firmware version of a device. @@ -2460,7 +2458,7 @@ void rte_eth_dev_info_get(uint8_t port_id, struct rte_eth_dev_info *dev_info); * - (>0) if *fw_size* is not enough to store firmware version, return * the size of the non truncated string. 
*/ -int rte_eth_dev_fw_version_get(uint8_t port_id, +int rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size); /** @@ -2501,7 +2499,7 @@ int rte_eth_dev_fw_version_get(uint8_t port_id, * count of supported ptypes will be returned. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_get_supported_ptypes(uint8_t port_id, uint32_t ptype_mask, +int rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, uint32_t *ptypes, int num); /** @@ -2515,7 +2513,7 @@ int rte_eth_dev_get_supported_ptypes(uint8_t port_id, uint32_t ptype_mask, * - (0) if successful. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu); +int rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu); /** * Change the MTU of an Ethernet device. @@ -2531,7 +2529,7 @@ int rte_eth_dev_get_mtu(uint8_t port_id, uint16_t *mtu); * - (-EINVAL) if *mtu* invalid. * - (-EBUSY) if operation is not allowed when the port is running */ -int rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu); +int rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu); /** * Enable/Disable hardware filtering by an Ethernet device of received @@ -2551,7 +2549,7 @@ int rte_eth_dev_set_mtu(uint8_t port_id, uint16_t mtu); * - (-ENOSYS) if VLAN filtering on *port_id* disabled. * - (-EINVAL) if *vlan_id* > 4095. */ -int rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on); +int rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on); /** * Enable/Disable hardware VLAN Strip by a rx queue of an Ethernet device. @@ -2572,7 +2570,7 @@ int rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on); * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if *rx_queue_id* invalid. */ -int rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, +int rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id, int on); /** @@ -2591,7 +2589,7 @@ int rte_eth_dev_set_vlan_strip_on_queue(uint8_t port_id, uint16_t rx_queue_id, * - (-ENOSUP) if hardware-assisted VLAN TPID setup is not supported. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_set_vlan_ether_type(uint8_t port_id, +int rte_eth_dev_set_vlan_ether_type(uint16_t port_id, enum rte_vlan_type vlan_type, uint16_t tag_type); @@ -2615,7 +2613,7 @@ int rte_eth_dev_set_vlan_ether_type(uint8_t port_id, * - (-ENOSUP) if hardware-assisted VLAN filtering not configured. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask); +int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask); /** * Read VLAN Offload configuration from an Ethernet device @@ -2629,7 +2627,7 @@ int rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask); * ETH_VLAN_EXTEND_OFFLOAD * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_get_vlan_offload(uint8_t port_id); +int rte_eth_dev_get_vlan_offload(uint16_t port_id); /** * Set port based TX VLAN insertion on or off. @@ -2645,7 +2643,7 @@ int rte_eth_dev_get_vlan_offload(uint8_t port_id); * - (0) if successful. * - negative if failed. */ -int rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on); +int rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on); /** * @@ -2730,7 +2728,7 @@ int rte_eth_dev_set_vlan_pvid(uint8_t port_id, uint16_t pvid, int on); * *rx_pkts* array. 
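A minimal RX polling sketch, assuming the port has already been configured and started; apart from the uint16_t port_id the fast-path call is unchanged:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative only: receive up to one burst and drop it. */
static void rx_drain(uint16_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *pkts[32];
        uint16_t nb_rx, i;

        nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
        for (i = 0; i < nb_rx; i++)
                rte_pktmbuf_free(pkts[i]);
}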
*/ static inline uint16_t -rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id, +rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **rx_pkts, const uint16_t nb_pkts) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -2775,7 +2773,7 @@ rte_eth_rx_burst(uint8_t port_id, uint16_t queue_id, * (-ENOTSUP) if the device does not support this function */ static inline int -rte_eth_rx_queue_count(uint8_t port_id, uint16_t queue_id) +rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) { struct rte_eth_dev *dev; @@ -2804,7 +2802,7 @@ rte_eth_rx_queue_count(uint8_t port_id, uint16_t queue_id) * - (-ENOTSUP) if the device does not support this function */ static inline int -rte_eth_rx_descriptor_done(uint8_t port_id, uint16_t queue_id, uint16_t offset) +rte_eth_rx_descriptor_done(uint16_t port_id, uint16_t queue_id, uint16_t offset) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); @@ -2851,7 +2849,7 @@ rte_eth_rx_descriptor_done(uint8_t port_id, uint16_t queue_id, uint16_t offset) * - (-ENODEV) bad port or queue (only if compiled with debug). */ static inline int -rte_eth_rx_descriptor_status(uint8_t port_id, uint16_t queue_id, +rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id, uint16_t offset) { struct rte_eth_dev *dev; @@ -2908,7 +2906,7 @@ rte_eth_rx_descriptor_status(uint8_t port_id, uint16_t queue_id, * - (-ENOTSUP) if the device does not support this function. * - (-ENODEV) bad port or queue (only if compiled with debug). */ -static inline int rte_eth_tx_descriptor_status(uint8_t port_id, +static inline int rte_eth_tx_descriptor_status(uint16_t port_id, uint16_t queue_id, uint16_t offset) { struct rte_eth_dev *dev; @@ -2992,7 +2990,7 @@ static inline int rte_eth_tx_descriptor_status(uint8_t port_id, * *tx_pkts* parameter when the transmit ring is full or has been filled up. */ static inline uint16_t -rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id, +rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; @@ -3081,7 +3079,7 @@ rte_eth_tx_burst(uint8_t port_id, uint16_t queue_id, #ifndef RTE_ETHDEV_TX_PREPARE_NOOP static inline uint16_t -rte_eth_tx_prepare(uint8_t port_id, uint16_t queue_id, +rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { struct rte_eth_dev *dev; @@ -3123,7 +3121,8 @@ rte_eth_tx_prepare(uint8_t port_id, uint16_t queue_id, */ static inline uint16_t -rte_eth_tx_prepare(__rte_unused uint8_t port_id, __rte_unused uint16_t queue_id, +rte_eth_tx_prepare(__rte_unused uint16_t port_id, + __rte_unused uint16_t queue_id, __rte_unused struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { return nb_pkts; @@ -3192,7 +3191,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size); * callback is called for any packets which could not be sent. */ static inline uint16_t -rte_eth_tx_buffer_flush(uint8_t port_id, uint16_t queue_id, +rte_eth_tx_buffer_flush(uint16_t port_id, uint16_t queue_id, struct rte_eth_dev_tx_buffer *buffer) { uint16_t sent; @@ -3244,7 +3243,7 @@ rte_eth_tx_buffer_flush(uint8_t port_id, uint16_t queue_id, * the rest. 
*/ static __rte_always_inline uint16_t -rte_eth_tx_buffer(uint8_t port_id, uint16_t queue_id, +rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id, struct rte_eth_dev_tx_buffer *buffer, struct rte_mbuf *tx_pkt) { buffer->pkts[buffer->length++] = tx_pkt; @@ -3360,7 +3359,7 @@ rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent, * are in use. */ int -rte_eth_tx_done_cleanup(uint8_t port_id, uint16_t queue_id, uint32_t free_cnt); +rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt); /** * The eth device event type for interrupt, and maybe others in the future. @@ -3378,7 +3377,7 @@ enum rte_eth_event_type { RTE_ETH_EVENT_MAX /**< max value of this enum */ }; -typedef int (*rte_eth_dev_cb_fn)(uint8_t port_id, +typedef int (*rte_eth_dev_cb_fn)(uint16_t port_id, enum rte_eth_event_type event, void *cb_arg, void *ret_param); /**< user application callback to be registered for interrupts */ @@ -3400,7 +3399,7 @@ typedef int (*rte_eth_dev_cb_fn)(uint8_t port_id, * - On success, zero. * - On failure, a negative value. */ -int rte_eth_dev_callback_register(uint8_t port_id, +int rte_eth_dev_callback_register(uint16_t port_id, enum rte_eth_event_type event, rte_eth_dev_cb_fn cb_fn, void *cb_arg); @@ -3421,7 +3420,7 @@ int rte_eth_dev_callback_register(uint8_t port_id, * - On success, zero. * - On failure, a negative value. */ -int rte_eth_dev_callback_unregister(uint8_t port_id, +int rte_eth_dev_callback_unregister(uint16_t port_id, enum rte_eth_event_type event, rte_eth_dev_cb_fn cb_fn, void *cb_arg); @@ -3467,7 +3466,7 @@ int _rte_eth_dev_callback_process(struct rte_eth_dev *dev, * that operation. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_rx_intr_enable(uint8_t port_id, uint16_t queue_id); +int rte_eth_dev_rx_intr_enable(uint16_t port_id, uint16_t queue_id); /** * When lcore wakes up from rx interrupt indicating packet coming, disable rx @@ -3488,7 +3487,7 @@ int rte_eth_dev_rx_intr_enable(uint8_t port_id, uint16_t queue_id); * that operation. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_rx_intr_disable(uint8_t port_id, uint16_t queue_id); +int rte_eth_dev_rx_intr_disable(uint16_t port_id, uint16_t queue_id); /** * RX Interrupt control per port. @@ -3507,7 +3506,7 @@ int rte_eth_dev_rx_intr_disable(uint8_t port_id, uint16_t queue_id); * - On success, zero. * - On failure, a negative value. */ -int rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data); +int rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data); /** * RX Interrupt control per queue. @@ -3530,7 +3529,7 @@ int rte_eth_dev_rx_intr_ctl(uint8_t port_id, int epfd, int op, void *data); * - On success, zero. * - On failure, a negative value. */ -int rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id, +int rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, int epfd, int op, void *data); /** @@ -3545,7 +3544,7 @@ int rte_eth_dev_rx_intr_ctl_q(uint8_t port_id, uint16_t queue_id, * that operation. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_led_on(uint8_t port_id); +int rte_eth_led_on(uint16_t port_id); /** * Turn off the LED on the Ethernet device. @@ -3559,7 +3558,7 @@ int rte_eth_led_on(uint8_t port_id); * that operation. * - (-ENODEV) if *port_id* invalid. 
*/ -int rte_eth_led_off(uint8_t port_id); +int rte_eth_led_off(uint16_t port_id); /** * Get current status of the Ethernet link flow control for Ethernet device @@ -3573,7 +3572,7 @@ int rte_eth_led_off(uint8_t port_id); * - (-ENOTSUP) if hardware doesn't support flow control. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_flow_ctrl_get(uint8_t port_id, +int rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf); /** @@ -3590,7 +3589,7 @@ int rte_eth_dev_flow_ctrl_get(uint8_t port_id, * - (-EINVAL) if bad parameter * - (-EIO) if flow control setup failure */ -int rte_eth_dev_flow_ctrl_set(uint8_t port_id, +int rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf); /** @@ -3608,7 +3607,7 @@ int rte_eth_dev_flow_ctrl_set(uint8_t port_id, * - (-EINVAL) if bad parameter * - (-EIO) if flow control setup failure */ -int rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, +int rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, struct rte_eth_pfc_conf *pfc_conf); /** @@ -3629,7 +3628,7 @@ int rte_eth_dev_priority_flow_ctrl_set(uint8_t port_id, * - (-ENOSPC) if no more MAC addresses can be added. * - (-EINVAL) if MAC address is invalid. */ -int rte_eth_dev_mac_addr_add(uint8_t port, struct ether_addr *mac_addr, +int rte_eth_dev_mac_addr_add(uint16_t port, struct ether_addr *mac_addr, uint32_t pool); /** @@ -3645,7 +3644,7 @@ int rte_eth_dev_mac_addr_add(uint8_t port, struct ether_addr *mac_addr, * - (-ENODEV) if *port* invalid. * - (-EADDRINUSE) if attempting to remove the default MAC address */ -int rte_eth_dev_mac_addr_remove(uint8_t port, struct ether_addr *mac_addr); +int rte_eth_dev_mac_addr_remove(uint16_t port, struct ether_addr *mac_addr); /** * Set the default MAC address. @@ -3660,8 +3659,8 @@ int rte_eth_dev_mac_addr_remove(uint8_t port, struct ether_addr *mac_addr); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if MAC address is invalid. */ -int rte_eth_dev_default_mac_addr_set(uint8_t port, struct ether_addr *mac_addr); - +int rte_eth_dev_default_mac_addr_set(uint16_t port, + struct ether_addr *mac_addr); /** * Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device. @@ -3678,7 +3677,7 @@ int rte_eth_dev_default_mac_addr_set(uint8_t port, struct ether_addr *mac_addr); * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_eth_dev_rss_reta_update(uint8_t port, +int rte_eth_dev_rss_reta_update(uint16_t port, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); @@ -3697,7 +3696,7 @@ int rte_eth_dev_rss_reta_update(uint8_t port, * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_eth_dev_rss_reta_query(uint8_t port, +int rte_eth_dev_rss_reta_query(uint16_t port, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); @@ -3719,8 +3718,8 @@ int rte_eth_dev_rss_reta_query(uint8_t port, * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. */ -int rte_eth_dev_uc_hash_table_set(uint8_t port,struct ether_addr *addr, - uint8_t on); +int rte_eth_dev_uc_hash_table_set(uint16_t port, struct ether_addr *addr, + uint8_t on); /** * Updates all unicast hash bitmaps for receiving packet with any Unicast @@ -3739,7 +3738,7 @@ int rte_eth_dev_uc_hash_table_set(uint8_t port,struct ether_addr *addr, * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. 
*/ -int rte_eth_dev_uc_all_hash_table_set(uint8_t port,uint8_t on); +int rte_eth_dev_uc_all_hash_table_set(uint16_t port, uint8_t on); /** * Set a traffic mirroring rule on an Ethernet device @@ -3762,7 +3761,7 @@ int rte_eth_dev_uc_all_hash_table_set(uint8_t port,uint8_t on); * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if the mr_conf information is not correct. */ -int rte_eth_mirror_rule_set(uint8_t port_id, +int rte_eth_mirror_rule_set(uint16_t port_id, struct rte_eth_mirror_conf *mirror_conf, uint8_t rule_id, uint8_t on); @@ -3780,7 +3779,7 @@ int rte_eth_mirror_rule_set(uint8_t port_id, * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. */ -int rte_eth_mirror_rule_reset(uint8_t port_id, +int rte_eth_mirror_rule_reset(uint16_t port_id, uint8_t rule_id); /** @@ -3798,7 +3797,7 @@ int rte_eth_mirror_rule_reset(uint8_t port_id, * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. */ -int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx, +int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, uint16_t tx_rate); /** @@ -3814,7 +3813,7 @@ int rte_eth_set_queue_rate_limit(uint8_t port_id, uint16_t queue_idx, * - (-ENOTSUP) if hardware doesn't support. * - (-EINVAL) if bad parameter. */ -int rte_eth_dev_rss_hash_update(uint8_t port_id, +int rte_eth_dev_rss_hash_update(uint16_t port_id, struct rte_eth_rss_conf *rss_conf); /** @@ -3831,7 +3830,7 @@ int rte_eth_dev_rss_hash_update(uint8_t port_id, * - (-ENOTSUP) if hardware doesn't support RSS. */ int -rte_eth_dev_rss_hash_conf_get(uint8_t port_id, +rte_eth_dev_rss_hash_conf_get(uint16_t port_id, struct rte_eth_rss_conf *rss_conf); /** @@ -3852,7 +3851,7 @@ rte_eth_dev_rss_hash_conf_get(uint8_t port_id, * - (-ENOTSUP) if hardware doesn't support tunnel type. */ int -rte_eth_dev_udp_tunnel_port_add(uint8_t port_id, +rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, struct rte_eth_udp_tunnel *tunnel_udp); /** @@ -3874,7 +3873,7 @@ rte_eth_dev_udp_tunnel_port_add(uint8_t port_id, * - (-ENOTSUP) if hardware doesn't support tunnel type. */ int -rte_eth_dev_udp_tunnel_port_delete(uint8_t port_id, +rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, struct rte_eth_udp_tunnel *tunnel_udp); /** @@ -3890,7 +3889,8 @@ rte_eth_dev_udp_tunnel_port_delete(uint8_t port_id, * - (-ENOTSUP) if hardware doesn't support this filter type. * - (-ENODEV) if *port_id* invalid. */ -int rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_type); +int rte_eth_dev_filter_supported(uint16_t port_id, + enum rte_filter_type filter_type); /** * Take operations to assigned filter type on an Ethernet device. @@ -3910,7 +3910,7 @@ int rte_eth_dev_filter_supported(uint8_t port_id, enum rte_filter_type filter_ty * - (-ENODEV) if *port_id* invalid. * - others depends on the specific operations implementation. */ -int rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type, +int rte_eth_dev_filter_ctrl(uint16_t port_id, enum rte_filter_type filter_type, enum rte_filter_op filter_op, void *arg); /** @@ -3925,7 +3925,7 @@ int rte_eth_dev_filter_ctrl(uint8_t port_id, enum rte_filter_type filter_type, * - (-ENODEV) if port identifier is invalid. * - (-ENOTSUP) if hardware doesn't support. */ -int rte_eth_dev_get_dcb_info(uint8_t port_id, +int rte_eth_dev_get_dcb_info(uint16_t port_id, struct rte_eth_dcb_info *dcb_info); /** @@ -3952,7 +3952,7 @@ int rte_eth_dev_get_dcb_info(uint8_t port_id, * NULL on error. 
* On success, a pointer value which can later be used to remove the callback. */ -void *rte_eth_add_rx_callback(uint8_t port_id, uint16_t queue_id, +void *rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id, rte_rx_callback_fn fn, void *user_param); /** @@ -3980,7 +3980,7 @@ void *rte_eth_add_rx_callback(uint8_t port_id, uint16_t queue_id, * NULL on error. * On success, a pointer value which can later be used to remove the callback. */ -void *rte_eth_add_first_rx_callback(uint8_t port_id, uint16_t queue_id, +void *rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id, rte_rx_callback_fn fn, void *user_param); /** @@ -4007,7 +4007,7 @@ void *rte_eth_add_first_rx_callback(uint8_t port_id, uint16_t queue_id, * NULL on error. * On success, a pointer value which can later be used to remove the callback. */ -void *rte_eth_add_tx_callback(uint8_t port_id, uint16_t queue_id, +void *rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id, rte_tx_callback_fn fn, void *user_param); /** @@ -4040,7 +4040,7 @@ void *rte_eth_add_tx_callback(uint8_t port_id, uint16_t queue_id, * - -EINVAL: The port_id or the queue_id is out of range, or the callback * is NULL or not found for the port/queue. */ -int rte_eth_remove_rx_callback(uint8_t port_id, uint16_t queue_id, +int rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id, struct rte_eth_rxtx_callback *user_cb); /** @@ -4073,7 +4073,7 @@ int rte_eth_remove_rx_callback(uint8_t port_id, uint16_t queue_id, * - -EINVAL: The port_id or the queue_id is out of range, or the callback * is NULL or not found for the port/queue. */ -int rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id, +int rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id, struct rte_eth_rxtx_callback *user_cb); /** @@ -4093,7 +4093,7 @@ int rte_eth_remove_tx_callback(uint8_t port_id, uint16_t queue_id, * - -ENOTSUP: routine is not supported by the device PMD. * - -EINVAL: The port_id or the queue_id is out of range. */ -int rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id, +int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); /** @@ -4113,7 +4113,7 @@ int rte_eth_rx_queue_info_get(uint8_t port_id, uint16_t queue_id, * - -ENOTSUP: routine is not supported by the device PMD. * - -EINVAL: The port_id or the queue_id is out of range. */ -int rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id, +int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, struct rte_eth_txq_info *qinfo); /** @@ -4132,7 +4132,7 @@ int rte_eth_tx_queue_info_get(uint8_t port_id, uint16_t queue_id, * - (-ENODEV) if *port_id* invalid. * - others depends on the specific operations implementation. */ -int rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info); +int rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info); /** * Retrieve size of device EEPROM @@ -4145,7 +4145,7 @@ int rte_eth_dev_get_reg_info(uint8_t port_id, struct rte_dev_reg_info *info); * - (-ENODEV) if *port_id* invalid. * - others depends on the specific operations implementation. */ -int rte_eth_dev_get_eeprom_length(uint8_t port_id); +int rte_eth_dev_get_eeprom_length(uint16_t port_id); /** * Retrieve EEPROM and EEPROM attribute @@ -4161,7 +4161,7 @@ int rte_eth_dev_get_eeprom_length(uint8_t port_id); * - (-ENODEV) if *port_id* invalid. * - others depends on the specific operations implementation. 
*/ -int rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info); +int rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info); /** * Program EEPROM with provided data @@ -4177,7 +4177,7 @@ int rte_eth_dev_get_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info); * - (-ENODEV) if *port_id* invalid. * - others depends on the specific operations implementation. */ -int rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info); +int rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info); /** * Set the list of multicast addresses to filter on an Ethernet device. @@ -4196,7 +4196,7 @@ int rte_eth_dev_set_eeprom(uint8_t port_id, struct rte_dev_eeprom_info *info); * - (-ENOTSUP) if PMD of *port_id* doesn't support multicast filtering. * - (-ENOSPC) if *port_id* has not enough multicast filtering resources. */ -int rte_eth_dev_set_mc_addr_list(uint8_t port_id, +int rte_eth_dev_set_mc_addr_list(uint16_t port_id, struct ether_addr *mc_addr_set, uint32_t nb_mc_addr); @@ -4211,7 +4211,7 @@ int rte_eth_dev_set_mc_addr_list(uint8_t port_id, * - -ENODEV: The port ID is invalid. * - -ENOTSUP: The function is not supported by the Ethernet driver. */ -int rte_eth_timesync_enable(uint8_t port_id); +int rte_eth_timesync_enable(uint16_t port_id); /** * Disable IEEE1588/802.1AS timestamping for an Ethernet device. @@ -4224,7 +4224,7 @@ int rte_eth_timesync_enable(uint8_t port_id); * - -ENODEV: The port ID is invalid. * - -ENOTSUP: The function is not supported by the Ethernet driver. */ -int rte_eth_timesync_disable(uint8_t port_id); +int rte_eth_timesync_disable(uint16_t port_id); /** * Read an IEEE1588/802.1AS RX timestamp from an Ethernet device. @@ -4243,7 +4243,7 @@ int rte_eth_timesync_disable(uint8_t port_id); * - -ENODEV: The port ID is invalid. * - -ENOTSUP: The function is not supported by the Ethernet driver. */ -int rte_eth_timesync_read_rx_timestamp(uint8_t port_id, +int rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, uint32_t flags); /** @@ -4260,7 +4260,7 @@ int rte_eth_timesync_read_rx_timestamp(uint8_t port_id, * - -ENODEV: The port ID is invalid. * - -ENOTSUP: The function is not supported by the Ethernet driver. */ -int rte_eth_timesync_read_tx_timestamp(uint8_t port_id, +int rte_eth_timesync_read_tx_timestamp(uint16_t port_id, struct timespec *timestamp); /** @@ -4279,7 +4279,7 @@ int rte_eth_timesync_read_tx_timestamp(uint8_t port_id, * - -ENODEV: The port ID is invalid. * - -ENOTSUP: The function is not supported by the Ethernet driver. */ -int rte_eth_timesync_adjust_time(uint8_t port_id, int64_t delta); +int rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta); /** * Read the time from the timesync clock on an Ethernet device. @@ -4295,7 +4295,7 @@ int rte_eth_timesync_adjust_time(uint8_t port_id, int64_t delta); * @return * - 0: Success. */ -int rte_eth_timesync_read_time(uint8_t port_id, struct timespec *time); +int rte_eth_timesync_read_time(uint16_t port_id, struct timespec *time); /** * Set the time of the timesync clock on an Ethernet device. @@ -4314,7 +4314,7 @@ int rte_eth_timesync_read_time(uint8_t port_id, struct timespec *time); * - -ENODEV: The port ID is invalid. * - -ENOTSUP: The function is not supported by the Ethernet driver. */ -int rte_eth_timesync_write_time(uint8_t port_id, const struct timespec *time); +int rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *time); /** * Create memzone for HW rings. 
@@ -4355,7 +4355,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *eth_dev, const char *name, * - (-ENOTSUP) if hardware doesn't support tunnel type. */ int -rte_eth_dev_l2_tunnel_eth_type_conf(uint8_t port_id, +rte_eth_dev_l2_tunnel_eth_type_conf(uint16_t port_id, struct rte_eth_l2_tunnel_conf *l2_tunnel); /** @@ -4382,7 +4382,7 @@ rte_eth_dev_l2_tunnel_eth_type_conf(uint8_t port_id, * - (-ENOTSUP) if hardware doesn't support tunnel type. */ int -rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id, +rte_eth_dev_l2_tunnel_offload_set(uint16_t port_id, struct rte_eth_l2_tunnel_conf *l2_tunnel, uint32_t mask, uint8_t en); @@ -4400,7 +4400,7 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id, * - (-ENODEV or -EINVAL) on failure. */ int -rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id); +rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id); /** * Get the device name from port id @@ -4414,7 +4414,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id); * - (-EINVAL) on failure. */ int -rte_eth_dev_get_name_by_port(uint8_t port_id, char *name); +rte_eth_dev_get_name_by_port(uint16_t port_id, char *name); /** * Check that numbers of Rx and Tx descriptors satisfy descriptors limits from @@ -4432,7 +4432,7 @@ rte_eth_dev_get_name_by_port(uint8_t port_id, char *name); * - (0) if successful. * - (-ENOTSUP, -ENODEV or -EINVAL) on failure. */ -int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, +int rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id, uint16_t *nb_rx_desc, uint16_t *nb_tx_desc); diff --git a/lib/librte_ether/rte_tm.c b/lib/librte_ether/rte_tm.c index 71679650e..ceac34115 100644 --- a/lib/librte_ether/rte_tm.c +++ b/lib/librte_ether/rte_tm.c @@ -40,7 +40,7 @@ /* Get generic traffic manager operations structure from a port. 
*/ const struct rte_tm_ops * -rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error) +rte_tm_ops_get(uint16_t port_id, struct rte_tm_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; const struct rte_tm_ops *ops; @@ -87,7 +87,7 @@ rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error) /* Get number of leaf nodes */ int -rte_tm_get_number_of_leaf_nodes(uint8_t port_id, +rte_tm_get_number_of_leaf_nodes(uint16_t port_id, uint32_t *n_leaf_nodes, struct rte_tm_error *error) { @@ -113,7 +113,7 @@ rte_tm_get_number_of_leaf_nodes(uint8_t port_id, /* Check node type (leaf or non-leaf) */ int -rte_tm_node_type_get(uint8_t port_id, +rte_tm_node_type_get(uint16_t port_id, uint32_t node_id, int *is_leaf, struct rte_tm_error *error) @@ -124,7 +124,7 @@ rte_tm_node_type_get(uint8_t port_id, } /* Get capabilities */ -int rte_tm_capabilities_get(uint8_t port_id, +int rte_tm_capabilities_get(uint16_t port_id, struct rte_tm_capabilities *cap, struct rte_tm_error *error) { @@ -134,7 +134,7 @@ int rte_tm_capabilities_get(uint8_t port_id, } /* Get level capabilities */ -int rte_tm_level_capabilities_get(uint8_t port_id, +int rte_tm_level_capabilities_get(uint16_t port_id, uint32_t level_id, struct rte_tm_level_capabilities *cap, struct rte_tm_error *error) @@ -145,7 +145,7 @@ int rte_tm_level_capabilities_get(uint8_t port_id, } /* Get node capabilities */ -int rte_tm_node_capabilities_get(uint8_t port_id, +int rte_tm_node_capabilities_get(uint16_t port_id, uint32_t node_id, struct rte_tm_node_capabilities *cap, struct rte_tm_error *error) @@ -156,7 +156,7 @@ int rte_tm_node_capabilities_get(uint8_t port_id, } /* Add WRED profile */ -int rte_tm_wred_profile_add(uint8_t port_id, +int rte_tm_wred_profile_add(uint16_t port_id, uint32_t wred_profile_id, struct rte_tm_wred_params *profile, struct rte_tm_error *error) @@ -167,7 +167,7 @@ int rte_tm_wred_profile_add(uint8_t port_id, } /* Delete WRED profile */ -int rte_tm_wred_profile_delete(uint8_t port_id, +int rte_tm_wred_profile_delete(uint16_t port_id, uint32_t wred_profile_id, struct rte_tm_error *error) { @@ -177,7 +177,7 @@ int rte_tm_wred_profile_delete(uint8_t port_id, } /* Add/update shared WRED context */ -int rte_tm_shared_wred_context_add_update(uint8_t port_id, +int rte_tm_shared_wred_context_add_update(uint16_t port_id, uint32_t shared_wred_context_id, uint32_t wred_profile_id, struct rte_tm_error *error) @@ -188,7 +188,7 @@ int rte_tm_shared_wred_context_add_update(uint8_t port_id, } /* Delete shared WRED context */ -int rte_tm_shared_wred_context_delete(uint8_t port_id, +int rte_tm_shared_wred_context_delete(uint16_t port_id, uint32_t shared_wred_context_id, struct rte_tm_error *error) { @@ -198,7 +198,7 @@ int rte_tm_shared_wred_context_delete(uint8_t port_id, } /* Add shaper profile */ -int rte_tm_shaper_profile_add(uint8_t port_id, +int rte_tm_shaper_profile_add(uint16_t port_id, uint32_t shaper_profile_id, struct rte_tm_shaper_params *profile, struct rte_tm_error *error) @@ -209,7 +209,7 @@ int rte_tm_shaper_profile_add(uint8_t port_id, } /* Delete WRED profile */ -int rte_tm_shaper_profile_delete(uint8_t port_id, +int rte_tm_shaper_profile_delete(uint16_t port_id, uint32_t shaper_profile_id, struct rte_tm_error *error) { @@ -219,7 +219,7 @@ int rte_tm_shaper_profile_delete(uint8_t port_id, } /* Add shared shaper */ -int rte_tm_shared_shaper_add_update(uint8_t port_id, +int rte_tm_shared_shaper_add_update(uint16_t port_id, uint32_t shared_shaper_id, uint32_t shaper_profile_id, struct rte_tm_error *error) @@ -230,7 
+230,7 @@ int rte_tm_shared_shaper_add_update(uint8_t port_id, } /* Delete shared shaper */ -int rte_tm_shared_shaper_delete(uint8_t port_id, +int rte_tm_shared_shaper_delete(uint16_t port_id, uint32_t shared_shaper_id, struct rte_tm_error *error) { @@ -240,7 +240,7 @@ int rte_tm_shared_shaper_delete(uint8_t port_id, } /* Add node to port traffic manager hierarchy */ -int rte_tm_node_add(uint8_t port_id, +int rte_tm_node_add(uint16_t port_id, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, @@ -256,7 +256,7 @@ int rte_tm_node_add(uint8_t port_id, } /* Delete node from traffic manager hierarchy */ -int rte_tm_node_delete(uint8_t port_id, +int rte_tm_node_delete(uint16_t port_id, uint32_t node_id, struct rte_tm_error *error) { @@ -266,7 +266,7 @@ int rte_tm_node_delete(uint8_t port_id, } /* Suspend node */ -int rte_tm_node_suspend(uint8_t port_id, +int rte_tm_node_suspend(uint16_t port_id, uint32_t node_id, struct rte_tm_error *error) { @@ -276,7 +276,7 @@ int rte_tm_node_suspend(uint8_t port_id, } /* Resume node */ -int rte_tm_node_resume(uint8_t port_id, +int rte_tm_node_resume(uint16_t port_id, uint32_t node_id, struct rte_tm_error *error) { @@ -286,7 +286,7 @@ int rte_tm_node_resume(uint8_t port_id, } /* Commit the initial port traffic manager hierarchy */ -int rte_tm_hierarchy_commit(uint8_t port_id, +int rte_tm_hierarchy_commit(uint16_t port_id, int clear_on_fail, struct rte_tm_error *error) { @@ -296,7 +296,7 @@ int rte_tm_hierarchy_commit(uint8_t port_id, } /* Update node parent */ -int rte_tm_node_parent_update(uint8_t port_id, +int rte_tm_node_parent_update(uint16_t port_id, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, @@ -309,7 +309,7 @@ int rte_tm_node_parent_update(uint8_t port_id, } /* Update node private shaper */ -int rte_tm_node_shaper_update(uint8_t port_id, +int rte_tm_node_shaper_update(uint16_t port_id, uint32_t node_id, uint32_t shaper_profile_id, struct rte_tm_error *error) @@ -320,7 +320,7 @@ int rte_tm_node_shaper_update(uint8_t port_id, } /* Update node shared shapers */ -int rte_tm_node_shared_shaper_update(uint8_t port_id, +int rte_tm_node_shared_shaper_update(uint16_t port_id, uint32_t node_id, uint32_t shared_shaper_id, int add, @@ -332,7 +332,7 @@ int rte_tm_node_shared_shaper_update(uint8_t port_id, } /* Update node stats */ -int rte_tm_node_stats_update(uint8_t port_id, +int rte_tm_node_stats_update(uint16_t port_id, uint32_t node_id, uint64_t stats_mask, struct rte_tm_error *error) @@ -343,7 +343,7 @@ int rte_tm_node_stats_update(uint8_t port_id, } /* Update WFQ weight mode */ -int rte_tm_node_wfq_weight_mode_update(uint8_t port_id, +int rte_tm_node_wfq_weight_mode_update(uint16_t port_id, uint32_t node_id, int *wfq_weight_mode, uint32_t n_sp_priorities, @@ -355,7 +355,7 @@ int rte_tm_node_wfq_weight_mode_update(uint8_t port_id, } /* Update node congestion management mode */ -int rte_tm_node_cman_update(uint8_t port_id, +int rte_tm_node_cman_update(uint16_t port_id, uint32_t node_id, enum rte_tm_cman_mode cman, struct rte_tm_error *error) @@ -366,7 +366,7 @@ int rte_tm_node_cman_update(uint8_t port_id, } /* Update node private WRED context */ -int rte_tm_node_wred_context_update(uint8_t port_id, +int rte_tm_node_wred_context_update(uint16_t port_id, uint32_t node_id, uint32_t wred_profile_id, struct rte_tm_error *error) @@ -377,7 +377,7 @@ int rte_tm_node_wred_context_update(uint8_t port_id, } /* Update node shared WRED context */ -int rte_tm_node_shared_wred_context_update(uint8_t port_id, +int 
rte_tm_node_shared_wred_context_update(uint16_t port_id, uint32_t node_id, uint32_t shared_wred_context_id, int add, @@ -389,7 +389,7 @@ int rte_tm_node_shared_wred_context_update(uint8_t port_id, } /* Read and/or clear stats counters for specific node */ -int rte_tm_node_stats_read(uint8_t port_id, +int rte_tm_node_stats_read(uint16_t port_id, uint32_t node_id, struct rte_tm_node_stats *stats, uint64_t *stats_mask, @@ -402,7 +402,7 @@ int rte_tm_node_stats_read(uint8_t port_id, } /* Packet marking - VLAN DEI */ -int rte_tm_mark_vlan_dei(uint8_t port_id, +int rte_tm_mark_vlan_dei(uint16_t port_id, int mark_green, int mark_yellow, int mark_red, @@ -414,7 +414,7 @@ int rte_tm_mark_vlan_dei(uint8_t port_id, } /* Packet marking - IPv4/IPv6 ECN */ -int rte_tm_mark_ip_ecn(uint8_t port_id, +int rte_tm_mark_ip_ecn(uint16_t port_id, int mark_green, int mark_yellow, int mark_red, @@ -426,7 +426,7 @@ int rte_tm_mark_ip_ecn(uint8_t port_id, } /* Packet marking - IPv4/IPv6 DSCP */ -int rte_tm_mark_ip_dscp(uint8_t port_id, +int rte_tm_mark_ip_dscp(uint16_t port_id, int mark_green, int mark_yellow, int mark_red, diff --git a/lib/librte_ether/rte_tm.h b/lib/librte_ether/rte_tm.h index ebbfa1eec..2b25a8715 100644 --- a/lib/librte_ether/rte_tm.h +++ b/lib/librte_ether/rte_tm.h @@ -1040,7 +1040,7 @@ struct rte_tm_error { * 0 on success, non-zero error code otherwise. */ int -rte_tm_get_number_of_leaf_nodes(uint8_t port_id, +rte_tm_get_number_of_leaf_nodes(uint16_t port_id, uint32_t *n_leaf_nodes, struct rte_tm_error *error); @@ -1064,7 +1064,7 @@ rte_tm_get_number_of_leaf_nodes(uint8_t port_id, * 0 on success, non-zero error code otherwise. */ int -rte_tm_node_type_get(uint8_t port_id, +rte_tm_node_type_get(uint16_t port_id, uint32_t node_id, int *is_leaf, struct rte_tm_error *error); @@ -1082,7 +1082,7 @@ rte_tm_node_type_get(uint8_t port_id, * 0 on success, non-zero error code otherwise. */ int -rte_tm_capabilities_get(uint8_t port_id, +rte_tm_capabilities_get(uint16_t port_id, struct rte_tm_capabilities *cap, struct rte_tm_error *error); @@ -1102,7 +1102,7 @@ rte_tm_capabilities_get(uint8_t port_id, * 0 on success, non-zero error code otherwise. */ int -rte_tm_level_capabilities_get(uint8_t port_id, +rte_tm_level_capabilities_get(uint16_t port_id, uint32_t level_id, struct rte_tm_level_capabilities *cap, struct rte_tm_error *error); @@ -1122,7 +1122,7 @@ rte_tm_level_capabilities_get(uint8_t port_id, * 0 on success, non-zero error code otherwise. 
*/ int -rte_tm_node_capabilities_get(uint8_t port_id, +rte_tm_node_capabilities_get(uint16_t port_id, uint32_t node_id, struct rte_tm_node_capabilities *cap, struct rte_tm_error *error); @@ -1147,7 +1147,7 @@ rte_tm_node_capabilities_get(uint8_t port_id, * @see struct rte_tm_capabilities::cman_wred_context_n_max */ int -rte_tm_wred_profile_add(uint8_t port_id, +rte_tm_wred_profile_add(uint16_t port_id, uint32_t wred_profile_id, struct rte_tm_wred_params *profile, struct rte_tm_error *error); @@ -1170,7 +1170,7 @@ rte_tm_wred_profile_add(uint8_t port_id, * @see struct rte_tm_capabilities::cman_wred_context_n_max */ int -rte_tm_wred_profile_delete(uint8_t port_id, +rte_tm_wred_profile_delete(uint16_t port_id, uint32_t wred_profile_id, struct rte_tm_error *error); @@ -1201,7 +1201,7 @@ rte_tm_wred_profile_delete(uint8_t port_id, * @see struct rte_tm_capabilities::cman_wred_context_shared_n_max */ int -rte_tm_shared_wred_context_add_update(uint8_t port_id, +rte_tm_shared_wred_context_add_update(uint16_t port_id, uint32_t shared_wred_context_id, uint32_t wred_profile_id, struct rte_tm_error *error); @@ -1225,7 +1225,7 @@ rte_tm_shared_wred_context_add_update(uint8_t port_id, * @see struct rte_tm_capabilities::cman_wred_context_shared_n_max */ int -rte_tm_shared_wred_context_delete(uint8_t port_id, +rte_tm_shared_wred_context_delete(uint16_t port_id, uint32_t shared_wred_context_id, struct rte_tm_error *error); @@ -1249,7 +1249,7 @@ rte_tm_shared_wred_context_delete(uint8_t port_id, * @see struct rte_tm_capabilities::shaper_n_max */ int -rte_tm_shaper_profile_add(uint8_t port_id, +rte_tm_shaper_profile_add(uint16_t port_id, uint32_t shaper_profile_id, struct rte_tm_shaper_params *profile, struct rte_tm_error *error); @@ -1272,7 +1272,7 @@ rte_tm_shaper_profile_add(uint8_t port_id, * @see struct rte_tm_capabilities::shaper_n_max */ int -rte_tm_shaper_profile_delete(uint8_t port_id, +rte_tm_shaper_profile_delete(uint16_t port_id, uint32_t shaper_profile_id, struct rte_tm_error *error); @@ -1301,7 +1301,7 @@ rte_tm_shaper_profile_delete(uint8_t port_id, * @see struct rte_tm_capabilities::shaper_shared_n_max */ int -rte_tm_shared_shaper_add_update(uint8_t port_id, +rte_tm_shared_shaper_add_update(uint16_t port_id, uint32_t shared_shaper_id, uint32_t shaper_profile_id, struct rte_tm_error *error); @@ -1324,7 +1324,7 @@ rte_tm_shared_shaper_add_update(uint8_t port_id, * @see struct rte_tm_capabilities::shaper_shared_n_max */ int -rte_tm_shared_shaper_delete(uint8_t port_id, +rte_tm_shared_shaper_delete(uint16_t port_id, uint32_t shared_shaper_id, struct rte_tm_error *error); @@ -1392,7 +1392,7 @@ rte_tm_shared_shaper_delete(uint8_t port_id, * @see struct rte_tm_capabilities */ int -rte_tm_node_add(uint8_t port_id, +rte_tm_node_add(uint16_t port_id, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, @@ -1425,7 +1425,7 @@ rte_tm_node_add(uint8_t port_id, * @see RTE_TM_UPDATE_NODE_ADD_DELETE */ int -rte_tm_node_delete(uint8_t port_id, +rte_tm_node_delete(uint16_t port_id, uint32_t node_id, struct rte_tm_error *error); @@ -1449,7 +1449,7 @@ rte_tm_node_delete(uint8_t port_id, * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME */ int -rte_tm_node_suspend(uint8_t port_id, +rte_tm_node_suspend(uint16_t port_id, uint32_t node_id, struct rte_tm_error *error); @@ -1472,7 +1472,7 @@ rte_tm_node_suspend(uint8_t port_id, * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME */ int -rte_tm_node_resume(uint8_t port_id, +rte_tm_node_resume(uint16_t port_id, uint32_t node_id, struct rte_tm_error *error); @@ -1513,7 +1513,7 @@ 
rte_tm_node_resume(uint8_t port_id, * @see rte_tm_node_delete() */ int -rte_tm_hierarchy_commit(uint8_t port_id, +rte_tm_hierarchy_commit(uint16_t port_id, int clear_on_fail, struct rte_tm_error *error); @@ -1549,7 +1549,7 @@ rte_tm_hierarchy_commit(uint8_t port_id, * @see RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL */ int -rte_tm_node_parent_update(uint8_t port_id, +rte_tm_node_parent_update(uint16_t port_id, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, @@ -1578,7 +1578,7 @@ rte_tm_node_parent_update(uint8_t port_id, * @see struct rte_tm_capabilities::shaper_private_n_max */ int -rte_tm_node_shaper_update(uint8_t port_id, +rte_tm_node_shaper_update(uint16_t port_id, uint32_t node_id, uint32_t shaper_profile_id, struct rte_tm_error *error); @@ -1605,7 +1605,7 @@ rte_tm_node_shaper_update(uint8_t port_id, * @see struct rte_tm_capabilities::shaper_shared_n_max */ int -rte_tm_node_shared_shaper_update(uint8_t port_id, +rte_tm_node_shared_shaper_update(uint16_t port_id, uint32_t node_id, uint32_t shared_shaper_id, int add, @@ -1632,7 +1632,7 @@ rte_tm_node_shared_shaper_update(uint8_t port_id, * @see RTE_TM_UPDATE_NODE_STATS */ int -rte_tm_node_stats_update(uint8_t port_id, +rte_tm_node_stats_update(uint16_t port_id, uint32_t node_id, uint64_t stats_mask, struct rte_tm_error *error); @@ -1660,7 +1660,7 @@ rte_tm_node_stats_update(uint8_t port_id, * @see RTE_TM_UPDATE_NODE_N_SP_PRIORITIES */ int -rte_tm_node_wfq_weight_mode_update(uint8_t port_id, +rte_tm_node_wfq_weight_mode_update(uint16_t port_id, uint32_t node_id, int *wfq_weight_mode, uint32_t n_sp_priorities, @@ -1683,7 +1683,7 @@ rte_tm_node_wfq_weight_mode_update(uint8_t port_id, * @see RTE_TM_UPDATE_NODE_CMAN */ int -rte_tm_node_cman_update(uint8_t port_id, +rte_tm_node_cman_update(uint16_t port_id, uint32_t node_id, enum rte_tm_cman_mode cman, struct rte_tm_error *error); @@ -1707,7 +1707,7 @@ rte_tm_node_cman_update(uint8_t port_id, * @see struct rte_tm_capabilities::cman_wred_context_private_n_max */ int -rte_tm_node_wred_context_update(uint8_t port_id, +rte_tm_node_wred_context_update(uint16_t port_id, uint32_t node_id, uint32_t wred_profile_id, struct rte_tm_error *error); @@ -1732,7 +1732,7 @@ rte_tm_node_wred_context_update(uint8_t port_id, * @see struct rte_tm_capabilities::cman_wred_context_shared_n_max */ int -rte_tm_node_shared_wred_context_update(uint8_t port_id, +rte_tm_node_shared_wred_context_update(uint16_t port_id, uint32_t node_id, uint32_t shared_wred_context_id, int add, @@ -1764,7 +1764,7 @@ rte_tm_node_shared_wred_context_update(uint8_t port_id, * @see enum rte_tm_stats_type */ int -rte_tm_node_stats_read(uint8_t port_id, +rte_tm_node_stats_read(uint16_t port_id, uint32_t node_id, struct rte_tm_node_stats *stats, uint64_t *stats_mask, @@ -1801,7 +1801,7 @@ rte_tm_node_stats_read(uint8_t port_id, * @see struct rte_tm_capabilities::mark_vlan_dei_supported */ int -rte_tm_mark_vlan_dei(uint8_t port_id, +rte_tm_mark_vlan_dei(uint16_t port_id, int mark_green, int mark_yellow, int mark_red, @@ -1851,7 +1851,7 @@ rte_tm_mark_vlan_dei(uint8_t port_id, * @see struct rte_tm_capabilities::mark_ip_ecn_sctp_supported */ int -rte_tm_mark_ip_ecn(uint8_t port_id, +rte_tm_mark_ip_ecn(uint16_t port_id, int mark_green, int mark_yellow, int mark_red, @@ -1899,7 +1899,7 @@ rte_tm_mark_ip_ecn(uint8_t port_id, * @see struct rte_tm_capabilities::mark_ip_dscp_supported */ int -rte_tm_mark_ip_dscp(uint8_t port_id, +rte_tm_mark_ip_dscp(uint16_t port_id, int mark_green, int mark_yellow, int mark_red, diff --git 
a/lib/librte_ether/rte_tm_driver.h b/lib/librte_ether/rte_tm_driver.h index a5b698fe0..b2e8ccf80 100644 --- a/lib/librte_ether/rte_tm_driver.h +++ b/lib/librte_ether/rte_tm_driver.h @@ -357,7 +357,7 @@ rte_tm_error_set(struct rte_tm_error *error, * success, NULL otherwise. */ const struct rte_tm_ops * -rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error); +rte_tm_ops_get(uint16_t port_id, struct rte_tm_error *error); #ifdef __cplusplus } diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h index 37deb4727..87812cd55 100644 --- a/lib/librte_kni/rte_kni.h +++ b/lib/librte_kni/rte_kni.h @@ -63,13 +63,13 @@ struct rte_mbuf; * Structure which has the function pointers for KNI interface. */ struct rte_kni_ops { - uint8_t port_id; /* Port ID */ + uint16_t port_id; /* Port ID */ /* Pointer to function of changing MTU */ - int (*change_mtu)(uint8_t port_id, unsigned new_mtu); + int (*change_mtu)(uint16_t port_id, unsigned int new_mtu); /* Pointer to function of configuring network interface */ - int (*config_network_if)(uint8_t port_id, uint8_t if_up); + int (*config_network_if)(uint16_t port_id, uint8_t if_up); }; /** diff --git a/lib/librte_latencystats/rte_latencystats.c b/lib/librte_latencystats/rte_latencystats.c index ce029a12c..d6ad13c4e 100644 --- a/lib/librte_latencystats/rte_latencystats.c +++ b/lib/librte_latencystats/rte_latencystats.c @@ -135,7 +135,7 @@ rte_latencystats_fill_values(struct rte_metric_value *values) } static uint16_t -add_time_stamps(uint8_t pid __rte_unused, +add_time_stamps(uint16_t pid __rte_unused, uint16_t qid __rte_unused, struct rte_mbuf **pkts, uint16_t nb_pkts, @@ -165,7 +165,7 @@ add_time_stamps(uint8_t pid __rte_unused, } static uint16_t -calc_latency(uint8_t pid __rte_unused, +calc_latency(uint16_t pid __rte_unused, uint16_t qid __rte_unused, struct rte_mbuf **pkts, uint16_t nb_pkts, @@ -226,10 +226,10 @@ rte_latencystats_init(uint64_t app_samp_intvl, rte_latency_stats_flow_type_fn user_cb) { unsigned int i; - uint8_t pid; + uint16_t pid; uint16_t qid; struct rxtx_cbs *cbs = NULL; - const uint8_t nb_ports = rte_eth_dev_count(); + const uint16_t nb_ports = rte_eth_dev_count(); const char *ptr_strings[NUM_LATENCY_STATS] = {0}; const struct rte_memzone *mz = NULL; const unsigned int flags = 0; @@ -290,11 +290,11 @@ rte_latencystats_init(uint64_t app_samp_intvl, int rte_latencystats_uninit(void) { - uint8_t pid; + uint16_t pid; uint16_t qid; int ret = 0; struct rxtx_cbs *cbs = NULL; - const uint8_t nb_ports = rte_eth_dev_count(); + const uint16_t nb_ports = rte_eth_dev_count(); /** De register Rx/Tx callbacks */ for (pid = 0; pid < nb_ports; pid++) { diff --git a/lib/librte_pdump/rte_pdump.c b/lib/librte_pdump/rte_pdump.c index 729e79a36..e6182d35c 100644 --- a/lib/librte_pdump/rte_pdump.c +++ b/lib/librte_pdump/rte_pdump.c @@ -207,7 +207,7 @@ pdump_copy(struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params) } static uint16_t -pdump_rx(uint8_t port __rte_unused, uint16_t qidx __rte_unused, +pdump_rx(uint16_t port __rte_unused, uint16_t qidx __rte_unused, struct rte_mbuf **pkts, uint16_t nb_pkts, uint16_t max_pkts __rte_unused, void *user_params) @@ -217,7 +217,7 @@ pdump_rx(uint8_t port __rte_unused, uint16_t qidx __rte_unused, } static uint16_t -pdump_tx(uint8_t port __rte_unused, uint16_t qidx __rte_unused, +pdump_tx(uint16_t port __rte_unused, uint16_t qidx __rte_unused, struct rte_mbuf **pkts, uint16_t nb_pkts, void *user_params) { pdump_copy(pkts, nb_pkts, user_params); @@ -225,7 +225,7 @@ pdump_tx(uint8_t port __rte_unused, 
uint16_t qidx __rte_unused, } static int -pdump_regitser_rx_callbacks(uint16_t end_q, uint8_t port, uint16_t queue, +pdump_regitser_rx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue, struct rte_ring *ring, struct rte_mempool *mp, uint16_t operation) { @@ -279,7 +279,7 @@ pdump_regitser_rx_callbacks(uint16_t end_q, uint8_t port, uint16_t queue, } static int -pdump_regitser_tx_callbacks(uint16_t end_q, uint8_t port, uint16_t queue, +pdump_regitser_tx_callbacks(uint16_t end_q, uint16_t port, uint16_t queue, struct rte_ring *ring, struct rte_mempool *mp, uint16_t operation) { @@ -337,7 +337,7 @@ static int set_pdump_rxtx_cbs(struct pdump_request *p) { uint16_t nb_rx_q = 0, nb_tx_q = 0, end_q, queue; - uint8_t port; + uint16_t port; int ret = 0; uint32_t flags; uint16_t operation; @@ -764,7 +764,7 @@ pdump_validate_flags(uint32_t flags) } static int -pdump_validate_port(uint8_t port, char *name) +pdump_validate_port(uint16_t port, char *name) { int ret = 0; @@ -828,7 +828,7 @@ pdump_prepare_client_request(char *device, uint16_t queue, } int -rte_pdump_enable(uint8_t port, uint16_t queue, uint32_t flags, +rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags, struct rte_ring *ring, struct rte_mempool *mp, void *filter) @@ -876,7 +876,7 @@ rte_pdump_enable_by_deviceid(char *device_id, uint16_t queue, } int -rte_pdump_disable(uint8_t port, uint16_t queue, uint32_t flags) +rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags) { int ret = 0; char name[DEVICE_ID_SIZE]; diff --git a/lib/librte_pdump/rte_pdump.h b/lib/librte_pdump/rte_pdump.h index ba6e39b09..4ec0a106f 100644 --- a/lib/librte_pdump/rte_pdump.h +++ b/lib/librte_pdump/rte_pdump.h @@ -113,7 +113,7 @@ rte_pdump_uninit(void); */ int -rte_pdump_enable(uint8_t port, uint16_t queue, uint32_t flags, +rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags, struct rte_ring *ring, struct rte_mempool *mp, void *filter); @@ -136,7 +136,7 @@ rte_pdump_enable(uint8_t port, uint16_t queue, uint32_t flags, */ int -rte_pdump_disable(uint8_t port, uint16_t queue, uint32_t flags); +rte_pdump_disable(uint16_t port, uint16_t queue, uint32_t flags); /** * Enables packet capturing on given device id and queue. 
diff --git a/lib/librte_port/rte_port_ethdev.c b/lib/librte_port/rte_port_ethdev.c index d5c5fba55..1dc949c73 100644 --- a/lib/librte_port/rte_port_ethdev.c +++ b/lib/librte_port/rte_port_ethdev.c @@ -60,7 +60,7 @@ struct rte_port_ethdev_reader { struct rte_port_in_stats stats; uint16_t queue_id; - uint8_t port_id; + uint16_t port_id; }; static void * @@ -94,8 +94,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) static int rte_port_ethdev_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts) { - struct rte_port_ethdev_reader *p = - port; + struct rte_port_ethdev_reader *p = port; uint16_t rx_pkt_cnt; rx_pkt_cnt = rte_eth_rx_burst(p->port_id, p->queue_id, pkts, n_pkts); @@ -119,8 +118,7 @@ rte_port_ethdev_reader_free(void *port) static int rte_port_ethdev_reader_stats_read(void *port, struct rte_port_in_stats *stats, int clear) { - struct rte_port_ethdev_reader *p = - port; + struct rte_port_ethdev_reader *p = port; if (stats != NULL) memcpy(stats, &p->stats, sizeof(p->stats)); @@ -156,7 +154,7 @@ struct rte_port_ethdev_writer { uint16_t tx_buf_count; uint64_t bsz_mask; uint16_t queue_id; - uint8_t port_id; + uint16_t port_id; }; static void * @@ -211,8 +209,7 @@ send_burst(struct rte_port_ethdev_writer *p) static int rte_port_ethdev_writer_tx(void *port, struct rte_mbuf *pkt) { - struct rte_port_ethdev_writer *p = - port; + struct rte_port_ethdev_writer *p = port; p->tx_buf[p->tx_buf_count++] = pkt; RTE_PORT_ETHDEV_WRITER_STATS_PKTS_IN_ADD(p, 1); @@ -227,8 +224,7 @@ rte_port_ethdev_writer_tx_bulk(void *port, struct rte_mbuf **pkts, uint64_t pkts_mask) { - struct rte_port_ethdev_writer *p = - port; + struct rte_port_ethdev_writer *p = port; uint64_t bsz_mask = p->bsz_mask; uint32_t tx_buf_count = p->tx_buf_count; uint64_t expr = (pkts_mask & (pkts_mask + 1)) | @@ -273,8 +269,7 @@ rte_port_ethdev_writer_tx_bulk(void *port, static int rte_port_ethdev_writer_flush(void *port) { - struct rte_port_ethdev_writer *p = - port; + struct rte_port_ethdev_writer *p = port; if (p->tx_buf_count > 0) send_burst(p); @@ -299,8 +294,7 @@ rte_port_ethdev_writer_free(void *port) static int rte_port_ethdev_writer_stats_read(void *port, struct rte_port_out_stats *stats, int clear) { - struct rte_port_ethdev_writer *p = - port; + struct rte_port_ethdev_writer *p = port; if (stats != NULL) memcpy(stats, &p->stats, sizeof(p->stats)); @@ -337,14 +331,13 @@ struct rte_port_ethdev_writer_nodrop { uint64_t bsz_mask; uint64_t n_retries; uint16_t queue_id; - uint8_t port_id; + uint16_t port_id; }; static void * rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) { - struct rte_port_ethdev_writer_nodrop_params *conf = - params; + struct rte_port_ethdev_writer_nodrop_params *conf = params; struct rte_port_ethdev_writer_nodrop *port; /* Check input parameters */ @@ -417,8 +410,7 @@ send_burst_nodrop(struct rte_port_ethdev_writer_nodrop *p) static int rte_port_ethdev_writer_nodrop_tx(void *port, struct rte_mbuf *pkt) { - struct rte_port_ethdev_writer_nodrop *p = - port; + struct rte_port_ethdev_writer_nodrop *p = port; p->tx_buf[p->tx_buf_count++] = pkt; RTE_PORT_ETHDEV_WRITER_NODROP_STATS_PKTS_IN_ADD(p, 1); @@ -433,8 +425,7 @@ rte_port_ethdev_writer_nodrop_tx_bulk(void *port, struct rte_mbuf **pkts, uint64_t pkts_mask) { - struct rte_port_ethdev_writer_nodrop *p = - port; + struct rte_port_ethdev_writer_nodrop *p = port; uint64_t bsz_mask = p->bsz_mask; uint32_t tx_buf_count = p->tx_buf_count; @@ -486,8 +477,7 @@ rte_port_ethdev_writer_nodrop_tx_bulk(void *port, static int 
rte_port_ethdev_writer_nodrop_flush(void *port) { - struct rte_port_ethdev_writer_nodrop *p = - port; + struct rte_port_ethdev_writer_nodrop *p = port; if (p->tx_buf_count > 0) send_burst_nodrop(p); @@ -512,8 +502,7 @@ rte_port_ethdev_writer_nodrop_free(void *port) static int rte_port_ethdev_writer_nodrop_stats_read(void *port, struct rte_port_out_stats *stats, int clear) { - struct rte_port_ethdev_writer_nodrop *p = - port; + struct rte_port_ethdev_writer_nodrop *p = port; if (stats != NULL) memcpy(stats, &p->stats, sizeof(p->stats)); diff --git a/lib/librte_port/rte_port_ethdev.h b/lib/librte_port/rte_port_ethdev.h index 201a79e41..f5ed9ab2d 100644 --- a/lib/librte_port/rte_port_ethdev.h +++ b/lib/librte_port/rte_port_ethdev.h @@ -54,7 +54,7 @@ extern "C" { /** ethdev_reader port parameters */ struct rte_port_ethdev_reader_params { /** NIC RX port ID */ - uint8_t port_id; + uint16_t port_id; /** NIC RX queue ID */ uint16_t queue_id; @@ -66,7 +66,7 @@ extern struct rte_port_in_ops rte_port_ethdev_reader_ops; /** ethdev_writer port parameters */ struct rte_port_ethdev_writer_params { /** NIC RX port ID */ - uint8_t port_id; + uint16_t port_id; /** NIC RX queue ID */ uint16_t queue_id; @@ -82,7 +82,7 @@ extern struct rte_port_out_ops rte_port_ethdev_writer_ops; /** ethdev_writer_nodrop port parameters */ struct rte_port_ethdev_writer_nodrop_params { /** NIC RX port ID */ - uint8_t port_id; + uint16_t port_id; /** NIC RX queue ID */ uint16_t queue_id; -- 2.13.3
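
For reference only (not part of the patch): a minimal sketch of how application code follows the wider port_id type. Port iterators and event callbacks must move to uint16_t together with the prototypes above. The sketch assumes a DPDK 17.x application with the EAL already initialized, and uses RTE_ETH_EVENT_INTR_LSC as the link-status event, which is not visible in the hunks above.

#include <stdio.h>
#include <inttypes.h>

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Callback matching the updated rte_eth_dev_cb_fn typedef: the first
 * parameter is now a 16-bit port id. */
static int
link_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	      void *cb_arg, void *ret_param)
{
	(void)event;
	(void)cb_arg;
	(void)ret_param;
	printf("link event on port %" PRIu16 "\n", port_id);
	return 0;
}

static void
register_link_callbacks(void)
{
	/* Port iterators must widen to uint16_t too, or they silently
	 * truncate once more than 255 ports exist. */
	uint16_t port_id;
	const uint16_t nb_ports = rte_eth_dev_count();
	struct ether_addr mac;
	char mac_str[ETHER_ADDR_FMT_SIZE];

	for (port_id = 0; port_id < nb_ports; port_id++) {
		rte_eth_macaddr_get(port_id, &mac);
		ether_format_addr(mac_str, ETHER_ADDR_FMT_SIZE, &mac);
		printf("port %" PRIu16 " MAC %s\n", port_id, mac_str);

		/* RTE_ETH_EVENT_INTR_LSC is assumed here; any event
		 * type works the same way. */
		rte_eth_dev_callback_register(port_id,
					      RTE_ETH_EVENT_INTR_LSC,
					      link_event_cb, NULL);
	}
}

The same pattern applies to any local variable or structure field that stores a port number, as done for the lib/driver structures in this patch.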
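
Likewise for the pdump API (again illustrative only, not part of the patch): a hypothetical helper that mirrors Rx traffic of queue 0 on one port into a ring, assuming the pdump framework has already been initialized with rte_pdump_init() and using the RTE_PDUMP_FLAG_RX flag from rte_pdump.h.

#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_ring.h>
#include <rte_pdump.h>

/* Hypothetical helper: copy Rx traffic of queue 0 on one port into a
 * ring for a capture consumer. */
static int
capture_port_rx(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
	struct rte_ring *ring;

	ring = rte_ring_create("pdump_rx_ring", 1024, rte_socket_id(),
			       RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (ring == NULL)
		return -1;

	/* rte_pdump_enable() now takes the port as uint16_t as well. */
	return rte_pdump_enable(port_id, 0, RTE_PDUMP_FLAG_RX,
				ring, mbuf_pool, NULL);
}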