From mboxrd@z Thu Jan  1 00:00:00 1970
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Mon, 19 May 2014 15:56:22 +0200
Message-Id: <1400507789-18453-11-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 1.9.2
In-Reply-To: <1400507789-18453-1-git-send-email-olivier.matz@6wind.com>
References: <1400507789-18453-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH v2 10/17] mbuf: rename vlan_macip_len to hw_offload and increase its size
List-Id: patches and discussions about DPDK

To implement TCP segmentation offload, we will need to store more
metadata in the mbuf, such as the length of the L4 header, the MSS, ...

To prepare for this modification, this patch renames vlan_macip_len to
hw_offload and increases its size from 32 bits to 64 bits.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
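[Note for reviewers, not part of the commit message: the snippet below
is a minimal standalone sketch of the new layout. The union and the
HW_OFFLOAD_*_MASK defines are copied from the rte_mbuf.h hunk of this
patch; the header lengths (14, 20) and the VLAN tag are made-up example
values. It illustrates the point of the 64-bit word: a TX path can
cache and compare all offload metadata with a single mask/XOR on .u64,
which is the shape of the new what_ctx_update() below.]

#include <stdint.h>
#include <stdio.h>

#define HW_OFFLOAD_L2_LEN_MASK 0x7f
#define HW_OFFLOAD_L3_LEN_MASK 0x1ff

/* same layout as the rte_mbuf.h hunk (anonymous struct: C11/GCC) */
union rte_hw_offload {
	uint64_t u64;
	struct {
		uint32_t l2_len:7;   /**< L2 (MAC) Header Length. */
		uint32_t l3_len:9;   /**< L3 (IP) Header Length. */
		uint32_t reserved:16;
		uint16_t vlan_tci;   /**< VLAN Tag Control Identifier. */
		uint16_t reserved2;
	};
};

int main(void)
{
	union rte_hw_offload pkt, cached, mask;

	/* what an application stores in the mbuf before transmit
	 * (14/20 stand for Ethernet/IPv4 header sizes) */
	pkt.u64 = 0;
	pkt.l2_len = 14;
	pkt.l3_len = 20;
	pkt.vlan_tci = 0x123;

	/* what a PMD caches after building an IP checksum context:
	 * only the l2/l3 lengths take part in the comparison */
	mask.u64 = 0;
	mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
	mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
	cached.u64 = pkt.u64 & mask.u64;

	/* context reuse check: one 64-bit XOR+AND replaces the old
	 * TX_*_CMP_MASK constants on the 32-bit .data word */
	if (((cached.u64 ^ pkt.u64) & mask.u64) == 0)
		printf("context descriptor can be reused\n");
	return 0;
}

A side benefit over the removed TX_*_CMP_MASK constants is that the
masks now live next to the fields they guard, so adding l4_len and mss
for TSO later should not require new ad-hoc constants.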
 app/test-pmd/csumonly.c               |  4 +--
 app/test-pmd/flowgen.c                |  6 ++--
 app/test-pmd/macfwd.c                 |  6 ++--
 app/test-pmd/macswap.c                |  6 ++--
 app/test-pmd/rxonly.c                 |  2 +-
 app/test-pmd/testpmd.c                |  2 +-
 app/test-pmd/txonly.c                 |  6 ++--
 examples/ip_reassembly/ipv4_rsmbl.h   | 10 +++----
 examples/ip_reassembly/main.c         |  4 +--
 lib/librte_mbuf/rte_mbuf.h            | 34 ++++++++++-----------
 lib/librte_pmd_e1000/em_rxtx.c        | 50 +++++++++++++++++--------------
 lib/librte_pmd_e1000/igb_rxtx.c       | 56 ++++++++++++++++++++---------------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c     | 54 +++++++++++++++++++--------------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.h     |  3 +-
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c |  4 +--
 15 files changed, 132 insertions(+), 115 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 69b90a7..9caad8f 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -430,8 +430,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		}
 
 		/* Combine the packet header write. VLAN is not consider here */
-		mb->vlan_macip.f.l2_len = l2_len;
-		mb->vlan_macip.f.l3_len = l3_len;
+		mb->hw_offload.l2_len = l2_len;
+		mb->hw_offload.l3_len = l3_len;
 		mb->ol_flags = ol_flags;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index d69b2b8..14e43b5 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -208,9 +208,9 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 		pkt->nb_segs = 1;
 		pkt->pkt_len = pkt_size;
 		pkt->ol_flags = ol_flags;
-		pkt->vlan_macip.f.vlan_tci = vlan_tci;
-		pkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
-		pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+		pkt->hw_offload.vlan_tci = vlan_tci;
+		pkt->hw_offload.l2_len = sizeof(struct ether_hdr);
+		pkt->hw_offload.l3_len = sizeof(struct ipv4_hdr);
 		pkts_burst[nb_pkt] = pkt;
 
 		next_flow = (next_flow + 1) % cfg_n_flows;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ab74d0c..d137f92 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -116,9 +116,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 		ether_addr_copy(&ports[fs->tx_port].eth_addr,
 				&eth_hdr->s_addr);
 		mb->ol_flags = txp->tx_ol_flags;
-		mb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
-		mb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
-		mb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;
+		mb->hw_offload.l2_len = sizeof(struct ether_hdr);
+		mb->hw_offload.l3_len = sizeof(struct ipv4_hdr);
+		mb->hw_offload.vlan_tci = txp->tx_vlan_id;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
 	fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index d274b36..b1b2324 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -118,9 +118,9 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
 		ether_addr_copy(&addr, &eth_hdr->s_addr);
 		mb->ol_flags = txp->tx_ol_flags;
-		mb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
-		mb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
-		mb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;
+		mb->hw_offload.l2_len = sizeof(struct ether_hdr);
+		mb->hw_offload.l3_len = sizeof(struct ipv4_hdr);
+		mb->hw_offload.vlan_tci = txp->tx_vlan_id;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
 	fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 0bf4440..6283482 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -149,7 +149,7 @@ pkt_burst_receive(struct fwd_stream *fs)
 			       mb->hash.fdir.hash, mb->hash.fdir.id);
 		if (ol_flags & PKT_RX_VLAN_PKT)
 			printf(" - VLAN tci=0x%x",
-			       mb->vlan_macip.f.vlan_tci);
+			       mb->hw_offload.vlan_tci);
 		printf("\n");
 		if (ol_flags != 0) {
 			uint32_t rxf;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 92e2729..ec3a522 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -406,7 +406,7 @@ testpmd_mbuf_ctor(struct rte_mempool *mp,
 	mb->ol_flags = 0;
 	mb->data_off = RTE_PKTMBUF_HEADROOM;
 	mb->nb_segs = 1;
-	mb->vlan_macip.data = 0;
+	mb->hw_offload.u64 = 0;
 	mb->hash.rss = 0;
 }
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index b2d8dbd..0f3722f 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -264,9 +264,9 @@ pkt_burst_transmit(struct fwd_stream *fs)
 		pkt->nb_segs = tx_pkt_nb_segs;
 		pkt->pkt_len = tx_pkt_length;
 		pkt->ol_flags = ol_flags;
-		pkt->vlan_macip.f.vlan_tci = vlan_tci;
-		pkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
-		pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+		pkt->hw_offload.vlan_tci = vlan_tci;
+		pkt->hw_offload.l2_len = sizeof(struct ether_hdr);
+		pkt->hw_offload.l3_len = sizeof(struct ipv4_hdr);
 		pkts_burst[nb_pkt] = pkt;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
diff --git a/examples/ip_reassembly/ipv4_rsmbl.h b/examples/ip_reassembly/ipv4_rsmbl.h
index 9b647fb..c653993 100644
--- a/examples/ip_reassembly/ipv4_rsmbl.h
+++ b/examples/ip_reassembly/ipv4_rsmbl.h
@@ -168,8 +168,8 @@ ipv4_frag_chain(struct rte_mbuf *mn, struct rte_mbuf *mp)
 	struct rte_mbuf *ms;
 
 	/* adjust start of the last fragment data. */
-	rte_pktmbuf_adj(mp, (uint16_t)(mp->vlan_macip.f.l2_len +
-		mp->vlan_macip.f.l3_len));
+	rte_pktmbuf_adj(mp, (uint16_t)(mp->hw_offload.l2_len +
+		mp->hw_offload.l3_len));
 
 	/* chain two fragments. */
 	ms = rte_pktmbuf_lastseg(mn);
@@ -233,10 +233,10 @@ ipv4_frag_reassemble(const struct ipv4_frag_pkt *fp)
 
 	/* update ipv4 header for the reassmebled packet */
 	ip_hdr = (struct ipv4_hdr*)(rte_pktmbuf_mtod(m, uint8_t *) +
-		m->vlan_macip.f.l2_len);
+		m->hw_offload.l2_len);
 
 	ip_hdr->total_length = rte_cpu_to_be_16((uint16_t)(fp->total_size +
-		m->vlan_macip.f.l3_len));
+		m->hw_offload.l3_len));
 	ip_hdr->fragment_offset = (uint16_t)(ip_hdr->fragment_offset &
 		rte_cpu_to_be_16(IPV4_HDR_DF_FLAG));
 	ip_hdr->hdr_checksum = 0;
@@ -377,7 +377,7 @@ ipv4_frag_mbuf(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
 	ip_ofs *= IPV4_HDR_OFFSET_UNITS;
 	ip_len = (uint16_t)(rte_be_to_cpu_16(ip_hdr->total_length) -
-		mb->vlan_macip.f.l3_len);
+		mb->hw_offload.l3_len);
 
 	IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
 		"mbuf: %p, tms: %" PRIu64
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 5c5626a..a817d3d 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -680,8 +680,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
 		dr = &qconf->death_row;
 
 		/* prepare mbuf: setup l2_len/l3_len. */
-		m->vlan_macip.f.l2_len = sizeof(*eth_hdr);
-		m->vlan_macip.f.l3_len = sizeof(*ipv4_hdr);
+		m->hw_offload.l2_len = sizeof(*eth_hdr);
+		m->hw_offload.l3_len = sizeof(*ipv4_hdr);
 
 		/* process this fragment. */
 		if ((mo = ipv4_frag_mbuf(tbl, dr, m, tms, ipv4_hdr,
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 540a62c..1e63511 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -149,26 +149,22 @@ static inline const char *rte_get_tx_ol_flag_name(uint32_t mask)
 }
 
 /** Offload features */
-union rte_vlan_macip {
-	uint32_t data;
+union rte_hw_offload {
+	uint64_t u64;
 	struct {
-		uint16_t l3_len:9; /**< L3 (IP) Header Length. */
-		uint16_t l2_len:7; /**< L2 (MAC) Header Length. */
+#define HW_OFFLOAD_L2_LEN_MASK 0x7f
+#define HW_OFFLOAD_L3_LEN_MASK 0x1ff
+#define HW_OFFLOAD_L4_LEN_MASK 0xff
+		uint32_t l2_len:7; /**< L2 (MAC) Header Length. */
+		uint32_t l3_len:9; /**< L3 (IP) Header Length. */
+		uint32_t reserved:16;
 		uint16_t vlan_tci;
 		/**< VLAN Tag Control Identifier (CPU order). */
-	} f;
+		uint16_t reserved2;
+	};
 };
 
-/*
- * Compare mask for vlan_macip_len.data,
- * should be in sync with rte_vlan_macip.f layout.
- * */
-#define TX_VLAN_CMP_MASK 0xFFFF0000 /**< VLAN length - 16-bits. */
-#define TX_MAC_LEN_CMP_MASK 0x0000FE00 /**< MAC length - 7-bits. */
-#define TX_IP_LEN_CMP_MASK 0x000001FF /**< IP length - 9-bits. */
-/**< MAC+IP length. */
-#define TX_MACIP_LEN_CMP_MASK (TX_MAC_LEN_CMP_MASK | TX_IP_LEN_CMP_MASK)
-
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -207,7 +203,7 @@ struct rte_mbuf {
 	uint32_t ol_flags;      /**< Offload features. */
 
 	/* offload features, valid for first segment only */
-	union rte_vlan_macip vlan_macip;
+	union rte_hw_offload hw_offload;
 	union {
 		uint32_t rss;       /**< RSS hash result if RSS enabled */
 		struct {
@@ -216,7 +212,7 @@ struct rte_mbuf {
 		} fdir;             /**< Filter identifier if FDIR enabled */
 		uint32_t sched;     /**< Hierarchical scheduler */
 	} hash;                 /**< hash information */
-	uint64_t reserved;      /**< Unused field. Required for padding. */
+	uint32_t reserved;      /**< Unused field. Required for padding. */
 } __rte_cache_aligned;
 
 /**
@@ -483,7 +479,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 {
 	m->next = NULL;
 	m->pkt_len = 0;
-	m->vlan_macip.data = 0;
+	m->hw_offload.u64 = 0;
 	m->nb_segs = 1;
 	m->in_port = 0xff;
@@ -549,7 +545,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
 	mi->data_off = md->data_off;
 	mi->data_len = md->data_len;
 	mi->in_port = md->in_port;
-	mi->vlan_macip = md->vlan_macip;
+	mi->hw_offload.u64 = md->hw_offload.u64;
 	mi->hash = md->hash;
 	mi->next = NULL;
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 1a34340..8870ccc 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -148,8 +148,8 @@ enum {
  */
 struct em_ctx_info {
 	uint32_t flags;               /**< ol_flags related to context build. */
-	uint32_t cmp_mask;            /**< compare mask */
-	union rte_vlan_macip hdrlen;  /**< L2 and L3 header lenghts */
+	union rte_hw_offload hw_offload;    /**< l2/l3/l4 length, vlan, mss. */
+	union rte_hw_offload offload_mask;  /**< compare mask for hw_offload */
 };
 
 /**
@@ -217,18 +217,18 @@ struct em_tx_queue {
 static inline void
 em_set_xmit_ctx(struct em_tx_queue* txq,
 		volatile struct e1000_context_desc *ctx_txd,
-		uint32_t flags,
-		union rte_vlan_macip hdrlen)
+		uint32_t flags, union rte_hw_offload hw_offload)
 {
-	uint32_t cmp_mask, cmd_len;
+	uint32_t cmd_len;
 	uint16_t ipcse, l2len;
 	struct e1000_context_desc ctx;
+	union rte_hw_offload offload_mask;
 
-	cmp_mask = 0;
+	offload_mask.u64 = 0;
 	cmd_len = E1000_TXD_CMD_DEXT | E1000_TXD_DTYP_C;
 
-	l2len = hdrlen.f.l2_len;
-	ipcse = (uint16_t)(l2len + hdrlen.f.l3_len);
+	l2len = hw_offload.l2_len;
+	ipcse = (uint16_t)(l2len + hw_offload.l3_len);
 
 	/* setup IPCS* fields */
 	ctx.lower_setup.ip_fields.ipcss = (uint8_t)l2len;
@@ -243,7 +243,8 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 		ctx.lower_setup.ip_fields.ipcse =
 			(uint16_t)rte_cpu_to_le_16(ipcse - 1);
 		cmd_len |= E1000_TXD_CMD_IP;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 	} else {
 		ctx.lower_setup.ip_fields.ipcse = 0;
 	}
@@ -256,13 +257,15 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	case PKT_TX_UDP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct udp_hdr, dgram_cksum));
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_TCP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct tcp_hdr, cksum));
 		cmd_len |= E1000_TXD_CMD_TCP;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	default:
 		ctx.upper_setup.tcp_fields.tucso = 0;
@@ -274,8 +277,9 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	*ctx_txd = ctx;
 
 	txq->ctx_cache.flags = flags;
-	txq->ctx_cache.cmp_mask = cmp_mask;
-	txq->ctx_cache.hdrlen = hdrlen;
+	txq->ctx_cache.hw_offload.u64 =
+		offload_mask.u64 & hw_offload.u64;
+	txq->ctx_cache.offload_mask = offload_mask;
 }
 
 /*
@@ -284,12 +288,12 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
  */
 static inline uint32_t
 what_ctx_update(struct em_tx_queue *txq, uint32_t flags,
-		union rte_vlan_macip hdrlen)
+		union rte_hw_offload hw_offload)
 {
 	/* If match with the current context */
 	if (likely (txq->ctx_cache.flags == flags &&
-			((txq->ctx_cache.hdrlen.data ^ hdrlen.data) &
-			txq->ctx_cache.cmp_mask) == 0))
+			((txq->ctx_cache.hw_offload.u64 ^ hw_offload.u64) &
+			txq->ctx_cache.offload_mask.u64) == 0))
 		return (EM_CTX_0);
 
 	/* Mismatch */
@@ -390,7 +394,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t tx_ol_req;
 	uint32_t ctx;
 	uint32_t new_ctx;
-	union rte_vlan_macip hdrlen;
+	union rte_hw_offload hw_offload;
 
 	txq = tx_queue;
 	sw_ring = txq->sw_ring;
@@ -419,9 +423,9 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* If hardware offload required */
 		tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
 		if (tx_ol_req) {
-			hdrlen = tx_pkt->vlan_macip;
+			hw_offload.u64 = tx_pkt->hw_offload.u64;
 			/* If new context to be built or reuse the exist ctx. */
-			ctx = what_ctx_update(txq, tx_ol_req, hdrlen);
+			ctx = what_ctx_update(txq, tx_ol_req, hw_offload);
 
 			/* Only allocate context descriptor if required*/
 			new_ctx = (ctx == EM_CTX_NUM);
@@ -514,7 +518,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			/* Set VLAN Tag offload fields. */
 			if (ol_flags & PKT_TX_VLAN_PKT) {
 				cmd_type_len |= E1000_TXD_CMD_VLE;
-				popts_spec = tx_pkt->vlan_macip.f.vlan_tci <<
+				popts_spec = tx_pkt->hw_offload.vlan_tci <<
 					E1000_TXD_VLAN_SHIFT;
 			}
 
@@ -537,7 +541,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				}
 
 				em_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
-					hdrlen);
+					hw_offload);
 
 				txe->last_id = tx_last;
 				tx_id = txe->next_id;
@@ -782,7 +786,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			rx_desc_error_to_pkt_flags(rxd.errors);
 
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+		rxm->hw_offload.vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/*
 		 * Store the mbuf address into the next entry of the array
@@ -1008,7 +1012,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			rx_desc_error_to_pkt_flags(rxd.errors);
 
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+		rxm->hw_offload.vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/* Prefetch data of first segment, if configured to do so. */
 		rte_packet_prefetch((char *)first_seg->buf_addr +
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 322dfa0..2db496f 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -145,8 +145,8 @@ enum igb_advctx_num {
  */
 struct igb_advctx_info {
 	uint32_t flags;       /**< ol_flags related to context build. */
-	uint32_t cmp_mask;    /**< compare mask for vlan_macip_lens */
-	union rte_vlan_macip vlan_macip_lens; /**< vlan, mac & ip length. */
+	union rte_hw_offload hw_offload;    /**< l2/l3/l4 length, vlan, mss. */
+	union rte_hw_offload offload_mask;  /**< compare mask for hw_offload */
 };
 
 /**
@@ -212,26 +212,28 @@ struct igb_tx_queue {
 static inline void
 igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		volatile struct e1000_adv_tx_context_desc *ctx_txd,
-		uint32_t ol_flags, uint32_t vlan_macip_lens)
+		uint32_t ol_flags, union rte_hw_offload hw_offload)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx;
 	uint32_t ctx_idx, ctx_curr;
-	uint32_t cmp_mask;
+	uint32_t vlan_macip_lens;
+	union rte_hw_offload offload_mask;
 
 	ctx_curr = txq->ctx_curr;
 	ctx_idx = ctx_curr + txq->ctx_start;
 
-	cmp_mask = 0;
+	offload_mask.u64 = 0;
 	type_tucmd_mlhl = 0;
 
 	if (ol_flags & PKT_TX_VLAN_PKT) {
-		cmp_mask |= TX_VLAN_CMP_MASK;
+		offload_mask.vlan_tci = 0xffff;
 	}
 
 	if (ol_flags & PKT_TX_IP_CKSUM) {
 		type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 	}
 
 	/* Specify which HW CTX to upload. */
@@ -241,19 +243,22 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	case PKT_TX_UDP_CKSUM:
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_UDP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct udp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_TCP_CKSUM:
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct tcp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_SCTP_CKSUM:
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_SCTP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct sctp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	default:
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_RSV |
@@ -262,11 +267,14 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	}
 
 	txq->ctx_cache[ctx_curr].flags = ol_flags;
-	txq->ctx_cache[ctx_curr].cmp_mask = cmp_mask;
-	txq->ctx_cache[ctx_curr].vlan_macip_lens.data =
-		vlan_macip_lens & cmp_mask;
+	txq->ctx_cache[ctx_curr].hw_offload.u64 =
+		offload_mask.u64 & hw_offload.u64;
+	txq->ctx_cache[ctx_curr].offload_mask = offload_mask;
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
+	vlan_macip_lens = hw_offload.l3_len;
+	vlan_macip_lens |= (hw_offload.l2_len << E1000_ADVTXD_MACLEN_SHIFT);
+	vlan_macip_lens |= ((uint32_t)hw_offload.vlan_tci << E1000_ADVTXD_VLAN_SHIFT);
 	ctx_txd->vlan_macip_lens = rte_cpu_to_le_32(vlan_macip_lens);
 	ctx_txd->mss_l4len_idx = rte_cpu_to_le_32(mss_l4len_idx);
 	ctx_txd->seqnum_seed = 0;
@@ -278,20 +286,20 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
  */
 static inline uint32_t
 what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
-		uint32_t vlan_macip_lens)
+		union rte_hw_offload hw_offload)
 {
 	/* If match with the current context */
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
 	/* If match with the second context */
 	txq->ctx_curr ^= 1;
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
@@ -342,7 +350,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t tx_ol_req;
 	uint32_t new_ctx = 0;
 	uint32_t ctx = 0;
-	uint32_t vlan_macip_lens;
+	union rte_hw_offload hw_offload;
 
 	txq = tx_queue;
 	sw_ring = txq->sw_ring;
@@ -367,14 +375,14 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_last = (uint16_t) (tx_id + tx_pkt->nb_segs - 1);
 
 		ol_flags = tx_pkt->ol_flags;
-		vlan_macip_lens = tx_pkt->vlan_macip.data;
+		hw_offload.u64 = tx_pkt->hw_offload.u64;
 		tx_ol_req = ol_flags &
 			(PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK);
 
 		/* If a Context Descriptor need be built . */
 		if (tx_ol_req) {
 			ctx = what_advctx_update(txq, tx_ol_req,
-				vlan_macip_lens);
+				hw_offload);
 			/* Only allocate context descriptor if required*/
 			new_ctx = (ctx == IGB_CTX_NUM);
 			ctx = txq->ctx_curr;
@@ -490,7 +498,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				}
 
 				igbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
-					vlan_macip_lens);
+					hw_offload);
 
 				txe->last_id = tx_last;
 				tx_id = txe->next_id;
@@ -752,7 +760,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci =
+		rxm->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -989,7 +997,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
 		 * set in the pkt_flags field.
 		 */
-		first_seg->vlan_macip.f.vlan_tci =
+		first_seg->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 0ff1a07..e1eb59d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -350,24 +350,26 @@ ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 static inline void
 ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
-		uint32_t ol_flags, uint32_t vlan_macip_lens)
+		uint32_t ol_flags, union rte_hw_offload hw_offload)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx;
 	uint32_t ctx_idx;
-	uint32_t cmp_mask;
+	uint32_t vlan_macip_lens;
+	union rte_hw_offload offload_mask;
 
 	ctx_idx = txq->ctx_curr;
-	cmp_mask = 0;
+	offload_mask.u64 = 0;
 	type_tucmd_mlhl = 0;
 
 	if (ol_flags & PKT_TX_VLAN_PKT) {
-		cmp_mask |= TX_VLAN_CMP_MASK;
+		offload_mask.vlan_tci = 0xffff;
 	}
 
 	if (ol_flags & PKT_TX_IP_CKSUM) {
 		type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 	}
 
 	/* Specify which HW CTX to upload. */
@@ -377,19 +379,22 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	case PKT_TX_UDP_CKSUM:
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct udp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_TCP_CKSUM:
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct tcp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_SCTP_CKSUM:
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct sctp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	default:
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_RSV |
@@ -398,11 +403,14 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	}
 
 	txq->ctx_cache[ctx_idx].flags = ol_flags;
-	txq->ctx_cache[ctx_idx].cmp_mask = cmp_mask;
-	txq->ctx_cache[ctx_idx].vlan_macip_lens.data =
-		vlan_macip_lens & cmp_mask;
+	txq->ctx_cache[ctx_idx].hw_offload.u64 =
+		offload_mask.u64 & hw_offload.u64;
+	txq->ctx_cache[ctx_idx].offload_mask = offload_mask;
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
+	vlan_macip_lens = hw_offload.l3_len;
+	vlan_macip_lens |= (hw_offload.l2_len << IXGBE_ADVTXD_MACLEN_SHIFT);
+	vlan_macip_lens |= ((uint32_t)hw_offload.vlan_tci << IXGBE_ADVTXD_VLAN_SHIFT);
 	ctx_txd->vlan_macip_lens = rte_cpu_to_le_32(vlan_macip_lens);
 	ctx_txd->mss_l4len_idx = rte_cpu_to_le_32(mss_l4len_idx);
 	ctx_txd->seqnum_seed = 0;
@@ -414,20 +422,20 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
  */
 static inline uint32_t
 what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
-		uint32_t vlan_macip_lens)
+		union rte_hw_offload hw_offload)
 {
 	/* If match with the current used context */
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
 	/* What if match with the next context  */
 	txq->ctx_curr ^= 1;
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
@@ -543,9 +551,9 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_tx;
 	uint16_t nb_used;
 	uint32_t tx_ol_req;
-	uint32_t vlan_macip_lens;
 	uint32_t ctx = 0;
 	uint32_t new_ctx;
+	union rte_hw_offload hw_offload;
 
 	txq = tx_queue;
 	sw_ring = txq->sw_ring;
@@ -571,7 +579,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * are needed for offload functionality.
 		 */
 		ol_flags = tx_pkt->ol_flags;
-		vlan_macip_lens = tx_pkt->vlan_macip.data;
+		hw_offload.u64 = tx_pkt->hw_offload.u64;
 
 		/* If hardware offload required */
 		tx_ol_req = ol_flags &
@@ -579,7 +587,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (tx_ol_req) {
 			/* If new context need be built or reuse the exist ctx. */
 			ctx = what_advctx_update(txq, tx_ol_req,
-				vlan_macip_lens);
+				hw_offload);
 			/* Only allocate context descriptor if required*/
 			new_ctx = (ctx == IXGBE_CTX_NUM);
 			ctx = txq->ctx_curr;
@@ -721,7 +729,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				}
 
 				ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
-					vlan_macip_lens);
+					hw_offload);
 
 				txe->last_id = tx_last;
 				tx_id = txe->next_id;
@@ -932,7 +940,7 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
 				rxq->crc_len);
 			mb->data_len = pkt_len;
 			mb->pkt_len = pkt_len;
-			mb->vlan_macip.f.vlan_tci = rxdp[j].wb.upper.vlan;
+			mb->hw_offload.vlan_tci = rxdp[j].wb.upper.vlan;
 			mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
 
 			/* convert descriptor fields to rte mbuf flags */
@@ -1250,7 +1258,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci =
+		rxm->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -1495,7 +1503,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
 		 * set in the pkt_flags field.
 		 */
-		first_seg->vlan_macip.f.vlan_tci =
+		first_seg->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 75f8239..9199d31 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -152,7 +152,8 @@ enum ixgbe_advctx_num {
 struct ixgbe_advctx_info {
 	uint32_t flags;      /**< ol_flags for context build. */
 	uint32_t cmp_mask;   /**< compare mask for vlan_macip_lens */
-	union rte_vlan_macip vlan_macip_lens; /**< vlan, mac ip length. */
+	union rte_hw_offload hw_offload;    /**< l2/l3/l4 length, vlan, mss. */
+	union rte_hw_offload offload_mask;  /**< compare mask for hw_offload */
 };
 
 /**
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 0845b1d..60309a3 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -536,7 +536,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					   rte_pktmbuf_mtod(rxm, void *));
 #endif
 				//Copy vlan tag in packet buffer
-				rxm->vlan_macip.f.vlan_tci =
+				rxm->hw_offload.vlan_tci =
 					rte_le_to_cpu_16((uint16_t)rcd->tci);
 
 			} else
@@ -549,7 +549,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxm->pkt_len = (uint16_t)rcd->len;
 			rxm->data_len = (uint16_t)rcd->len;
 			rxm->in_port = rxq->port_id;
-			rxm->vlan_macip.f.vlan_tci = 0;
+			rxm->hw_offload.vlan_tci = 0;
 			rxm->data_off = RTE_PKTMBUF_HEADROOM;
 
 			rx_pkts[nb_rx++] = rxm;
-- 
1.9.2