From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Fri, 9 May 2014 16:50:36 +0200
Message-Id: <1399647038-15095-10-git-send-email-olivier.matz@6wind.com>
In-Reply-To: <1399647038-15095-1-git-send-email-olivier.matz@6wind.com>
References: <1399647038-15095-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH RFC 09/11] mbuf: rename vlan_macip_len to hw_offload and increase its size

To implement TCP segmentation offload, we will need to store more
metadata in the mbuf, such as the length of the L4 header, the MSS,
and so on. To prepare for this, this patch renames vlan_macip_len to
hw_offload and grows it from 32 bits to 64 bits.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
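
For illustration only (not part of the diff): a minimal sketch of how a
transmit path fills in the renamed field, using the union members
introduced in rte_mbuf.h below. The helper name and the flag
combination are hypothetical.

/* Hypothetical example: populate the renamed offload metadata before TX.
 * Field names follow union rte_hw_offload as introduced by this patch. */
static void
prepare_tx_offload(struct rte_mbuf *m, uint16_t vlan_id)
{
	m->hw_offload.u64 = 0;                  /* clear all offload metadata */
	m->hw_offload.l2_len = sizeof(struct ether_hdr);
	m->hw_offload.l3_len = sizeof(struct ipv4_hdr);
	m->hw_offload.vlan_tci = vlan_id;       /* CPU byte order */
	m->ol_flags = PKT_TX_IP_CKSUM | PKT_TX_VLAN_PKT;
}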
 app/test-pmd/csumonly.c               |  4 +--
 app/test-pmd/macfwd.c                 |  6 ++--
 app/test-pmd/rxonly.c                 |  2 +-
 app/test-pmd/testpmd.c                |  2 +-
 app/test-pmd/txonly.c                 |  6 ++--
 examples/ip_reassembly/ipv4_rsmbl.h   | 10 +++----
 examples/ip_reassembly/main.c         |  4 +--
 lib/librte_mbuf/rte_mbuf.h            | 34 ++++++++++-----------
 lib/librte_pmd_e1000/em_rxtx.c        | 50 +++++++++++++++++--------------
 lib/librte_pmd_e1000/igb_rxtx.c       | 56 ++++++++++++++++++++---------------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c     | 54 +++++++++++++++++++--------------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.h     |  3 +-
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c |  4 +--
 13 files changed, 126 insertions(+), 109 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 69b90a7..9caad8f 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -430,8 +430,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		}
 
 		/* Combine the packet header write. VLAN is not considered here */
-		mb->vlan_macip.f.l2_len = l2_len;
-		mb->vlan_macip.f.l3_len = l3_len;
+		mb->hw_offload.l2_len = l2_len;
+		mb->hw_offload.l3_len = l3_len;
 		mb->ol_flags = ol_flags;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ab74d0c..d137f92 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -116,9 +116,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 		ether_addr_copy(&ports[fs->tx_port].eth_addr,
 				&eth_hdr->s_addr);
 		mb->ol_flags = txp->tx_ol_flags;
-		mb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
-		mb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
-		mb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;
+		mb->hw_offload.l2_len = sizeof(struct ether_hdr);
+		mb->hw_offload.l3_len = sizeof(struct ipv4_hdr);
+		mb->hw_offload.vlan_tci = txp->tx_vlan_id;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
 	fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 0bf4440..6283482 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -149,7 +149,7 @@ pkt_burst_receive(struct fwd_stream *fs)
 				mb->hash.fdir.hash, mb->hash.fdir.id);
 		if (ol_flags & PKT_RX_VLAN_PKT)
 			printf(" - VLAN tci=0x%x",
-				mb->vlan_macip.f.vlan_tci);
+				mb->hw_offload.vlan_tci);
 		printf("\n");
 		if (ol_flags != 0) {
 			uint32_t rxf;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 572c3aa..3085be5 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -397,7 +397,7 @@ testpmd_mbuf_ctor(struct rte_mempool *mp,
 	mb->ol_flags = 0;
 	mb->data_off = RTE_PKTMBUF_HEADROOM;
 	mb->nb_segs = 1;
-	mb->vlan_macip.data = 0;
+	mb->hw_offload.u64 = 0;
 	mb->hash.rss = 0;
 }
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 5d93209..97e381a 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -264,9 +264,9 @@ pkt_burst_transmit(struct fwd_stream *fs)
 		pkt->nb_segs = tx_pkt_nb_segs;
 		pkt->pkt_len = tx_pkt_length;
 		pkt->ol_flags = ol_flags;
-		pkt->vlan_macip.f.vlan_tci = vlan_tci;
-		pkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
-		pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+		pkt->hw_offload.vlan_tci = vlan_tci;
+		pkt->hw_offload.l2_len = sizeof(struct ether_hdr);
+		pkt->hw_offload.l3_len = sizeof(struct ipv4_hdr);
 		pkts_burst[nb_pkt] = pkt;
 	}
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
diff --git a/examples/ip_reassembly/ipv4_rsmbl.h b/examples/ip_reassembly/ipv4_rsmbl.h
index 9b647fb..c653993 100644
--- a/examples/ip_reassembly/ipv4_rsmbl.h
+++ b/examples/ip_reassembly/ipv4_rsmbl.h
@@ -168,8 +168,8 @@ ipv4_frag_chain(struct rte_mbuf *mn, struct rte_mbuf *mp)
 	struct rte_mbuf *ms;
 
 	/* adjust start of the last fragment data. */
-	rte_pktmbuf_adj(mp, (uint16_t)(mp->vlan_macip.f.l2_len +
-		mp->vlan_macip.f.l3_len));
+	rte_pktmbuf_adj(mp, (uint16_t)(mp->hw_offload.l2_len +
+		mp->hw_offload.l3_len));
 
 	/* chain two fragments. */
 	ms = rte_pktmbuf_lastseg(mn);
@@ -233,10 +233,10 @@ ipv4_frag_reassemble(const struct ipv4_frag_pkt *fp)
 
 	/* update ipv4 header for the reassembled packet */
 	ip_hdr = (struct ipv4_hdr*)(rte_pktmbuf_mtod(m, uint8_t *) +
-		m->vlan_macip.f.l2_len);
+		m->hw_offload.l2_len);
 
 	ip_hdr->total_length = rte_cpu_to_be_16((uint16_t)(fp->total_size +
-		m->vlan_macip.f.l3_len));
+		m->hw_offload.l3_len));
 	ip_hdr->fragment_offset = (uint16_t)(ip_hdr->fragment_offset &
 		rte_cpu_to_be_16(IPV4_HDR_DF_FLAG));
 	ip_hdr->hdr_checksum = 0;
@@ -377,7 +377,7 @@ ipv4_frag_mbuf(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
 	ip_ofs *= IPV4_HDR_OFFSET_UNITS;
 	ip_len = (uint16_t)(rte_be_to_cpu_16(ip_hdr->total_length) -
-		mb->vlan_macip.f.l3_len);
+		mb->hw_offload.l3_len);
 
 	IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
 		"mbuf: %p, tms: %" PRIu64
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 5c5626a..a817d3d 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -680,8 +680,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
 		dr = &qconf->death_row;
 
 		/* prepare mbuf: setup l2_len/l3_len. */
-		m->vlan_macip.f.l2_len = sizeof(*eth_hdr);
-		m->vlan_macip.f.l3_len = sizeof(*ipv4_hdr);
+		m->hw_offload.l2_len = sizeof(*eth_hdr);
+		m->hw_offload.l3_len = sizeof(*ipv4_hdr);
 
 		/* process this fragment. */
 		if ((mo = ipv4_frag_mbuf(tbl, dr, m, tms, ipv4_hdr,
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 1cd51c2..d71c86c 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -145,26 +145,22 @@ static inline const char *rte_get_tx_ol_flag_name(uint32_t mask)
 }
 
 /** Offload features */
-union rte_vlan_macip {
-	uint32_t data;
+union rte_hw_offload {
+	uint64_t u64;
 	struct {
-		uint16_t l3_len:9; /**< L3 (IP) Header Length. */
-		uint16_t l2_len:7; /**< L2 (MAC) Header Length. */
+#define HW_OFFLOAD_L2_LEN_MASK 0x7f
+#define HW_OFFLOAD_L3_LEN_MASK 0x1ff
+#define HW_OFFLOAD_L4_LEN_MASK 0xff
+		uint32_t l2_len:7; /**< L2 (MAC) Header Length. */
+		uint32_t l3_len:9; /**< L3 (IP) Header Length. */
+		uint32_t reserved:16;
 		uint16_t vlan_tci;
 		/**< VLAN Tag Control Identifier (CPU order). */
-	} f;
+		uint16_t reserved2;
+	};
 };
 
-/*
- * Compare mask for vlan_macip_len.data,
- * should be in sync with rte_vlan_macip.f layout.
- * */
-#define TX_VLAN_CMP_MASK        0xFFFF0000  /**< VLAN length - 16-bits. */
-#define TX_MAC_LEN_CMP_MASK     0x0000FE00  /**< MAC length - 7-bits. */
-#define TX_IP_LEN_CMP_MASK      0x000001FF  /**< IP length - 9-bits. */
-/**< MAC+IP length. */
-#define TX_MACIP_LEN_CMP_MASK   (TX_MAC_LEN_CMP_MASK | TX_IP_LEN_CMP_MASK)
-
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -203,7 +199,7 @@ struct rte_mbuf {
 	uint32_t ol_flags;      /**< Offload features. */
 
 	/* offload features, valid for first segment only */
-	union rte_vlan_macip vlan_macip;
+	union rte_hw_offload hw_offload;
 	union {
 		uint32_t rss;       /**< RSS hash result if RSS enabled */
 		struct {
@@ -212,7 +208,7 @@ struct rte_mbuf {
 		} fdir;             /**< Filter identifier if FDIR enabled */
 		uint32_t sched;     /**< Hierarchical scheduler */
 	} hash;                 /**< hash information */
-	uint64_t reserved;      /**< Unused field. Required for padding. */
+	uint32_t reserved;      /**< Unused field. Required for padding. */
 } __rte_cache_aligned;
 
 /**
@@ -479,7 +475,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 {
 	m->next = NULL;
 	m->pkt_len = 0;
-	m->vlan_macip.data = 0;
+	m->hw_offload.u64 = 0;
 	m->nb_segs = 1;
 	m->in_port = 0xff;
 
@@ -545,7 +541,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
 	mi->data_off = md->data_off;
 	mi->data_len = md->data_len;
 	mi->in_port = md->in_port;
-	mi->vlan_macip = md->vlan_macip;
+	mi->hw_offload.u64 = md->hw_offload.u64;
 	mi->hash = md->hash;
 	mi->next = NULL;
 
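A note on the new HW_OFFLOAD_*_LEN_MASK constants: they let a driver
build an offload_mask whose significant fields are saturated to
all-ones, so that a cached context descriptor can be reused whenever
the masked metadata is unchanged. A minimal sketch of the test that the
drivers below open-code (the helper name is hypothetical):

/* Hypothetical helper: can a cached HW context be reused for this packet?
 * 'cached' holds (mask.u64 & original.u64), as stored by the drivers below. */
static inline int
hw_offload_ctx_match(union rte_hw_offload cached, union rte_hw_offload mask,
		union rte_hw_offload pkt)
{
	return ((cached.u64 ^ pkt.u64) & mask.u64) == 0;
}
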
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 015c0af..69bd666 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -148,8 +148,8 @@ enum {
  */
 struct em_ctx_info {
 	uint32_t flags;               /**< ol_flags related to context build. */
-	uint32_t cmp_mask;            /**< compare mask */
-	union rte_vlan_macip hdrlen;  /**< L2 and L3 header lengths */
+	union rte_hw_offload hw_offload;    /**< l2/l3/l4 length, vlan, mss. */
+	union rte_hw_offload offload_mask;  /**< compare mask for hw_offload */
 };
 
 /**
@@ -217,18 +217,18 @@ struct em_tx_queue {
 static inline void
 em_set_xmit_ctx(struct em_tx_queue* txq,
 		volatile struct e1000_context_desc *ctx_txd,
-		uint32_t flags,
-		union rte_vlan_macip hdrlen)
+		uint32_t flags, union rte_hw_offload hw_offload)
 {
-	uint32_t cmp_mask, cmd_len;
+	uint32_t cmd_len;
 	uint16_t ipcse, l2len;
 	struct e1000_context_desc ctx;
+	union rte_hw_offload offload_mask;
 
-	cmp_mask = 0;
+	offload_mask.u64 = 0;
 	cmd_len = E1000_TXD_CMD_DEXT | E1000_TXD_DTYP_C;
 
-	l2len = hdrlen.f.l2_len;
-	ipcse = (uint16_t)(l2len + hdrlen.f.l3_len);
+	l2len = hw_offload.l2_len;
+	ipcse = (uint16_t)(l2len + hw_offload.l3_len);
 
 	/* setup IPCS* fields */
 	ctx.lower_setup.ip_fields.ipcss = (uint8_t)l2len;
@@ -243,7 +243,8 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 		ctx.lower_setup.ip_fields.ipcse =
 			(uint16_t)rte_cpu_to_le_16(ipcse - 1);
 		cmd_len |= E1000_TXD_CMD_IP;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 	} else {
 		ctx.lower_setup.ip_fields.ipcse = 0;
 	}
@@ -256,13 +257,15 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	case PKT_TX_UDP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct udp_hdr, dgram_cksum));
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_TCP_CKSUM:
 		ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
 				offsetof(struct tcp_hdr, cksum));
 		cmd_len |= E1000_TXD_CMD_TCP;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	default:
 		ctx.upper_setup.tcp_fields.tucso = 0;
@@ -274,8 +277,9 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
 	*ctx_txd = ctx;
 
 	txq->ctx_cache.flags = flags;
-	txq->ctx_cache.cmp_mask = cmp_mask;
-	txq->ctx_cache.hdrlen = hdrlen;
+	txq->ctx_cache.hw_offload.u64 =
+		offload_mask.u64 & hw_offload.u64;
+	txq->ctx_cache.offload_mask = offload_mask;
 }
 
 /*
@@ -284,12 +288,12 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
  */
 static inline uint32_t
 what_ctx_update(struct em_tx_queue *txq, uint32_t flags,
-		union rte_vlan_macip hdrlen)
+		union rte_hw_offload hw_offload)
 {
 	/* If match with the current context */
 	if (likely (txq->ctx_cache.flags == flags &&
-			((txq->ctx_cache.hdrlen.data ^ hdrlen.data) &
-			txq->ctx_cache.cmp_mask) == 0))
+			((txq->ctx_cache.hw_offload.u64 ^ hw_offload.u64) &
+			txq->ctx_cache.offload_mask.u64) == 0))
 		return (EM_CTX_0);
 
 	/* Mismatch */
@@ -390,7 +394,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t tx_ol_req;
 	uint32_t ctx;
 	uint32_t new_ctx;
-	union rte_vlan_macip hdrlen;
+	union rte_hw_offload hw_offload;
 
 	txq = tx_queue;
 	sw_ring = txq->sw_ring;
@@ -419,9 +423,9 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* If hardware offload required */
 		tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
 		if (tx_ol_req) {
-			hdrlen = tx_pkt->vlan_macip;
+			hw_offload.u64 = tx_pkt->hw_offload.u64;
 			/* If new context to be built or reuse the existing ctx. */
-			ctx = what_ctx_update(txq, tx_ol_req, hdrlen);
+			ctx = what_ctx_update(txq, tx_ol_req, hw_offload);
 
 			/* Only allocate context descriptor if required */
 			new_ctx = (ctx == EM_CTX_NUM);
@@ -514,7 +518,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Set VLAN Tag offload fields. */
 		if (ol_flags & PKT_TX_VLAN_PKT) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
-			popts_spec = tx_pkt->vlan_macip.f.vlan_tci <<
+			popts_spec = tx_pkt->hw_offload.vlan_tci <<
 				E1000_TXD_VLAN_SHIFT;
 		}
@@ -537,7 +541,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				}
 
 				em_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
-					hdrlen);
+					hw_offload);
 
 				txe->last_id = tx_last;
 				tx_id = txe->next_id;
@@ -782,7 +786,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			rx_desc_error_to_pkt_flags(rxd.errors);
 
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+		rxm->hw_offload.vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/*
 		 * Store the mbuf address into the next entry of the array
@@ -1008,7 +1012,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			rx_desc_error_to_pkt_flags(rxd.errors);
 
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+		rxm->hw_offload.vlan_tci = rte_le_to_cpu_16(rxd.special);
 
 		/* Prefetch data of first segment, if configured to do so. */
 		rte_packet_prefetch((char *)first_seg->buf_addr +
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 322dfa0..2db496f 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -145,8 +145,8 @@ enum igb_advctx_num {
  */
 struct igb_advctx_info {
 	uint32_t flags;           /**< ol_flags related to context build. */
-	uint32_t cmp_mask;        /**< compare mask for vlan_macip_lens */
-	union rte_vlan_macip vlan_macip_lens; /**< vlan, mac & ip length. */
+	union rte_hw_offload hw_offload;    /**< l2/l3/l4 length, vlan, mss. */
+	union rte_hw_offload offload_mask;  /**< compare mask for hw_offload */
 };
 
 /**
@@ -212,26 +212,28 @@ struct igb_tx_queue {
 static inline void
 igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		volatile struct e1000_adv_tx_context_desc *ctx_txd,
-		uint32_t ol_flags, uint32_t vlan_macip_lens)
+		uint32_t ol_flags, union rte_hw_offload hw_offload)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx;
 	uint32_t ctx_idx, ctx_curr;
-	uint32_t cmp_mask;
+	uint32_t vlan_macip_lens;
+	union rte_hw_offload offload_mask;
 
 	ctx_curr = txq->ctx_curr;
 	ctx_idx = ctx_curr + txq->ctx_start;
 
-	cmp_mask = 0;
+	offload_mask.u64 = 0;
 	type_tucmd_mlhl = 0;
 
 	if (ol_flags & PKT_TX_VLAN_PKT) {
-		cmp_mask |= TX_VLAN_CMP_MASK;
+		offload_mask.vlan_tci = 0xffff;
 	}
 
 	if (ol_flags & PKT_TX_IP_CKSUM) {
 		type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 	}
 
 	/* Specify which HW CTX to upload. */
@@ -241,19 +243,22 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_UDP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct udp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_TCP_CKSUM:
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_TCP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct tcp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_SCTP_CKSUM:
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_SCTP |
 				E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct sctp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	default:
 		type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_RSV |
@@ -262,11 +267,14 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	}
 
 	txq->ctx_cache[ctx_curr].flags = ol_flags;
-	txq->ctx_cache[ctx_curr].cmp_mask = cmp_mask;
-	txq->ctx_cache[ctx_curr].vlan_macip_lens.data =
-		vlan_macip_lens & cmp_mask;
+	txq->ctx_cache[ctx_curr].hw_offload.u64 =
+		offload_mask.u64 & hw_offload.u64;
+	txq->ctx_cache[ctx_curr].offload_mask = offload_mask;
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
+	vlan_macip_lens = hw_offload.l3_len;
+	vlan_macip_lens |= (hw_offload.l2_len << E1000_ADVTXD_MACLEN_SHIFT);
+	vlan_macip_lens |= ((uint32_t)hw_offload.vlan_tci << E1000_ADVTXD_VLAN_SHIFT);
 	ctx_txd->vlan_macip_lens = rte_cpu_to_le_32(vlan_macip_lens);
 	ctx_txd->mss_l4len_idx = rte_cpu_to_le_32(mss_l4len_idx);
 	ctx_txd->seqnum_seed = 0;
@@ -278,20 +286,20 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
  */
 static inline uint32_t
 what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
-		uint32_t vlan_macip_lens)
+		union rte_hw_offload hw_offload)
 {
 	/* If match with the current context */
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
 	/* If match with the second context */
 	txq->ctx_curr ^= 1;
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
@@ -342,7 +350,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint32_t tx_ol_req;
 	uint32_t new_ctx = 0;
 	uint32_t ctx = 0;
-	uint32_t vlan_macip_lens;
+	union rte_hw_offload hw_offload;
 
 	txq = tx_queue;
 	sw_ring = txq->sw_ring;
@@ -367,14 +375,14 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_last = (uint16_t) (tx_id + tx_pkt->nb_segs - 1);
 
 		ol_flags = tx_pkt->ol_flags;
-		vlan_macip_lens = tx_pkt->vlan_macip.data;
+		hw_offload.u64 = tx_pkt->hw_offload.u64;
 		tx_ol_req = ol_flags &
 			(PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK);
 
 		/* If a Context Descriptor needs to be built. */
 		if (tx_ol_req) {
 			ctx = what_advctx_update(txq, tx_ol_req,
-				vlan_macip_lens);
+				hw_offload);
 			/* Only allocate context descriptor if required */
 			new_ctx = (ctx == IGB_CTX_NUM);
 			ctx = txq->ctx_curr;
@@ -490,7 +498,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				}
 
 				igbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
-					vlan_macip_lens);
+					hw_offload);
 
 				txe->last_id = tx_last;
 				tx_id = txe->next_id;
@@ -752,7 +760,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci =
+		rxm->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -989,7 +997,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
 		 * set in the pkt_flags field.
 		 */
-		first_seg->vlan_macip.f.vlan_tci =
+		first_seg->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
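Note that igb above (and ixgbe below) now rebuilds the context
descriptor's 32-bit vlan_macip_lens word from the separate union fields
rather than copying the old packed mbuf value verbatim. As a worked
example, with l2_len = 14, l3_len = 20 and vlan_tci = 100, and assuming
the usual MACLEN shift of 9 and VLAN shift of 16 from the e1000/ixgbe
headers:

	vlan_macip_lens = 20 | (14 << 9) | (100 << 16)
	                = 0x000014 | 0x001c00 | 0x640000
	                = 0x641c14
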
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 7096ea6..d52482e 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -350,24 +350,26 @@ ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 static inline void
 ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
-		uint32_t ol_flags, uint32_t vlan_macip_lens)
+		uint32_t ol_flags, union rte_hw_offload hw_offload)
 {
 	uint32_t type_tucmd_mlhl;
 	uint32_t mss_l4len_idx;
 	uint32_t ctx_idx;
-	uint32_t cmp_mask;
+	uint32_t vlan_macip_lens;
+	union rte_hw_offload offload_mask;
 
 	ctx_idx = txq->ctx_curr;
-	cmp_mask = 0;
+	offload_mask.u64 = 0;
 	type_tucmd_mlhl = 0;
 
 	if (ol_flags & PKT_TX_VLAN_PKT) {
-		cmp_mask |= TX_VLAN_CMP_MASK;
+		offload_mask.vlan_tci = 0xffff;
 	}
 
 	if (ol_flags & PKT_TX_IP_CKSUM) {
 		type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 	}
 
 	/* Specify which HW CTX to upload. */
@@ -377,19 +379,22 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct udp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_TCP_CKSUM:
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct tcp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	case PKT_TX_SCTP_CKSUM:
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
 				IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
 		mss_l4len_idx |= sizeof(struct sctp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
-		cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+		offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+		offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
 		break;
 	default:
 		type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_RSV |
@@ -398,11 +403,14 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	}
 
 	txq->ctx_cache[ctx_idx].flags = ol_flags;
-	txq->ctx_cache[ctx_idx].cmp_mask = cmp_mask;
-	txq->ctx_cache[ctx_idx].vlan_macip_lens.data =
-		vlan_macip_lens & cmp_mask;
+	txq->ctx_cache[ctx_idx].hw_offload.u64 =
+		offload_mask.u64 & hw_offload.u64;
+	txq->ctx_cache[ctx_idx].offload_mask = offload_mask;
 
 	ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
+	vlan_macip_lens = hw_offload.l3_len;
+	vlan_macip_lens |= (hw_offload.l2_len << IXGBE_ADVTXD_MACLEN_SHIFT);
+	vlan_macip_lens |= ((uint32_t)hw_offload.vlan_tci << IXGBE_ADVTXD_VLAN_SHIFT);
 	ctx_txd->vlan_macip_lens = rte_cpu_to_le_32(vlan_macip_lens);
 	ctx_txd->mss_l4len_idx = rte_cpu_to_le_32(mss_l4len_idx);
 	ctx_txd->seqnum_seed = 0;
@@ -414,20 +422,20 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
  */
 static inline uint32_t
 what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
-		uint32_t vlan_macip_lens)
+		union rte_hw_offload hw_offload)
 {
 	/* If match with the current used context */
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
 	/* What if match with the next context */
 	txq->ctx_curr ^= 1;
 	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
-		(txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
-		(txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+		(txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+		(txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
 			return txq->ctx_curr;
 	}
 
@@ -543,9 +551,9 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_tx;
 	uint16_t nb_used;
 	uint32_t tx_ol_req;
-	uint32_t vlan_macip_lens;
 	uint32_t ctx = 0;
 	uint32_t new_ctx;
+	union rte_hw_offload hw_offload;
 
 	txq = tx_queue;
 	sw_ring = txq->sw_ring;
@@ -571,7 +579,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * are needed for offload functionality.
 		 */
 		ol_flags = tx_pkt->ol_flags;
-		vlan_macip_lens = tx_pkt->vlan_macip.data;
+		hw_offload.u64 = tx_pkt->hw_offload.u64;
 
 		/* If hardware offload required */
 		tx_ol_req = ol_flags &
@@ -579,7 +587,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (tx_ol_req) {
 			/* If new context need be built or reuse the existing ctx. */
 			ctx = what_advctx_update(txq, tx_ol_req,
-				vlan_macip_lens);
+				hw_offload);
 			/* Only allocate context descriptor if required */
 			new_ctx = (ctx == IXGBE_CTX_NUM);
 			ctx = txq->ctx_curr;
@@ -721,7 +729,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			}
 
 			ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
-				vlan_macip_lens);
+				hw_offload);
 
 			txe->last_id = tx_last;
 			tx_id = txe->next_id;
@@ -932,7 +940,7 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
 				rxq->crc_len);
 			mb->data_len = pkt_len;
 			mb->pkt_len = pkt_len;
-			mb->vlan_macip.f.vlan_tci = rxdp[j].wb.upper.vlan;
+			mb->hw_offload.vlan_tci = rxdp[j].wb.upper.vlan;
 			mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
 
 			/* convert descriptor fields to rte mbuf flags */
@@ -1250,7 +1258,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
-		rxm->vlan_macip.f.vlan_tci =
+		rxm->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -1495,7 +1503,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
 		 * set in the pkt_flags field.
 		 */
-		first_seg->vlan_macip.f.vlan_tci =
+		first_seg->hw_offload.vlan_tci =
 			rte_le_to_cpu_16(rxd.wb.upper.vlan);
 		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
 		pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 571d2ca..978bb19 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -152,7 +152,8 @@ enum ixgbe_advctx_num {
 struct ixgbe_advctx_info {
 	uint32_t flags;           /**< ol_flags for context build. */
 	uint32_t cmp_mask;        /**< compare mask for vlan_macip_lens */
-	union rte_vlan_macip vlan_macip_lens; /**< vlan, mac ip length. */
+	union rte_hw_offload hw_offload;    /**< l2/l3/l4 length, vlan, mss. */
+	union rte_hw_offload offload_mask;  /**< compare mask for hw_offload */
 };
 
 /**
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index b5450b2..c85da80 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -536,7 +536,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 					rte_pktmbuf_mtod(rxm, void *));
 #endif
 				//Copy vlan tag in packet buffer
-				rxm->vlan_macip.f.vlan_tci =
+				rxm->hw_offload.vlan_tci =
 					rte_le_to_cpu_16((uint16_t)rcd->tci);
 
 			} else
@@ -549,7 +549,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxm->pkt_len = (uint16_t)rcd->len;
 			rxm->data_len = (uint16_t)rcd->len;
 			rxm->in_port = rxq->port_id;
-			rxm->vlan_macip.f.vlan_tci = 0;
+			rxm->hw_offload.vlan_tci = 0;
 			rxm->data_off = RTE_PKTMBUF_HEADROOM;
 
 			rx_pkts[nb_rx++] = rxm;
-- 
1.9.2