From mboxrd@z Thu Jan 1 00:00:00 1970
From: Huawei Xie
To: dev@dpdk.org
Date: Fri, 26 Sep 2014 17:45:52 +0800
Message-Id: <1411724758-27488-6-git-send-email-huawei.xie@intel.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1411724758-27488-1-git-send-email-huawei.xie@intel.com>
References: <1411724758-27488-1-git-send-email-huawei.xie@intel.com>
Subject: [dpdk-dev] [PATCH v5 05/11] lib/librte_vhost: merge Oliver's mbuf change

There is no rte_pktmbuf structure inside the mbuf any more; its fields
have been merged directly into the rte_mbuf structure. Update
vhost_rxtx.c accordingly: access pkt_len, data_len, next and nb_segs
directly on struct rte_mbuf, and use rte_pktmbuf_mtod() instead of the
removed data pointer.

Signed-off-by: Huawei Xie
---
 lib/librte_vhost/vhost_rxtx.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

(An illustrative sketch of the old versus new field access follows the diff.)

diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index 81368e6..688e661 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -145,7 +145,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf **pkts,
 		/* Copy mbuf data to buffer */
 		/* TODO fixme for sg mbuf and the case that desc couldn't hold the mbuf data */
 		rte_memcpy((void *)(uintptr_t)buff_addr,
-			(const void *)buff->pkt.data,
+			rte_pktmbuf_mtod(buff, const void *),
 			rte_pktmbuf_data_len(buff));
 		VHOST_PRINT_PACKET(dev, (uintptr_t)buff_addr,
 			rte_pktmbuf_data_len(buff), 0);
@@ -307,7 +307,7 @@ copy_from_mbuf_to_vring(struct virtio_net *dev, uint16_t res_base_idx,
 		 * This current segment complete, need continue to
 		 * check if the whole packet complete or not.
 		 */
-		pkt = pkt->pkt.next;
+		pkt = pkt->next;
 		if (pkt != NULL) {
 			/*
 			 * There are more segments.
@@ -411,7 +411,7 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf *
 		uint32_t secure_len = 0;
 		uint16_t need_cnt;
 		uint32_t vec_idx = 0;
-		uint32_t pkt_len = pkts[pkt_idx]->pkt.pkt_len + vq->vhost_hlen;
+		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + vq->vhost_hlen;
 		uint16_t i, id;
 
 		do {
@@ -631,8 +631,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_me
 					 * while the virtio buffer in TX vring has
 					 * more data to be copied.
 					 */
-					cur->pkt.data_len = seg_offset;
-					m->pkt.pkt_len += seg_offset;
+					cur->data_len = seg_offset;
+					m->pkt_len += seg_offset;
 					/* Allocate mbuf and populate the structure. */
 					cur = rte_pktmbuf_alloc(mbuf_pool);
 					if (unlikely(cur == NULL)) {
@@ -644,7 +644,7 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_me
 				}
 
 				seg_num++;
-				prev->pkt.next = cur;
+				prev->next = cur;
 				prev = cur;
 				seg_offset = 0;
 				seg_avail = cur->buf_len - RTE_PKTMBUF_HEADROOM;
@@ -660,8 +660,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_me
 					 * room to accomodate more
 					 * data.
 					 */
-					cur->pkt.data_len = seg_offset;
-					m->pkt.pkt_len += seg_offset;
+					cur->data_len = seg_offset;
+					m->pkt_len += seg_offset;
 					/*
 					 * Allocate an mbuf and
 					 * populate the structure. */
@@ -678,7 +678,7 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_me
 					break;
 				}
 				seg_num++;
-				prev->pkt.next = cur;
+				prev->next = cur;
 				prev = cur;
 				seg_offset = 0;
 				seg_avail = cur->buf_len - RTE_PKTMBUF_HEADROOM;
@@ -697,8 +697,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_me
 					desc->len, 0);
 			} else {
 				/* The whole packet completes. */
-				cur->pkt.data_len = seg_offset;
-				m->pkt.pkt_len += seg_offset;
+				cur->data_len = seg_offset;
+				m->pkt_len += seg_offset;
 				vb_avail = 0;
 			}
 		}
@@ -709,7 +709,7 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_me
 		if (unlikely(alloc_err == 1))
 			break;
 
-		m->pkt.nb_segs = seg_num;
+		m->nb_segs = seg_num;
 
 		pkts[entry_success] = m;
 		vq->last_used_idx++;
-- 
1.8.1.4
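
For readers less familiar with the mbuf rework this patch adapts to, below is a
minimal illustrative sketch, not part of the patch, of how field access changes
once struct rte_pktmbuf is folded into struct rte_mbuf. The helper name
copy_mbuf_chain_to_buf() is hypothetical; only the flattened field names
(pkt_len, data_len, next) and the rte_pktmbuf_mtod()/rte_pktmbuf_data_len()
accessors come from the diff above.

/*
 * Illustrative sketch only -- not part of this patch. copy_mbuf_chain_to_buf()
 * is a hypothetical helper showing the flattened mbuf field access.
 */
#include <stdint.h>

#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Copy a (possibly segmented) mbuf chain into a flat buffer. */
static inline uint32_t
copy_mbuf_chain_to_buf(const struct rte_mbuf *m, uint8_t *dst, uint32_t dst_len)
{
	const struct rte_mbuf *seg;
	uint32_t copied = 0;

	/* Was m->pkt.pkt_len before the rework, now m->pkt_len. */
	if (m->pkt_len > dst_len)
		return 0;

	/* Was seg->pkt.next, now seg->next. */
	for (seg = m; seg != NULL; seg = seg->next) {
		/* Was (const void *)seg->pkt.data, now rte_pktmbuf_mtod(). */
		rte_memcpy(dst + copied,
			rte_pktmbuf_mtod(seg, const void *),
			rte_pktmbuf_data_len(seg));
		copied += rte_pktmbuf_data_len(seg);
	}

	return copied;
}

Using rte_pktmbuf_mtod() rather than dereferencing a data pointer directly
keeps callers insulated from further mbuf layout changes, which is why the
first hunk of the diff switches to it.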