From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Sun, 02 Jun 2019 11:45:20 +0000
Subject: [dpdk-dev] [Bug 290] RX packets in Virtio are corrupted in case of split to several mbufs

https://bugs.dpdk.org/show_bug.cgi?id=290

            Bug ID: 290
           Summary: RX packets in Virtio are corrupted in case of split
                    to several mbufs
           Product: DPDK
           Version: 19.05
          Hardware: All
                OS: All
            Status: CONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: ybrustin@cisco.com
                CC: gavin.hu@arm.com, maxime.coquelin@redhat.com
  Target Milestone: ---

Hi,

Starting from commit bcac5aa207f896c46963b2ac0a06bc09b1e912a5, RX packets
that are split across several mbufs are corrupted. For example, we are using
2KB mbufs and sending jumbo packets (~9KB). After several received packets we
get a bad packet:

RX pkt #1
dump mbuf at 0x1f5082300, iova=266c82380, buf_len=2112
  pkt_len=9230, ol_flags=0, nb_segs=5, in_port=0
  segment at 0x1f5082300, data=0x1f50823c0, data_len=2048
  Dump data at [0x1f50823c0], len=16
00000000: 16 58 82 41 3C CF E2 D1 D5 84 5A 99 08 00 45 00 | .X.A<.....Z...E.
  segment at 0x1f50819c0, data=0x1f5081a74, data_len=2060
  segment at 0x1f5081080, data=0x1f5081134, data_len=2060
  segment at 0x1f5080740, data=0x1f50807f4, data_len=2060
  segment at 0x1f507fe00, data=0x1f507feb4, data_len=1002

RX pkt #2
dump mbuf at 0x1f507f4c0, iova=266c7f540, buf_len=2112
  pkt_len=9230, ol_flags=0, nb_segs=5, in_port=0
  segment at 0x1f507f4c0, data=0x1f507f580, data_len=2048
  Dump data at [0x1f507f580], len=16
00000000: 16 58 82 41 3C CF E2 D1 D5 84 5A 99 08 00 45 00 | .X.A<.....Z...E.
  segment at 0x1f507eb80, data=0x1f507ec34, data_len=2060
  segment at 0x1f506fb00, data=0x1f506fbb4, data_len=2060
  segment at 0x1f5095440, data=0x1f50954f4, data_len=2060
  segment at 0x1f5094b00, data=0x1f5094bb4, data_len=1002

RX pkt #3
dump mbuf at 0x1f507c680, iova=266c7c700, buf_len=2112
  pkt_len=9230, ol_flags=0, nb_segs=5, in_port=0
  segment at 0x1f507c680, data=0x1f507c740, data_len=2048
  Dump data at [0x1f507c740], len=16
00000000: 16 58 82 41 3C CF E2 D1 D5 84 5A 99 08 00 45 00 | .X.A<.....Z...E.
  segment at 0x1f507bd40, data=0x1f507bdf4, data_len=2060
  segment at 0x1f507b400, data=0x1f507b4b4, data_len=2060
  segment at 0x1f507aac0, data=0x1f507ab74, data_len=2060
  segment at 0x1f507a180, data=0x1f507a234, data_len=1002

RX pkt #4
dump mbuf at 0x1f5079840, iova=266c798c0, buf_len=2112
  pkt_len=9230, ol_flags=0, nb_segs=5, in_port=0
  segment at 0x1f5079840, data=0x1f5079900, data_len=2048
  Dump data at [0x1f5079900], len=16
00000000: 16 58 82 41 3C CF E2 D1 D5 84 5A 99 08 00 45 00 | .X.A<.....Z...E.
  segment at 0x1f5078f00, data=0x1f5078fb4, data_len=2060
  segment at 0x1f50785c0, data=0x1f5078674, data_len=2060
  segment at 0x1f5077c80, data=0x1f5077d34, data_len=2060
  segment at 0x1f5077340, data=0x1f50773f4, data_len=1002

RX pkt #5
dump mbuf at 0x1f5076a00, iova=266c76a80, buf_len=2112
  pkt_len=9230, ol_flags=0, nb_segs=5, in_port=0
  segment at 0x1f5076a00, data=0x1f5076ac0, data_len=2048
  Dump data at [0x1f5076ac0], len=16
00000000: 16 58 82 41 3C CF E2 D1 D5 84 5A 99 08 00 45 00 | .X.A<.....Z...E.
  segment at 0x1f50760c0, data=0x1f5076174, data_len=2060
  segment at 0x1f5075780, data=0x1f5075834, data_len=2060
  segment at 0x1f5074e40, data=0x1f5074ef4, data_len=2060
  segment at 0x1f5074500, data=0x1f50745b4, data_len=1002

RX pkt #6
dump mbuf at 0x1f5073bc0, iova=266c73c40, buf_len=2112
  pkt_len=9230, ol_flags=0, nb_segs=5, in_port=0
  segment at 0x1f5073bc0, data=0x1f5073c80, data_len=2048
  Dump data at [0x1f5073c80], len=16
00000000: 16 58 82 41 3C CF E2 D1 D5 84 5A 99 08 00 45 00 | .X.A<.....Z...E.
  segment at 0x1f5073280, data=0x1f5073334, data_len=2060
  segment at 0x1f5072940, data=0x1f50729f4, data_len=2060
  segment at 0x1f5072000, data=0x1f50720b4, data_len=2060
  segment at 0x1f50716c0, data=0x1f5071774, data_len=1002

RX pkt #7
dump mbuf at 0x1f5070d80, iova=266c70e00, buf_len=2112
  pkt_len=9230, ol_flags=0, nb_segs=5, in_port=0
  segment at 0x1f5070d80, data=0x1f5070e40, data_len=2048
  Dump data at [0x1f5070e40], len=16
00000000: 16 58 82 41 3C CF E2 D1 D5 84 5A 99 08 00 45 00 | .X.A<.....Z...E.
  segment at 0x1f5070440, data=0x1f50704f4, data_len=2060

The total packet length is not valid: pkt_len: 9230, sum of data_len: 4108.
The segment count is not valid: written in the mbuf: 5, actual count: 2.

As a workaround, we will use 9KB mbufs for now...

Thanks,
Yaroslav.
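
For reference, below is a minimal sketch of the kind of chain-consistency
check that produces the "sum of data_len" / "actual count" numbers quoted
above. It is not from the bug report or from DPDK itself: the helper name
check_mbuf_chain is hypothetical, while pkt_len, data_len, nb_segs and next
are the standard struct rte_mbuf fields.

#include <stdio.h>
#include <rte_mbuf.h>

/* Hypothetical helper (not part of DPDK): walk the segment chain of a
 * received mbuf and compare it with the header fields. Returns 0 if the
 * chain is consistent, -1 if pkt_len or nb_segs disagree with the chain. */
static int
check_mbuf_chain(const struct rte_mbuf *m)
{
        uint32_t sum_data_len = 0;
        uint16_t seg_count = 0;
        const struct rte_mbuf *seg;

        for (seg = m; seg != NULL; seg = seg->next) {
                sum_data_len += seg->data_len;
                seg_count++;
        }

        if (sum_data_len != m->pkt_len || seg_count != m->nb_segs) {
                printf("total packet length is not valid: pkt_len: %u, "
                       "sum of data_len: %u\n", m->pkt_len, sum_data_len);
                printf("segment count is not valid: written in mbuf: %u, "
                       "actual count: %u\n", m->nb_segs, seg_count);
                return -1;
        }
        return 0;
}

Running such a check on every mbuf returned by rte_eth_rx_burst() flags the
corrupted packets as soon as they arrive, e.g. pkt #7 above (2048 + 2060 =
4108 over two linked segments, against pkt_len=9230 and nb_segs=5).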