automatic DPDK test reports
From: sys_stv@intel.com
To: test-report@dpdk.org
Cc: santosh.shukla@caviumnetworks.com
Subject: [dpdk-test-report] |FAILURE| pw25606 [PATCH 4/4] mempool: update range info to pool
Date: 23 Jun 2017 13:15:38 -0700
Message-ID: <ffb989$327vmb@orsmga002.jf.intel.com>


Test-Label: Intel-compilation
Test-Status: FAILURE
http://dpdk.org/patch/25606

_apply patch file failure_

Submitter: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Date: Wed, 21 Jun 2017 17:32:48 +0000
DPDK git baseline: Repo:dpdk-next-eventdev, Branch:master, CommitID:c687cebe095b2ea325aa012e2834b830f1f74eba
                   Repo:dpdk-next-crypto, Branch:master, CommitID:0a2d25e7f531a607a8c230b69a499f322133d1bd
                   Repo:dpdk-next-net, Branch:master, CommitID:efe576f0d5cc4cd93317c7833f387cb64d2abcf1
                   Repo:dpdk-next-virtio, Branch:master, CommitID:da5129d9b6b53375667690970f6824565c1f1443
                   Repo:dpdk, Branch:master, CommitID:8a04cb6125896e9ea25a4d15a316f0d873822c7b
Apply patch file failed:
Repo: dpdk
25606:
patching file lib/librte_mempool/rte_mempool.c
patching file lib/librte_mempool/rte_mempool.h
Hunk #1 succeeded at 389 with fuzz 2 (offset -9 lines).
Hunk #2 FAILED at 413.
Hunk #3 succeeded at 523 with fuzz 2 (offset -14 lines).
1 out of 3 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool.h.rej
patching file lib/librte_mempool/rte_mempool_ops.c
Hunk #1 FAILED at 86.
Hunk #2 succeeded at 123 with fuzz 1 (offset -14 lines).
1 out of 2 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_ops.c.rej
patching file lib/librte_mempool/rte_mempool_version.map
Hunk #1 FAILED at 46.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_version.map.rej

Repo: dpdk-next-crypto
25606:
patching file lib/librte_mempool/rte_mempool.c
patching file lib/librte_mempool/rte_mempool.h
Hunk #1 succeeded at 389 with fuzz 2 (offset -9 lines).
Hunk #2 FAILED at 413.
Hunk #3 succeeded at 523 with fuzz 2 (offset -14 lines).
1 out of 3 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool.h.rej
patching file lib/librte_mempool/rte_mempool_ops.c
Hunk #1 FAILED at 86.
Hunk #2 succeeded at 123 with fuzz 1 (offset -14 lines).
1 out of 2 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_ops.c.rej
patching file lib/librte_mempool/rte_mempool_version.map
Hunk #1 FAILED at 46.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_version.map.rej

Repo: dpdk-next-net
25606:
patching file lib/librte_mempool/rte_mempool.c
patching file lib/librte_mempool/rte_mempool.h
Hunk #1 succeeded at 389 with fuzz 2 (offset -9 lines).
Hunk #2 FAILED at 413.
Hunk #3 succeeded at 523 with fuzz 2 (offset -14 lines).
1 out of 3 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool.h.rej
patching file lib/librte_mempool/rte_mempool_ops.c
Hunk #1 FAILED at 86.
Hunk #2 succeeded at 123 with fuzz 1 (offset -14 lines).
1 out of 2 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_ops.c.rej
patching file lib/librte_mempool/rte_mempool_version.map
Hunk #1 FAILED at 46.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_version.map.rej

Repo: dpdk-next-virtio
25606:
patching file lib/librte_mempool/rte_mempool.c
patching file lib/librte_mempool/rte_mempool.h
Hunk #1 succeeded at 389 with fuzz 2 (offset -9 lines).
Hunk #2 FAILED at 413.
Hunk #3 succeeded at 523 with fuzz 2 (offset -14 lines).
1 out of 3 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool.h.rej
patching file lib/librte_mempool/rte_mempool_ops.c
Hunk #1 FAILED at 86.
Hunk #2 succeeded at 123 with fuzz 1 (offset -14 lines).
1 out of 2 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_ops.c.rej
patching file lib/librte_mempool/rte_mempool_version.map
Hunk #1 FAILED at 46.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_version.map.rej

Repo: dpdk-next-eventdev
25606:
patching file lib/librte_mempool/rte_mempool.c
patching file lib/librte_mempool/rte_mempool.h
Hunk #1 succeeded at 389 with fuzz 2 (offset -9 lines).
Hunk #2 FAILED at 413.
Hunk #3 succeeded at 523 with fuzz 2 (offset -14 lines).
1 out of 3 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool.h.rej
patching file lib/librte_mempool/rte_mempool_ops.c
Hunk #1 FAILED at 86.
Hunk #2 succeeded at 123 with fuzz 1 (offset -14 lines).
1 out of 2 hunks FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_ops.c.rej
patching file lib/librte_mempool/rte_mempool_version.map
Hunk #1 FAILED at 46.
1 out of 1 hunk FAILED -- saving rejects to file lib/librte_mempool/rte_mempool_version.map.rej
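The FAILED, fuzz, and offset lines above are standard patch(1) diagnostics: a hunk whose context still matches after shifting applies "with offset", a hunk matching only loosely applies "with fuzz", and a hunk whose changed lines no longer match is rejected into a `.rej` file. A minimal, self-contained sketch (deliberately unrelated to the DPDK sources) reproduces that reporting format:

```shell
# Create a throwaway file whose contents have drifted from what a diff expects.
workdir=$(mktemp -d)
cd "$workdir"
printf 'alpha\nbeta\ngamma\n' > file.txt

# This hunk expects "BETA" on line 2; the file has "beta", and fuzz only
# relaxes the surrounding context lines, never the changed line itself.
cat > change.diff <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1,3 +1,3 @@
 alpha
-BETA
+delta
 gamma
EOF

# The failed hunk is saved to file.txt.rej, analogous to the report's
# lib/librte_mempool/*.rej files.
patch -p1 < change.diff || true
ls file.txt.rej
```

Inspecting the saved `.rej` hunks against the current baseline source is the usual first step when a patch needs a rebase, as this one does.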


DPDK STV team

Thread overview: 2+ messages
2017-06-23 20:15 sys_stv [this message]
2017-06-30  5:15 sys_stv

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='ffb989$327vmb@orsmga002.jf.intel.com' \
    --to=sys_stv@intel.com \
    --cc=santosh.shukla@caviumnetworks.com \
    --cc=test-report@dpdk.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, use one.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see the mirroring instructions for how to clone
and mirror all data and code used for this inbox, as well as URLs for
the NNTP newsgroup(s).