DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Roger Melton (rmelton)" <rmelton@cisco.com>,
	"Dave Johnson (davejo)" <davejo@cisco.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Sampath Peechu (speechu)" <speechu@cisco.com>,
	chenbo.xia@outlook.com
Cc: "Malcolm Bumgardner (mbumgard)" <mbumgard@cisco.com>,
	"Chris Brezovec (cbrezove)" <cbrezove@cisco.com>,
	David Marchand <david.marchand@redhat.com>
Subject: Re: Commit broke 32-bit testpmd app
Date: Wed, 20 Sep 2023 09:35:05 +0200	[thread overview]
Message-ID: <b01d2744-982a-f4b7-8fb3-48ee9ecacd05@redhat.com> (raw)
In-Reply-To: <ae4c8aaf-6c12-8ddf-90b9-76e76564728d@cisco.com>

[-- Attachment #1: Type: text/plain, Size: 9905 bytes --]

Hi,

I tried to reproduce without success (see attached log).

I failed to reproduce because buf_iova fits into 32 bits in my case:
(gdb) p /x *tx_pkts[0]

$4 = {
   cacheline0 = 0x77b19ec0,
   buf_addr = 0x77b19f40,
   buf_iova = 0x49519f40,
   rearm_data = 0x77b19ed0,

However, looking at your report, something like this would work for you:

diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 9d4aba11a3..38efbc517a 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -124,7 +124,7 @@ virtqueue_store_flags_packed(struct vring_packed_desc *dp,
   * (virtio-pci and virtio-user).
   */
  #define VIRTIO_MBUF_ADDR(mb, vq) \
-       ((uint64_t)(*(uintptr_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset)))
+       (*(uint64_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset))


The problem is that it would likely break Virtio-user in 32-bit mode, as
this is how it was initially implemented; it was fixed a few years ago,
as David hinted to me:

commit 260aae9ad9621e3e758f1443abb8fcbc25ece07c
Author: Jianfeng Tan <jianfeng.tan@intel.com>
Date:   Wed Apr 19 02:30:33 2017 +0000

     net/virtio-user: fix address on 32-bit system

     virtio-user cannot work on 32-bit system as higher 32-bit of the
     addr field (64-bit) in the desc is filled with non-zero value
     which should not happen for a 32-bit system.

     In case of virtio-user, we use buf_addr of mbuf to fill the
     virtqueue desc addr. This is a regression bug. For 32-bit system,
     the first 4 bytes of mbuf is buf_addr, with following 8 bytes for
     buf_phyaddr. With below wrong definition, both buf_addr and lower
     4 bytes buf_phyaddr are obtained to fill the virtqueue desc.
       #define VIRTIO_MBUF_ADDR(mb, vq) \
             (*(uint64_t *)((uintptr_t)(mb) + (vq)->offset))

     Fixes: 25f80d108780 ("net/virtio: fix packet corruption")
     Cc: stable@dpdk.org

     Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
     Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
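
To make the layout issue described in that commit concrete, here is a
minimal standalone sketch (not DPDK code; it assumes a 32-bit,
little-endian build where pointers are 4 bytes and the 64-bit physical
address immediately follows buf_addr, as in the mbuf layout of that era;
the 0 offset stands in for (vq)->offset pointing at buf_addr):

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for the start of the old 32-bit rte_mbuf:
     * a 4-byte buf_addr directly followed by an 8-byte buf_phyaddr. */
    struct mbuf32 {
            void *buf_addr;       /* 4 bytes on a 32-bit build */
            uint64_t buf_phyaddr; /* 8 bytes */
    };

    int main(void)
    {
            struct mbuf32 mb = { (void *)0x77b19f40, 0x149519f40ULL };

            /* Buggy form: loads 8 bytes starting at buf_addr, so the
             * upper half of the result comes from buf_phyaddr. */
            uint64_t wrong = *(uint64_t *)((uintptr_t)&mb + 0);

            /* Fixed form: loads only the word-sized buf_addr. */
            uint64_t right = (uint64_t)(*(uintptr_t *)((uintptr_t)&mb + 0));

            printf("wrong=0x%llx right=0x%llx\n",
                   (unsigned long long)wrong, (unsigned long long)right);
            /* On such a build: wrong=0x49519f4077b19f40
             *                  right=0x77b19f40 */
            return 0;
    }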

If my understanding is correct, on 32-bit systems, when mbuf->buf_addr
is used (Virtio-user), we need to mask out the higher 4 bytes, while
when using Virtio-pci we need the full 64 bits (as the physical
addresses used as IOVA on the guest are 64 bits).
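
One possible direction, only an untested sketch: always do a full 64-bit
load, then apply a per-virtqueue mask chosen at queue setup depending on
the device type (mbuf_addr_mask below is a hypothetical new field, not
something that exists in struct virtqueue today):

    /* virtio-pci:            mask = UINT64_MAX (keep the full IOVA)
     * virtio-user on 32-bit: mask = UINT32_MAX (drop the upper bytes) */
    #define VIRTIO_MBUF_ADDR(mb, vq) \
            ((*(uint64_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset)) & \
             (vq)->mbuf_addr_mask)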

Regards,
Maxime

On 9/13/23 15:24, Roger Melton (rmelton) wrote:
> +Chris Brezovec
> 
> Hi Maxime,
> 
> Chris from our team is attending the DPDK Summit in Dublin this week.  
> If you have some time available, we'd appreciate it if he could meet 
> with you to discuss the 32-bit virtio issue we are seeing.
> 
> Regards,
> Roger Melton
> 
> On 9/6/23 2:57 PM, Dave Johnson (davejo) wrote:
>>
>> Hi Maxime,
>>
>> This email is regarding the following commit:
>>
>> https://github.com/DPDK/dpdk/commit/ba55c94a7ebc386d2288d6578ed57aad6cb92657
>>
>> A query had been sent previously on this topic (see below) indicating 
>> this commit appears to have broken the 32-bit testpmd app and impacted 
>> one of our products that runs as a 32-bit DPDK application.  We 
>> consequently backed the commit out of our product but would prefer to 
>> get a fix for it.  In the earlier exchange, you had asked if we were 
>> using virtio-pci or virtio-user (we are using virtio-pci) and asked 
>> for logs, which Sampath provided.  It’s been a while, so let me know if 
>> you need me to resend those logs or any other information.
>>
>> FWIW, I reproduced this using testpmd and noticed that this part of 
>> the change (in drivers/net/virtio/virtqueue.h) seems to be the 
>> interesting one:
>>
>> /**
>>  * Return the IOVA (or virtual address in case of virtio-user) of mbuf
>>  * data buffer.
>>  *
>>  * The address is firstly casted to the word size (sizeof(uintptr_t))
>>  * before casting it to uint64_t. This is to make it work with different
>>  * combination of word size (64 bit and 32 bit) and virtio device
>>  * (virtio-pci and virtio-user).
>>  */
>> #define VIRTIO_MBUF_ADDR(mb, vq) \
>>       ((uint64_t)(*(uintptr_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset)))
>>
>> If I revert just this part of the changeset (by redefining 
>> VIRTIO_MBUF_ADDR to return buf_iova, matching what it used 
>> previously), then 32-bit testpmd is able to receive traffic again:
>>
>> #define VIRTIO_MBUF_ADDR(mb, vq) (mb->buf_iova)
>>
>> Looking at the addresses produced by each of these, I see they are the 
>> same except that the cast clears the upper bits (a minimal check 
>> follows the list below):
>>
>> Address from patch (nonworking case) = 0x58e7c900
>>
>> Address using buf_iova (working case) = 0x158e7c900
>>
>> ::
>>
>> Address from patch (nonworking case) = 0x58e7bfc0
>>
>> Address using buf_iova (working case) = 0x158e7bfc0
>>
>> ::
>>
>> Address from patch (nonworking case) = 0x58e7b680
>>
>> Address using buf_iova (working case) = 0x158e7b680
>>
>> ::
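>>
>> A minimal check of that truncation (assuming a 32-bit build where 
>> uintptr_t is 32 bits; the value is the working-case address above):
>>
>>     #include <stdint.h>
>>     #include <stdio.h>
>>
>>     int main(void)
>>     {
>>         uint64_t iova = 0x158e7c900ULL;  /* full 64-bit IOVA */
>>
>>         /* Converting through uintptr_t keeps only the low 32 bits,
>>          * which models what the word-sized load in the macro does
>>          * on a little-endian 32-bit build. */
>>         uint64_t truncated = (uint64_t)(uintptr_t)iova;
>>
>>         printf("0x%llx -> 0x%llx\n", (unsigned long long)iova,
>>                (unsigned long long)truncated);
>>         /* On such a build: 0x158e7c900 -> 0x58e7c900 */
>>         return 0;
>>     }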
>>
>> Regards, Dave
>>
>> *From: *Sampath Peechu (speechu) <speechu@cisco.com>
>> *Date: *Monday, January 30, 2023 at 3:29 PM
>> *To: *Maxime Coquelin <maxime.coquelin@redhat.com>, 
>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>, Malcolm Bumgardner 
>> (mbumgard) <mbumgard@cisco.com>
>> *Subject: *Re: Commit broke 32-bit testpmd app
>>
>> Hi Maxime,
>>
>> Could you please let us know if you got a chance to look at the debugs 
>> logs I provided?
>>
>> Thanks,
>>
>> Sampath
>>
>> *From: *Sampath Peechu (speechu) <speechu@cisco.com>
>> *Date: *Tuesday, December 6, 2022 at 1:08 PM
>> *To: *Maxime Coquelin <maxime.coquelin@redhat.com>, 
>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>> *Subject: *Re: Commit broke 32-bit testpmd app
>>
>> Hi Maxime,
>>
>> Did you get a chance to look into this?
>>
>> Please let me know if you need anything else.
>>
>> Thanks,
>>
>> Sampath
>>
>> *From: *Sampath Peechu (speechu) <speechu@cisco.com>
>> *Date: *Wednesday, November 23, 2022 at 5:15 PM
>> *To: *Maxime Coquelin <maxime.coquelin@redhat.com>, 
>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>> *Subject: *Re: Commit broke 32-bit testpmd app
>>
>> Hi Maxime,
>>
>> I’m attaching the following for reference.
>>
>>   * Instructions for Centos8 test setup
>>   * Diffs between the working and non-working versions (working
>>     version has the problem commit backed out)
>>   * Working logs (stats show that ping packets from neighbor VM can be
>>     seen with both 64-bit and 32-bit apps)
>>   * Non-working logs (stats show that ping packets from neighbor VM
>>     are seen with 64-bit app but NOT seen with 32-bit app)
>>
>> ============================
>>
>> $ sudo ./usertools/dpdk-devbind.py --status
>>
>> Network devices using DPDK-compatible driver
>>
>> ============================================
>>
>> 0000:07:00.0 'Virtio network device 1041' drv=igb_uio unused=
>>
>> 0000:08:00.0 'Virtio network device 1041' drv=igb_uio unused=
>>
>> Network devices using kernel driver
>>
>> ===================================
>>
>> 0000:01:00.0 'Virtio network device 1041' if=enp1s0 drv=virtio-pci 
>> unused=igb_uio *Active*
>>
>> …
>>
>> ===========================
>>
>> Thanks,
>>
>> Sampath
>>
>> *From: *Maxime Coquelin <maxime.coquelin@redhat.com>
>> *Date: *Tuesday, November 22, 2022 at 4:24 AM
>> *To: *Sampath Peechu (speechu) <speechu@cisco.com>, 
>> chenbo.xia@intel.com <chenbo.xia@intel.com>, dev@dpdk.org <dev@dpdk.org>
>> *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>> *Subject: *Re: Commit broke 32-bit testpmd app
>>
>> Hi,
>>
>> In my initial reply (see below), I also asked if you had logs to share,
>> and wondered whether it happens with Virtio-pci or Virtio-user.
>>
>> Regards,
>> Maxime
>>
>> On 11/16/22 00:30, Sampath Peechu (speechu) wrote:
>> > ++ dev@dpdk.org <mailto:dev@dpdk.org>
>> >
>> > *From: *Maxime Coquelin <maxime.coquelin@redhat.com>
>> > *Date: *Tuesday, November 15, 2022 at 3:19 AM
>> > *To: *Sampath Peechu (speechu) <speechu@cisco.com>, 
>> > chenbo.xia@intel.com <chenbo.xia@intel.com>
>> > *Cc: *Roger Melton (rmelton) <rmelton@cisco.com>
>> > *Subject: *Re: Commit broke 32-bit testpmd app
>> >
>> > Hi Sampath,
>> >
>> >
>> > Please add dev@dpdk.org, the upstream mailing list, if this is related
>> > to the upstream DPDK project. If it is using the RHEL DPDK package,
>> > please use the appropriate support channels.
>> >
>> > On 11/14/22 23:55, Sampath Peechu (speechu) wrote:
>> >  > Hi Virtio Maintainers team,
>> >  >
>> >  > This email is regarding the following commit.
>> >  >
>> >  >
>> >  > https://github.com/DPDK/dpdk/commit/ba55c94a7ebc386d2288d6578ed57aad6cb92657
>> >  >
>> >  > The above commit appears to have broken the 32-bit testpmd app (and
>> >  > consequently impacted one of our products that runs as a 32-bit DPDK
>> >  > app). The 64-bit testpmd app does not appear to be impacted though.
>> >
>> > We'll need some logs to understand what is going on.
>> > Does it happen with virtio-pci or virtio-user?
>> >
>> > Regards,
>> > Maxime
>> >
>> >  > With the commit in place, we didn’t see any packets going through at
>> >  > all. After backing out the commit and rebuilding the 32-bit testpmd
>> >  > app in our test setup, we were able to pass traffic as expected.
>> >  >
>> >  > Could you please let us know if this is a known issue? And if there
>> >  > is a fix available for it?
>> >  >
>> >  > Thank you,
>> >  >
>> >  > Sampath Peechu
>> >  >
>> >  > Cisco Systems
>> >  >
>> >
>>
> 

[-- Attachment #2: virtio_32bits.log --]
[-- Type: text/x-log, Size: 9729 bytes --]



    [root@localhost dpdk]# file ./build/app/dpdk-testpmd
    ./build/app/dpdk-testpmd: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, BuildID[sha1]=55cf73daf9530b97290b252e48da9390163d42b7, for GNU/Linux 3.2.0, with debug_info, not stripped
    [root@localhost dpdk]# uname -a
    Linux localhost.localdomain 6.2.9-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 30 22:32:58 UTC 2023 x86_64 GNU/Linux
    [root@localhost dpdk]# ./build/app/dpdk-testpmd -- -i
    EAL: Detected CPU lcores: 3
    EAL: Detected NUMA nodes: 1
    EAL: Detected static linkage of DPDK
    EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
    EAL: Selected IOVA mode 'PA'
    EAL: VFIO support initialized
    EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:01:00.0 (socket -1)
    eth_virtio_pci_init(): Failed to init PCI device
    EAL: Requested device 0000:01:00.0 cannot be used
    EAL: Probe PCI driver: net_virtio (1af4:1041) device: 0000:07:00.0 (socket -1)
    EAL: Using IOMMU type 8 (No-IOMMU)
    TELEMETRY: No legacy callbacks, legacy socket not created
    Interactive-mode selected
    Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
    testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
    testpmd: preferred mempool ops selected: ring_mp_mc
     
    Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
     
    Configuring Port 0 (socket 0)
    EAL: Error disabling MSI-X interrupts for fd 24
    Port 0: 56:48:4F:53:54:01
    Checking link statuses...
    Done
    testpmd> set fwd icmpecho
    Set icmpecho packet forwarding mode
    testpmd> start
    icmpecho packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
    Logical Core 1 (socket 0) forwards packets on 1 streams:
      RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
     
      icmpecho packet forwarding packets/burst=32
      nb forwarding cores=1 - nb forwarding ports=1
      port 0: RX queue number: 1 Tx queue number: 1
        Rx offloads=0x0 Tx offloads=0x0
        RX queue: 0
          RX desc=0 - RX free threshold=0
          RX threshold registers: pthresh=0 hthresh=0  wthresh=0
          RX Offloads=0x0
        TX queue: 0
          TX desc=0 - TX free threshold=0
          TX threshold registers: pthresh=0 hthresh=0  wthresh=0
          TX offloads=0x0 - TX RS bit threshold=0
    testpmd> set verbose 9
    Change verbose level from 0 to 9
    testpmd> port 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=223
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
    port 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=224
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
    port 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=225
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
    port 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=226
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
    port 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=227
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
    port 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=228
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
    port 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=229
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
    stopport 0/queue 0: received 1 packets
      src=DE:D0:06:AC:DC:3B - dst=56:48:4F:53:54:01 - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
      ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
     
    Port 0 pkt-len=98 nb-segs=1
      ETH:  src=DE:D0:06:AC:DC:3B dst=56:48:4F:53:54:01 type=0x0800
      IPV4: src=192.168.101.2 dst=192.168.101.3 proto=1 (ICMP)
      ICMP: echo request seq id=230
    port 0/queue 0: sent 1 packets
      src=56:48:4F:53:54:01 - dst=DE:D0:06:AC:DC:3B - pool=mb_pool_0 - type=0x0800 - length=98 - nb_segs=1 - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
      ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM
     
    Telling cores to stop...
    Waiting for lcores to finish...
     
      ---------------------- Forward statistics for port 0  ----------------------
      RX-packets: 15             RX-dropped: 0             RX-total: 15
      TX-packets: 15             TX-dropped: 0             TX-total: 15
      ----------------------------------------------------------------------------
     
      +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
      RX-packets: 15             RX-dropped: 0             RX-total: 15
      TX-packets: 15             TX-dropped: 0             TX-total: 15
      ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
     
    Done.
    testpmd>



