DPDK patches and discussions
* [Bug 961] PCAP PMD receives packets larger than MTU
From: bugzilla @ 2022-03-16 13:53 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=961

            Bug ID: 961
           Summary: PCAP PMD receives packets larger than MTU
           Product: DPDK
           Version: 21.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: ido@cgstowernetworks.com
  Target Milestone: ---

Here is an example where the MTU (max-pkt-len) is set to ~300 B, yet the driver
receives a packet of ~9000 B (the jumbo pcap is attached):

root@u18c_9td3:/home/cgs/workspace/master/jumbo# ./dpdk-testpmd --no-huge
-m1024 -l 0-3 -n 4 --vdev
'net_pcap0,rx_pcap=jumbo_9000.pcap,tx_pcap=file_tx.pcap' -- --no-flush-rx
--total-num-mbufs=2048  --max-pkt-len=300 -i
EAL: Detected CPU lcores: 6
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.

Configuring Port 0 (socket 0)
Port 0: 02:70:63:61:70:00
Checking link statuses...
Done
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 02:70:63:61:70:00
Device name: net_pcap0
Driver name: net_pcap
Firmware-version: not available
Devargs: rx_pcap=jumbo_9000.pcap,tx_pcap=file_tx.pcap
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10 Gbps
Link duplex: full-duplex
Autoneg status: Off
MTU: 282
Promiscuous mode: enabled
Allmulticast mode: enabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
No RSS offload flow type is supported.
Minimum size of RX buffer: 0
Maximum configurable length of RX packet: 4294967295
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
Device capabilities: 0x0( )
testpmd> set  verbose 3
Change verbose level from 0 to 3
testpmd> start
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP
allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> port 0/queue 0: received 1 packets
  src=04:F4:BC:3E:E4:25 - dst=00:00:00:00:00:00 - type=0x0800 - length=8996 -
nb_segs=5 - timestamp 1266577613728443  - sw ptype: L2_ETHER L3_IPV4  -
l2_len=14 - l3_len=20 - Receive queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN 
port 0/queue 0: sent 1 packets
  src=04:F4:BC:3E:E4:25 - dst=00:00:00:00:00:00 - type=0x0800 - length=8996 -
nb_segs=5 - timestamp 1266577613728443  - sw ptype: L2_ETHER L3_IPV4  -
l2_len=14 - l3_len=20 - Send queue=0x0
  ol_flags: RTE_MBUF_F_TX_L4_NO_CKSUM 

testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 1              TX-dropped: 0             TX-total: 1
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 1              TX-dropped: 0             TX-total: 1
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 1          RX-missed: 0          RX-bytes:  8996
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 1          TX-errors: 0          TX-bytes:  8996

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Port 0 is closed
Done

Bye...


There are further issues for applications (other than testpmd) that use the
pcap driver: they may unexpectedly receive segmented mbufs without ever having
requested RTE_ETH_RX_OFFLOAD_SCATTER.
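
As a workaround, the application can police the received burst itself. Below is
a minimal sketch (not from the bug report; the helper name, burst size, and the
assumption that no VLAN tag is present are illustrative) that frees any mbuf
exceeding the MTU-derived frame limit, or arriving segmented even though
RTE_ETH_RX_OFFLOAD_SCATTER was never requested:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32   /* illustrative burst size */

/* Receive one burst and discard frames the port was not configured to accept. */
static uint16_t
rx_burst_enforce_mtu(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **bufs)
{
	uint16_t mtu, nb_rx, kept = 0;
	uint32_t max_frame;

	/* Frame limit = MTU + Ethernet header (VLAN tags ignored in this sketch). */
	if (rte_eth_dev_get_mtu(port_id, &mtu) != 0)
		return 0;
	max_frame = (uint32_t)mtu + RTE_ETHER_HDR_LEN;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);

	for (uint16_t i = 0; i < nb_rx; i++) {
		struct rte_mbuf *m = bufs[i];

		/* Oversize, or multi-segment although RTE_ETH_RX_OFFLOAD_SCATTER
		 * was not requested: free it instead of passing it on. */
		if (m->pkt_len > max_frame || m->nb_segs > 1) {
			rte_pktmbuf_free(m);
			continue;
		}
		bufs[kept++] = m;
	}
	return kept;   /* only MTU-conforming packets remain in bufs[0..kept-1] */
}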


* [Bug 961] PCAP PMD receives packets larger than MTU
From: bugzilla @ 2023-07-04 17:43 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=961

Stephen Hemminger (stephen@networkplumber.org) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |stephen@networkplumber.org
         Resolution|---                         |WONTFIX
             Status|UNCONFIRMED                 |RESOLVED

--- Comment #4 from Stephen Hemminger (stephen@networkplumber.org) ---
This is not a bug. Other drivers allow bigger frames.

It is part of the overall confusion about what MTU means in an OS.
In Linux and FreeBSD, it is the maximum transmission unit, and it also
serves as the minimum receive unit for device drivers.

Many device drivers will receive oversize packets and leave it up to the upper
layers to handle them. It is the device-driver version of the robustness
principle (aka Postel's Law).
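
For reference, the "leave it to the upper layers" approach can be implemented
in a DPDK application with an RX callback. A hedged sketch follows (it assumes
a build with RTE_ETHDEV_RXTX_CALLBACKS enabled; the function names and the
per-queue registration are illustrative): the callback drops frames above the
MTU-derived limit before rte_eth_rx_burst() returns them to the application.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

static uint32_t max_frame;   /* MTU + RTE_ETHER_HDR_LEN, filled in at init */

/* RX callback: compact the burst, freeing anything longer than the limit. */
static uint16_t
drop_oversize_cb(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf *pkts[], uint16_t nb_pkts,
		 uint16_t max_pkts, void *user_param)
{
	uint32_t limit = *(const uint32_t *)user_param;
	uint16_t kept = 0;

	(void)port_id;
	(void)queue_id;
	(void)max_pkts;

	for (uint16_t i = 0; i < nb_pkts; i++) {
		if (pkts[i]->pkt_len > limit) {
			rte_pktmbuf_free(pkts[i]);
			continue;
		}
		pkts[kept++] = pkts[i];
	}
	return kept;   /* rte_eth_rx_burst() reports only the kept packets */
}

/* Register the callback, e.g. once per RX queue after rte_eth_rx_queue_setup(). */
static int
install_oversize_filter(uint16_t port_id, uint16_t queue_id)
{
	uint16_t mtu;

	if (rte_eth_dev_get_mtu(port_id, &mtu) != 0)
		return -1;
	max_frame = (uint32_t)mtu + RTE_ETHER_HDR_LEN;

	if (rte_eth_add_rx_callback(port_id, queue_id,
				    drop_oversize_cb, &max_frame) == NULL)
		return -1;
	return 0;
}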
