DPDK patches and discussions
* Re: GCP cloud : Virtio-PMD performance Issue
       [not found]   ` <ed76028a-819e-4918-9529-5aedc4762148@redhat.com>
@ 2024-12-05 22:54     ` Mukul Sinha
  2024-12-05 22:58       ` Mukul Sinha
  2024-12-06  8:23       ` Maxime Coquelin
  0 siblings, 2 replies; 6+ messages in thread
From: Mukul Sinha @ 2024-12-05 22:54 UTC (permalink / raw)
  To: Maxime Coquelin, dev
  Cc: chenbox, jeroendb, rushilg, joshwash, Srinivasa Srikanth Podila,
	Tathagat Priyadarshi, Samar Yadav, Varun LA


Thanks @maxime.coquelin@redhat.com
Have included dev@dpdk.org


On Fri, Dec 6, 2024 at 2:11 AM Maxime Coquelin <maxime.coquelin@redhat.com>
wrote:

> Hi Mukul,
>
> DPDK upstream mailing lists should be added to this e-mail.
> I am not allowed to provide off-list support, all discussions should
> happen upstream.
>
> If this is reproduced with the downstream DPDK provided with RHEL and you
> have a RHEL subscription, please use the Red Hat issue tracker.
>
> Thanks for your understanding,
> Maxime
>
> On 12/5/24 21:36, Mukul Sinha wrote:
> > + Varun
> >
> > On Fri, Dec 6, 2024 at 2:04 AM Mukul Sinha <mukul.sinha@broadcom.com
> > <mailto:mukul.sinha@broadcom.com>> wrote:
> >
> >     Hi GCP & Virtio-PMD dev teams,
> >     We are from the VMware NSX Advanced Load Balancer team. On GCP cloud
> >     (*custom-8-8192 VM instance type, 8 cores / 8 GB*) we are triaging a
> >     TCP-profile application throughput issue with a single dispatcher core
> >     and a single Rx/Tx queue (queue depth: 2048): the throughput we get
> >     with the dpdk-22.11 virtio-PMD code is significantly degraded compared
> >     to the dpdk-20.05 PMD.
> >     We see the Tx packet drop counter on the virtio NIC incrementing
> >     rapidly, pointing to the GCP hypervisor side being unable to drain the
> >     packets fast enough (no drops are seen on the Rx side).
> >     The behavior is as follows:
> >     _Using dpdk-22.11_
> >     Already at 75% CPU usage we see a huge number of Tx packet drops
> >     reported (no Rx drops), causing TCP retransmissions and eventually
> >     bringing down the effective throughput.
> >     _Using dpdk-20.05_
> >     Even at ~95% CPU usage, with no packet drops at all (neither Rx nor
> >     Tx), we get a much better throughput.
> >
> >     To improve the dpdk-22.11 numbers we tried increasing the queue depth
> >     to 4096, but that didn't help.
> >     If with dpdk-22.11 we move from a single core with Rx/Tx queue=1 to a
> >     single core with Rx/Tx queue=2, we get slightly better numbers (but
> >     still not matching dpdk-20.05 with a single core and Rx/Tx queue=1).
> >     This again corroborates that the GCP hypervisor is the bottleneck here.
> >
> >     To root-cause this issue, we were able to replicate the behavior with
> >     native DPDK testpmd, using the commands below:
> >     Hugepage size: 2 MB
> >       ./app/dpdk-testpmd -l 0-1 -n 1 -- -i --nb-cores=1 --txd=2048
> >     --rxd=2048 --rxq=1 --txq=1  --portmask=0x3
> >     set fwd mac
> >     set fwd flowgen
> >     set txpkts 1518
> >     start
> >     stop
> >
> >     Testpmd traffic run (packet size = 1518) for the exact same 15-second interval:
> >
> >     _22.11_
> >     ---------------------- Forward statistics for port 0 ----------------------
> >     RX-packets: 2              RX-dropped: 0             RX-total: 2
> >     TX-packets: 19497570       *TX-dropped: 364674686*   TX-total: 384172256
> >     ----------------------------------------------------------------------------
> >     _20.05_
> >     ---------------------- Forward statistics for port 0 ----------------------
> >     RX-packets: 3              RX-dropped: 0             RX-total: 3
> >     TX-packets: 19480319       TX-dropped: 0             TX-total: 19480319
> >     ----------------------------------------------------------------------------
> >
> >     As you can see:
> >     dpdk-22.11: packets generated: 384 million, packets serviced:
> >     ~19.5 million, Tx-dropped: 364 million
> >     dpdk-20.05: packets generated: ~19.5 million, packets serviced:
> >     ~19.5 million, Tx-dropped: 0
> >
> >     The actual serviced traffic remains almost the same between the two
> >     versions (implying the underlying GCP hypervisor is only capable of
> >     handling that much), but with dpdk-22.11 the PMD is pushing almost 20x
> >     the traffic compared to dpdk-20.05.
> >     The same pattern can be seen even if we run traffic for a longer
> >     duration.
> >
>  ===============================================================================================
> >
> >     Our queries are as follows:
> >     @ Virtio-dev team
> >     1. Why, with dpdk-22.11 and the virtio PMD, is the testpmd application
> >     able to pump 20 times more Tx traffic towards the hypervisor than with
> >     dpdk-20.05?
> >     What has changed, either in the virtio-PMD itself or in the
> >     virtio-PMD/hypervisor communication, to cause this behavior?
> >     The traffic actually serviced by the hypervisor remains almost on par
> >     with dpdk-20.05, but the huge drop count can be detrimental for any
> >     DPDK application running a TCP traffic profile.
> >     Is there a way to slow down the number of packets sent towards the
> >     hypervisor (through either a code change in the virtio-PMD or a config
> >     setting) and make it on par with the dpdk-20.05 behavior?
> >     2. The published Virtio performance report for release 22.11 shows no
> >     qualification of throughput numbers on GCP cloud. Do you have any
> >     internal performance benchmark numbers for GCP cloud, and if so, could
> >     you share them so we can check whether there are any
> >     configs/knobs/settings you used to get optimum performance?
> >
> >     @ GCP-cloud dev team
> >     Any traffic beyond what the GCP hypervisor can successfully service is
> >     getting dropped, so we need your help to reproduce this issue in your
> >     in-house setup, preferably using the same VM instance type as
> >     highlighted above.
> >     We also need investigation from the GCP host side: for example, whether
> >     the virtio NIC is running out of Tx buffers or hitting queue-full
> >     conditions, or how many NIC Rx/Tx kernel threads are involved, to
> >     understand why the hypervisor cannot keep up with the traffic load
> >     pumped by dpdk-22.11.
> >     Based on your debugging, we would additionally need inputs on what can
> >     be tweaked or which knobs/settings can be configured at the GCP-VM
> >     level to get better performance numbers.
> >
> >     Please feel free to reach out to us for any further queries.
> >
> >     _Additional outputs for debugging:_
> >     lspci | grep Eth
> >     00:06.0 Ethernet controller: Red Hat, Inc. Virtio network device
> >     root@dcg15-se-ecmyw:/home/admin/dpdk/build# ethtool -i eth0
> >     driver: virtio_net
> >     version: 1.0.0
> >     firmware-version:
> >     expansion-rom-version:
> >     bus-info: 0000:00:06.0
> >     supports-statistics: yes
> >     supports-test: no
> >     supports-eeprom-access: no
> >     supports-register-dump: no
> >     supports-priv-flags: no
> >
> >     testpmd> show port info all
> >     ********************* Infos for port 0  *********************
> >     MAC address: 42:01:0A:98:A0:0F
> >     Device name: 0000:00:06.0
> >     Driver name: net_virtio
> >     Firmware-version: not available
> >     Connect to socket: 0
> >     memory allocation on the socket: 0
> >     Link status: up
> >     Link speed: Unknown
> >     Link duplex: full-duplex
> >     Autoneg status: On
> >     MTU: 1500
> >     Promiscuous mode: disabled
> >     Allmulticast mode: disabled
> >     Maximum number of MAC addresses: 64
> >     Maximum number of MAC addresses of hash filtering: 0
> >     VLAN offload:
> >        strip off, filter off, extend off, qinq strip off
> >     No RSS offload flow type is supported.
> >     Minimum size of RX buffer: 64
> >     Maximum configurable length of RX packet: 9728
> >     Maximum configurable size of LRO aggregated packet: 0
> >     Current number of RX queues: 1
> >     Max possible RX queues: 2
> >     Max possible number of RXDs per queue: 32768
> >     Min possible number of RXDs per queue: 32
> >     RXDs number alignment: 1
> >     Current number of TX queues: 1
> >     Max possible TX queues: 2
> >     Max possible number of TXDs per queue: 32768
> >     Min possible number of TXDs per queue: 32
> >     TXDs number alignment: 1
> >     Max segment number per packet: 65535
> >     Max segment number per MTU/TSO: 65535
> >     Device capabilities: 0x0( )
> >     Device error handling mode: none
> >
> >
> >


* Re: GCP cloud : Virtio-PMD performance Issue
  2024-12-05 22:54     ` GCP cloud : Virtio-PMD performance Issue Mukul Sinha
@ 2024-12-05 22:58       ` Mukul Sinha
  2024-12-06  8:23       ` Maxime Coquelin
  1 sibling, 0 replies; 6+ messages in thread
From: Mukul Sinha @ 2024-12-05 22:58 UTC (permalink / raw)
  To: Maxime Coquelin, dev
  Cc: chenbox, jeroendb, rushilg, joshwash, Srinivasa Srikanth Podila,
	Tathagat Priyadarshi, Samar Yadav, Varun LA


GCP-dev team @jeroendb@google.com @rushilg@google.com @joshwash@google.com
Please do check on this & get back.

On Fri, Dec 6, 2024 at 4:24 AM Mukul Sinha <mukul.sinha@broadcom.com> wrote:

> [...]

* Re: GCP cloud : Virtio-PMD performance Issue
  2024-12-05 22:54     ` GCP cloud : Virtio-PMD performance Issue Mukul Sinha
  2024-12-05 22:58       ` Mukul Sinha
@ 2024-12-06  8:23       ` Maxime Coquelin
  2024-12-09 15:37         ` Mukul Sinha
  1 sibling, 1 reply; 6+ messages in thread
From: Maxime Coquelin @ 2024-12-06  8:23 UTC (permalink / raw)
  To: Mukul Sinha, dev
  Cc: chenbox, jeroendb, rushilg, joshwash, Srinivasa Srikanth Podila,
	Tathagat Priyadarshi, Samar Yadav, Varun LA

Hi Mukul,

On 12/5/24 23:54, Mukul Sinha wrote:
> [...]


I don't know what your issue is, but this is not something we noticed
using QEMU/KVM as hypervisor with Vhost-user backend.

I would suggest you run a git bisect to pinpoint the specific commit
introducing this regression.

Also, you could run perf top in the guest on both 20.05 and 22.11; maybe
we could spot something in it.
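
For reference, a bisect run could look roughly like the sketch below
(assuming an upstream DPDK checkout with the v20.05 and v22.11 release tags
and a Meson build; adapt paths and build options to your environment). Each
step rebuilds and repeats the 15-second testpmd flowgen test:

  git bisect start
  git bisect bad v22.11             # release where TX-dropped explodes
  git bisect good v20.05            # release where TX-dropped stays at 0
  # git checks out a candidate commit; rebuild and rerun the test
  rm -rf build && meson setup build && ninja -C build
  ./build/app/dpdk-testpmd -l 0-1 -n 1 -- -i --nb-cores=1 --txd=2048 \
      --rxd=2048 --rxq=1 --txq=1 --portmask=0x3
  # mark the result so git can narrow the range
  git bisect good                   # or: git bisect bad
  git bisect skip                   # if a candidate commit fails to build
  # when git reports the first bad commit
  git bisect log
  git bisect reset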

Regards,
Maxime


* Re: GCP cloud : Virtio-PMD performance Issue
  2024-12-06  8:23       ` Maxime Coquelin
@ 2024-12-09 15:37         ` Mukul Sinha
  0 siblings, 0 replies; 6+ messages in thread
From: Mukul Sinha @ 2024-12-09 15:37 UTC (permalink / raw)
  To: Maxime Coquelin
  Cc: dev, chenbox, jeroendb, rushilg, joshwash,
	Srinivasa Srikanth Podila, Tathagat Priyadarshi, Samar Yadav,
	Varun LA



Hi Maxime,
We have run perf top on dpdk-20.05 vs dpdk-22.11, but there is no notable
difference in the top-hitting functions. In our analysis, virtio-PMD CPU
performance is not the bottleneck (in fact it is more performant now); it is
the GCP hypervisor that is unable to cope with 3 times the Tx traffic load.
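
For reference, this is the kind of comparison such profiling typically
involves (a sketch only; core 1 is assumed to be the forwarding lcore from
the testpmd command line above, and perf is assumed to be available in the
guest):

  # live view of the hottest functions on the forwarding core
  perf top -C 1
  # counters and a call-graph sample over the same 15-second test window
  perf stat -a -C 1 -- sleep 15
  perf record -a -C 1 -g -- sleep 15
  perf report --stdio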

On Fri, Dec 6, 2024 at 1:54 PM Maxime Coquelin <maxime.coquelin@redhat.com>
wrote:

> [...]
>
> I don't know what your issue is, but this is not something we noticed
> using QEMU/KVM as hypervisor with Vhost-user backend.
>
> I would suggest you run a git bisect to pinpoint to the specific commit
> introducing this regression.
>
> Also, you could run perf top in the guest on both 20.05 and 22.11, maybe
> we could spot something in it.
>
> Regards,
> Maxime
>

* Re: GCP cloud : Virtio-PMD performance Issue
       [not found]       ` <c714b248-971c-4c88-ad8a-f451607ed7ae@redhat.com>
@ 2024-12-13 10:47         ` Maxime Coquelin
  2024-12-16 11:04           ` Mukul Sinha
  0 siblings, 1 reply; 6+ messages in thread
From: Maxime Coquelin @ 2024-12-13 10:47 UTC (permalink / raw)
  To: Mukul Sinha, Joshua Washington
  Cc: chenbox, Jeroen de Borst, Rushil Gupta,
	Srinivasa Srikanth Podila, Tathagat Priyadarshi, Samar Yadav,
	Varun LA, dev

(re-adding the DPDK ML, which got removed)

On 12/13/24 11:46, Maxime Coquelin wrote:
> 
> 
> On 12/13/24 11:21, Mukul Sinha wrote:
>> Thanks @joshwash@google.com and @Maxime Coquelin for the inputs.
>>
>> @Maxime Coquelin
>> I did code bisecting and was able to pinpoint, through testpmd runs,
>> that *we start seeing this issue from DPDK-21.11 onwards; up to
>> DPDK-21.08 the issue is not seen.*
>> To recap the issue: the actual amount of traffic serviced by the
>> hypervisor remains almost the same between the two versions (implying
>> the underlying GCP hypervisor is only capable of handling that much),
>> but in >=dpdk-21.11 the virtio-PMD is pushing almost 20x the traffic
>> compared to dpdk-21.08. This huge traffic rate in >=dpdk-21.11 leads to
>> high packet drop rates, since the underlying hypervisor can at most
>> handle the same load it was servicing in <=dpdk-21.08.
>> The same pattern can be seen even if we run traffic for a longer
>> duration.
>>
>> *_Eg:_*
>> Testpmd traffic run (packet size = 1518) for the exact same 15-second interval:
>>
>> _*>=21.11 DPDK version*_
>>    ---------------------- Forward statistics for port 0 ----------------------
>>    RX-packets: 2              RX-dropped: 0             RX-total: 2
>>    TX-packets: 19497570       *TX-dropped: 364674686*   TX-total: 384172256
>> ----------------------------------------------------------------------------
>> _*Up to 21.08 DPDK version*_
>>    ---------------------- Forward statistics for port 0 ----------------------
>>    RX-packets: 3              RX-dropped: 0             RX-total: 3
>>    TX-packets: 19480319       TX-dropped: 0             TX-total: 19480319
>> ----------------------------------------------------------------------------
>>
>> As you can see:
>> >=dpdk-21.11: packets generated: 384 million, packets serviced:
>> ~19.5 million, Tx-dropped: 364 million
>> <=dpdk-21.08: packets generated: ~19.5 million, packets serviced:
>> ~19.5 million, Tx-dropped: 0
>>
>> ==========================================================================
>> @Maxime Coquelin
>> I have gone through all the commits made by the virtio team between
>> DPDK-21.08 and DPDK-21.11, as per the commit logs available at
>> https://git.dpdk.org/dpdk/log/drivers/net/virtio
>> I even tried undoing all the possibly relevant commits I could think of
>> on a dpdk-21.11 workspace and then re-running testpmd, in order to
>> track down which commit introduced this regression, but no luck.
>> I need your further inputs: could you glance through the commits made
>> between these releases and let us know if there is any particular
>> commit of interest which you think could cause the behavior seen above
>> (or if there is any commit not captured in the above git link, perhaps
>> a commit outside the virtio PMD code)?
> 
> As your issue seems 100% reproducible, using git bisect you should be 
> able to point to the commit introducing the regression.
> 
> This is what I need to be able to help you.
> 
> Regards,
> Maxime
> 
>>
>> Thanks,
>> Mukul
>>
>>
>> On Mon, Dec 9, 2024 at 9:54 PM Joshua Washington <joshwash@google.com> wrote:
>>
>>     Hello,
>>
>>     Based on your VM shape (8 vCPU VM) and packet size (1518B packets),
>>     what you are seeing is exactly expected. 8 vCPU Gen 2 VMs have a
>>     default egress cap of 16 Gbps. This equates to roughly 1.3 Mpps when
>>     using 1518B packets, including IFG. Over the course of 15 seconds,
>>     19.5 million packets should be sent, which matches both cases. The
>>     difference here seems to be what happens in DPDK, not GCP. I don't
>>     believe that packet drops on the host NIC are captured in DPDK
>>     stats; likely the descriptor ring just filled up because the egress
>>     bandwidth cap was hit and queue servicing was throttled. This would
>>     cause a Tx burst to return fewer packets than the burst size. The
>>     difference between 20.05 and 22.11 might have to do with this
>>     reporting, or a change in testpmd logic for when to send new bursts
>>     of traffic.
>>
>>     Best,
>>     Josh
>>
>>
>>     On Mon, Dec 9, 2024, 07:39 Mukul Sinha <mukul.sinha@broadcom.com> wrote:
>>
>>         GCP-dev team @jeroendb@google.com @rushilg@google.com
>>         @joshwash@google.com
>>         Can you please check the following email & get back ?
>>
>>
>>         On Fri, Dec 6, 2024 at 2:04 AM Mukul Sinha <mukul.sinha@broadcom.com> wrote:
>>
>>             Hi GCP & Virtio-PMD dev teams,
>>             We are from VMware NSX Advanced Load Balancer Team whereby
>>             in GCP-cloud (*custom-8-8192 VM instance type 8core8G*) we
>>             are triaging an issue of TCP profile application throughput
>>             performance with single dispatcher core single Rx/Tx queue
>>             (queue depth: 2048) the throughput performance we get using
>>             dpdk-22.11 virtio-PMD code is degraded significantly when
>>             compared to when using dpdk-20.05 PMD
>>             We see high amount of Tx packet drop counter incrementing on
>>             virtio-NIC pointing to issue that the GCP hypervisor side is
>>             unable to drain the packets faster (No drops are seen on Rx
>>             side)
>>             The behavior is like this :
>>             _Using dpdk-22.11_
>>             At 75% CPU usage itself we start seeing huge number of Tx
>>             packet drops reported (no Rx drops) causing TCP
>>             restransmissions eventually bringing down the effective
>>             throughput numbers
>>             _Using dpdk-20.05_
>>             even at ~95% CPU usage without any packet drops (neither Rx
>>             nor Tx) we are able to get a much better throughput
>>
>>             To improve performance numbers with dpdk-22.11 we have tried
>>             increasing the queue depth to 4096 but that din't help.
>>             If with dpdk-22.11 we move from single core Rx/Tx queue=1 to
>>             single core Rx/Tx queue=2 we are able to get slightly better
>>             numbers (but still doesnt match the numbers obtained using
>>             dpdk-20.05 single core Rx/Tx queue=1). This again
>>             corroborates the fact the GCP hypervisor is the bottleneck
>>             here.
>>
>>             To root-cause this issue we were able to replicate this
>>             behavior using native DPDK testpmd as shown below (cmds 
>> used):-
>>             Hugepage size: 2 MB
>>               ./app/dpdk-testpmd -l 0-1 -n 1 -- -i --nb-cores=1
>>             --txd=2048 --rxd=2048 --rxq=1 --txq=1  --portmask=0x3
>>             set fwd mac
>>             set fwd flowgen
>>             set txpkts 1518
>>             start
>>             stop
>>
>>             Testpmd traffic run (for packet-size=1518) for exact same
>>             time-interval of 15 seconds:
>>
>>             _22.11_
>>                ---------------------- Forward statistics for port 0  ----------------------
>>                RX-packets: 2              RX-dropped: 0             RX-total: 2
>>                TX-packets: 19497570       *TX-dropped: 364674686*    TX-total: 384172256
>>                ----------------------------------------------------------------------------
>>             _20.05_
>>                ---------------------- Forward statistics for port 0  ----------------------
>>                RX-packets: 3              RX-dropped: 0             RX-total: 3
>>                TX-packets: 19480319       TX-dropped: 0             TX-total: 19480319
>>                ----------------------------------------------------------------------------
>>
>>             As you can see:
>>             dpdk-22.11
>>             Packets generated: 384 million; packets serviced: ~19.5
>>             million; Tx-dropped: 364 million
>>             dpdk-20.05
>>             Packets generated: ~19.5 million; packets serviced: ~19.5
>>             million; Tx-dropped: 0
>>
>>             The actual serviced traffic remains almost the same between the
>>             two versions (implying the underlying GCP hypervisor is only
>>             capable of handling that much), but with dpdk-22.11 the PMD is
>>             pushing almost 20x the traffic compared to dpdk-20.05.
>>             The same pattern can be seen even if we run traffic for a
>>             longer duration.
>>             
>> ===============================================================================================
>>
>>             Following are our queries:
>>             @ Virtio-dev team
>>             1. Why, with dpdk-22.11 and the virtio PMD, is the testpmd
>>             application able to pump 20 times the Tx traffic towards the
>>             hypervisor compared to dpdk-20.05?
>>             What has changed, either in the virtio-PMD itself or in the
>>             virtio-PMD to hypervisor communication, that causes this
>>             behavior?
>>             The traffic actually serviced by the hypervisor remains almost
>>             on par with dpdk-20.05, but it is the huge packet drop count
>>             which can be detrimental for any DPDK application running a
>>             TCP traffic profile.
>>             Is there a way to slow down the number of packets sent
>>             towards the hypervisor (through either a code change in the
>>             virtio-PMD or a config setting) and make it on par with
>>             dpdk-20.05 behavior?
>>             2. In the published Virtio performance report for release 22.11
>>             we see no qualification of throughput numbers on GCP cloud.
>>             Do you have any internal performance benchmark numbers for
>>             GCP cloud, and if yes, can you please share them with us so
>>             that we can check whether there are any
>>             configs/knobs/settings you used to get optimum performance?
>>
>>             @ GCP-cloud dev team
>>             As we can see, any traffic beyond what the GCP hypervisor can
>>             successfully service is getting dropped, hence we need help
>>             from your side to reproduce this issue in your in-house setup,
>>             preferably using the same VM instance type as highlighted
>>             before.
>>             We also need investigation from the GCP host side to check
>>             parameters such as running out of Tx buffers, queue-full
>>             conditions on the virtio-NIC, or the number of NIC Rx/Tx
>>             kernel threads, to determine what is preventing the hypervisor
>>             from keeping up with the traffic load pumped by dpdk-22.11.
>>             Based on your debugging we would additionally need inputs on
>>             what can be tweaked, or which knobs/settings can be configured
>>             at the GCP-VM level, to get better performance numbers.
>>
>>             Please feel free to reach out to us for any further queries.
>>
>>             _Additional outputs for debugging:_
>>             lspci | grep Eth
>>             00:06.0 Ethernet controller: Red Hat, Inc. Virtio network 
>> device
>>             root@dcg15-se-ecmyw:/home/admin/dpdk/build# ethtool -i eth0
>>             driver: virtio_net
>>             version: 1.0.0
>>             firmware-version:
>>             expansion-rom-version:
>>             bus-info: 0000:00:06.0
>>             supports-statistics: yes
>>             supports-test: no
>>             supports-eeprom-access: no
>>             supports-register-dump: no
>>             supports-priv-flags: no
>>
>>             testpmd> show port info all
>>             ********************* Infos for port 0  *********************
>>             MAC address: 42:01:0A:98:A0:0F
>>             Device name: 0000:00:06.0
>>             Driver name: net_virtio
>>             Firmware-version: not available
>>             Connect to socket: 0
>>             memory allocation on the socket: 0
>>             Link status: up
>>             Link speed: Unknown
>>             Link duplex: full-duplex
>>             Autoneg status: On
>>             MTU: 1500
>>             Promiscuous mode: disabled
>>             Allmulticast mode: disabled
>>             Maximum number of MAC addresses: 64
>>             Maximum number of MAC addresses of hash filtering: 0
>>             VLAN offload:
>>                strip off, filter off, extend off, qinq strip off
>>             No RSS offload flow type is supported.
>>             Minimum size of RX buffer: 64
>>             Maximum configurable length of RX packet: 9728
>>             Maximum configurable size of LRO aggregated packet: 0
>>             Current number of RX queues: 1
>>             Max possible RX queues: 2
>>             Max possible number of RXDs per queue: 32768
>>             Min possible number of RXDs per queue: 32
>>             RXDs number alignment: 1
>>             Current number of TX queues: 1
>>             Max possible TX queues: 2
>>             Max possible number of TXDs per queue: 32768
>>             Min possible number of TXDs per queue: 32
>>             TXDs number alignment: 1
>>             Max segment number per packet: 65535
>>             Max segment number per MTU/TSO: 65535
>>             Device capabilities: 0x0( )
>>             Device error handling mode: none
>>
>>
>>
>>
>>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: GCP cloud : Virtio-PMD performance Issue
  2024-12-13 10:47         ` Maxime Coquelin
@ 2024-12-16 11:04           ` Mukul Sinha
  0 siblings, 0 replies; 6+ messages in thread
From: Mukul Sinha @ 2024-12-16 11:04 UTC (permalink / raw)
  To: Maxime Coquelin
  Cc: Joshua Washington, chenbox, Jeroen de Borst, Rushil Gupta,
	Srinivasa Srikanth Podila, Tathagat Priyadarshi, Samar Yadav,
	Varun LA, dev


[-- Attachment #1.1: Type: text/plain, Size: 17586 bytes --]

Thanks Maxime,
We will analyse further and try pinpointing the regression commit between
DPDK-21.11 & DPDK-21.08.
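For reference, this is the rough bisect procedure we plan to follow (a minimal
sketch, assuming a plain upstream DPDK checkout with the v21.08 and v21.11
release tags and the same testpmd flowgen reproducer quoted below; exact build
options may differ on our side):

  git bisect start
  git bisect bad v21.11     # first release showing the 20x Tx / TX-dropped behavior
  git bisect good v21.08    # last release behaving as expected
  # at each revision git checks out:
  meson setup build && ninja -C build   # typical DPDK 21.x meson/ninja build
  #   re-run the testpmd flowgen test, check TX-dropped on port 0,
  #   then mark the result:
  git bisect good           # or: git bisect bad
  # repeat until git reports the first bad commit, then clean up:
  git bisect reset

On the application side, the kind of backpressure handling Josh alludes to
below (treating a short return from rte_eth_tx_burst() as "queue full" and
retrying the unsent tail of the burst instead of dropping it) would look
roughly like the following hypothetical, untested sketch:

  /*
   * Hedged sketch only: keep retrying the unsent tail of a Tx burst so the
   * send rate naturally throttles to what the hypervisor can drain.
   * port_id, queue_id, pkts and nb_pkts are assumed to come from the caller;
   * a real implementation would bound the retries.
   */
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>
  #include <rte_pause.h>

  static inline void
  tx_burst_with_backpressure(uint16_t port_id, uint16_t queue_id,
                             struct rte_mbuf **pkts, uint16_t nb_pkts)
  {
          uint16_t sent = 0;

          while (sent < nb_pkts) {
                  uint16_t n = rte_eth_tx_burst(port_id, queue_id,
                                                &pkts[sent], nb_pkts - sent);
                  if (n == 0)
                          rte_pause();   /* Tx queue full: back off briefly */
                  sent += n;
          }
  }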
Will get back with further queries once we have an update.

On Fri, Dec 13, 2024 at 4:17 PM Maxime Coquelin <maxime.coquelin@redhat.com>
wrote:

> (re-adding the DPDK ML, which had been dropped)
>
> On 12/13/24 11:46, Maxime Coquelin wrote:
> >
> >
> > On 12/13/24 11:21, Mukul Sinha wrote:
> >> Thanks @joshwash@google.com <mailto:joshwash@google.com> @Maxime
> >> Coquelin <mailto:maxime.coquelin@redhat.com> for the inputs.
> >>
> >> @Maxime Coquelin <mailto:maxime.coquelin@redhat.com>
> >> Through code bisection and testpmd runs I was able to pin-point that
> >> *this issue starts appearing from DPDK-21.11 onwards; up to
> >> DPDK-21.08 it is not seen.*
> >> To recap the issue: the amount of traffic actually serviced by the
> >> hypervisor remains almost the same between the two versions (implying
> >> the underlying GCP hypervisor is only capable of handling that much),
> >> but with >=dpdk-21.11 the virtio-PMD is pushing almost 20x the traffic
> >> compared to dpdk-21.08. This huge traffic rate in >=dpdk-21.11 leads
> >> to high packet drop rates, since the underlying hypervisor can at most
> >> handle the same load it was already servicing with <=dpdk-21.08.
> >> The same pattern can be seen even if we run traffic for a longer
> >> duration.
> >>
> >> *_Eg:_*
> >> Testpmd traffic run (for packet-size=1518) for exact same
> >> time-interval of 15 seconds:
> >>
> >> _*>=21.11 DPDK version*_
> >>    ---------------------- Forward statistics for port 0  ----------------------
> >>    RX-packets: 2              RX-dropped: 0             RX-total: 2
> >>    TX-packets: 19497570       *TX-dropped: 364674686*    TX-total: 384172256
> >>    ----------------------------------------------------------------------------
> >> _*Up to 21.08 DPDK version*_
> >>    ---------------------- Forward statistics for port 0  ----------------------
> >>    RX-packets: 3              RX-dropped: 0             RX-total: 3
> >>    TX-packets: 19480319       TX-dropped: 0             TX-total: 19480319
> >>    ----------------------------------------------------------------------------
> >>
> >> As you can see:
> >>  >=dpdk-21.11
> >> Packets generated: 384 million; packets serviced: ~19.5 million;
> >> Tx-dropped: 364 million
> >> <=dpdk-21.08
> >> Packets generated: ~19.5 million; packets serviced: ~19.5 million;
> >> Tx-dropped: 0
> >>
> >>
> ==========================================================================
> >> @Maxime Coquelin <mailto:maxime.coquelin@redhat.com>
> >> I have gone through all the commits made by the virtio team between
> >> DPDK-21.08 and DPDK-21.11 as per the commit logs available at
> >> https://git.dpdk.org/dpdk/log/drivers/net/virtio
> >> <https://git.dpdk.org/dpdk/log/drivers/net/virtio>
> >> I even tried reverting all the possibly relevant commits (that I could
> >> think of) on a dpdk-21.11 workspace and then re-running testpmd in
> >> order to track down which commit introduced this regression, but with
> >> no luck.
> >> We would appreciate your input: if you could glance through the commits
> >> made between these releases and let us know if there is any particular
> >> commit of interest which you think could cause the behavior seen above
> >> (or any commit not captured in the above git link; perhaps a change
> >> outside the virtio PMD code?).
> >
> > As your issue seems 100% reproducible, using git bisect you should be
> > able to point to the commit introducing the regression.
> >
> > This is what I need to be able to help you.
> >
> > Regards,
> > Maxime
> >
> >>
> >> Thanks,
> >> Mukul
> >>
> >>
> >> On Mon, Dec 9, 2024 at 9:54 PM Joshua Washington <joshwash@google.com
> >> <mailto:joshwash@google.com>> wrote:
> >>
> >>     Hello,
> >>
> >>     Based on your VM shape (8 vCPU VM) and packet size (1518B packets),
> >>     what you are seeing is exactly expected. 8 vCPU Gen 2 VMs have a
> >>     default egress cap of 16 Gbps. This equates to roughly 1.3 Mpps when
> >>     using 1518B packets, including IFG. Over the course of 15 seconds,
> >>     19.5 million packets should be sent, which matches both cases. The
> >>     difference here seems to be what happens in DPDK, not GCP. I don't
> >>     believe that packet drops on the host NIC are captured in DPDK
> >>     stats; likely the descriptor ring just filled up because the egress
> >>     bandwidth cap was hit and queue servicing was throttled. This would
> >>     cause a TX burst to return fewer packets than the burst size. The
> >>     difference between 20.05 and 22.11 might have to do with this
> >>     reporting, or a change in testpmd logic for when to send new bursts
> >>     of traffic.
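> >>     (As a rough back-of-the-envelope check, assuming ~20B of preamble and
> >>     inter-frame gap per frame: 1518B + 20B = 1538B = 12,304 bits on the
> >>     wire, and 16 Gbps / 12,304 bits ~= 1.3 Mpps; over 15 seconds that is
> >>     ~19.5 million packets, matching the TX-packets count both PMD
> >>     versions actually get through.)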
> >>
> >>     Best,
> >>     Josh
> >>
> >>
> >>     On Mon, Dec 9, 2024, 07:39 Mukul Sinha <mukul.sinha@broadcom.com
> >>     <mailto:mukul.sinha@broadcom.com>> wrote:
> >>
> >>         GCP-dev team
> >>         @jeroendb@google.com
> >>         <mailto:jeroendb@google.com>@rushilg@google.com
> >>         <mailto:rushilg@google.com> @joshwash@google.com
> >>         <mailto:joshwash@google.com>
> >>         Can you please check the following email and get back to us?
> >>
> >>
> >>         On Fri, Dec 6, 2024 at 2:04 AM Mukul Sinha
> >>         <mukul.sinha@broadcom.com <mailto:mukul.sinha@broadcom.com>>
> >> wrote:
> >>
> >>             Hi GCP & Virtio-PMD dev teams,
> >>             We are from the VMware NSX Advanced Load Balancer team. In
> >>             GCP cloud (*custom-8-8192 VM instance type, 8 cores / 8 GB*) we
> >>             are triaging a TCP-profile application throughput issue with a
> >>             single dispatcher core and a single Rx/Tx queue (queue depth:
> >>             2048): the throughput we get with the dpdk-22.11 virtio-PMD
> >>             code is degraded significantly compared to the dpdk-20.05 PMD.
> >>             We see the Tx packet drop counter on the virtio-NIC incrementing
> >>             rapidly, pointing to the GCP hypervisor side being unable to
> >>             drain packets fast enough (no drops are seen on the Rx side).
> >>             The behavior is as follows:
> >>             _Using dpdk-22.11_
> >>             Already at 75% CPU usage we start seeing a huge number of Tx
> >>             packet drops reported (no Rx drops), causing TCP
> >>             retransmissions and eventually bringing down the effective
> >>             throughput numbers.
> >>             _Using dpdk-20.05_
> >>             Even at ~95% CPU usage, without any packet drops (neither Rx
> >>             nor Tx), we are able to get a much better throughput.
> >>
> >>             To improve performance numbers with dpdk-22.11 we have tried
> >>             increasing the queue depth to 4096, but that didn't help.
> >>             If with dpdk-22.11 we move from single-core Rx/Tx queue=1 to
> >>             single-core Rx/Tx queue=2 we get slightly better numbers
> >>             (but still not matching the numbers obtained with dpdk-20.05,
> >>             single core, Rx/Tx queue=1). This again corroborates that the
> >>             GCP hypervisor is the bottleneck here.
> >>
> >>             To root-cause this issue we were able to replicate this
> >>             behavior using native DPDK testpmd as shown below (cmds
> >> used):-
> >>             Hugepage size: 2 MB
> >>               ./app/dpdk-testpmd -l 0-1 -n 1 -- -i --nb-cores=1
> >>             --txd=2048 --rxd=2048 --rxq=1 --txq=1  --portmask=0x3
> >>             set fwd mac
> >>             set fwd flowgen
> >>             set txpkts 1518
> >>             start
> >>             stop
> >>
> >>             Testpmd traffic run (for packet-size=1518) for exact same
> >>             time-interval of 15 seconds:
> >>
> >>             _22.11_
> >>                ---------------------- Forward statistics for port 0  ----------------------
> >>                RX-packets: 2              RX-dropped: 0             RX-total: 2
> >>                TX-packets: 19497570       *TX-dropped: 364674686*    TX-total: 384172256
> >>                ----------------------------------------------------------------------------
> >>             _20.05_
> >>                ---------------------- Forward statistics for port 0  ----------------------
> >>                RX-packets: 3              RX-dropped: 0             RX-total: 3
> >>                TX-packets: 19480319       TX-dropped: 0             TX-total: 19480319
> >>                ----------------------------------------------------------------------------
> >>
> >>             As you can see:
> >>             dpdk-22.11
> >>             Packets generated: 384 million; packets serviced: ~19.5
> >>             million; Tx-dropped: 364 million
> >>             dpdk-20.05
> >>             Packets generated: ~19.5 million; packets serviced: ~19.5
> >>             million; Tx-dropped: 0
> >>
> >>             The actual serviced traffic remains almost the same between the
> >>             two versions (implying the underlying GCP hypervisor is only
> >>             capable of handling that much), but with dpdk-22.11 the PMD is
> >>             pushing almost 20x the traffic compared to dpdk-20.05.
> >>             The same pattern can be seen even if we run traffic for a
> >>             longer duration.
> >>
> >>
> ===============================================================================================
> >>
> >>             Following are our queries:
> >>             @ Virtio-dev team
> >>             1. Why, with dpdk-22.11 and the virtio PMD, is the testpmd
> >>             application able to pump 20 times the Tx traffic towards the
> >>             hypervisor compared to dpdk-20.05?
> >>             What has changed, either in the virtio-PMD itself or in the
> >>             virtio-PMD to hypervisor communication, that causes this
> >>             behavior?
> >>             The traffic actually serviced by the hypervisor remains almost
> >>             on par with dpdk-20.05, but it is the huge packet drop count
> >>             which can be detrimental for any DPDK application running a
> >>             TCP traffic profile.
> >>             Is there a way to slow down the number of packets sent
> >>             towards the hypervisor (through either a code change in the
> >>             virtio-PMD or a config setting) and make it on par with
> >>             dpdk-20.05 behavior?
> >>             2. In the published Virtio performance report for release 22.11
> >>             we see no qualification of throughput numbers on GCP cloud.
> >>             Do you have any internal performance benchmark numbers for
> >>             GCP cloud, and if yes, can you please share them with us so
> >>             that we can check whether there are any
> >>             configs/knobs/settings you used to get optimum performance?
> >>
> >>             @ GCP-cloud dev team
> >>             As we can see, any traffic beyond what the GCP hypervisor can
> >>             successfully service is getting dropped, hence we need help
> >>             from your side to reproduce this issue in your in-house setup,
> >>             preferably using the same VM instance type as highlighted
> >>             before.
> >>             We also need investigation from the GCP host side to check
> >>             parameters such as running out of Tx buffers, queue-full
> >>             conditions on the virtio-NIC, or the number of NIC Rx/Tx
> >>             kernel threads, to determine what is preventing the hypervisor
> >>             from keeping up with the traffic load pumped by dpdk-22.11.
> >>             Based on your debugging we would additionally need inputs on
> >>             what can be tweaked, or which knobs/settings can be configured
> >>             at the GCP-VM level, to get better performance numbers.
> >>
> >>             Please feel free to reach out to us for any further queries.
> >>
> >>             _Additional outputs for debugging:_
> >>             lspci | grep Eth
> >>             00:06.0 Ethernet controller: Red Hat, Inc. Virtio network
> >> device
> >>             root@dcg15-se-ecmyw:/home/admin/dpdk/build# ethtool -i eth0
> >>             driver: virtio_net
> >>             version: 1.0.0
> >>             firmware-version:
> >>             expansion-rom-version:
> >>             bus-info: 0000:00:06.0
> >>             supports-statistics: yes
> >>             supports-test: no
> >>             supports-eeprom-access: no
> >>             supports-register-dump: no
> >>             supports-priv-flags: no
> >>
> >>             testpmd> show port info all
> >>             ********************* Infos for port 0
>  *********************
> >>             MAC address: 42:01:0A:98:A0:0F
> >>             Device name: 0000:00:06.0
> >>             Driver name: net_virtio
> >>             Firmware-version: not available
> >>             Connect to socket: 0
> >>             memory allocation on the socket: 0
> >>             Link status: up
> >>             Link speed: Unknown
> >>             Link duplex: full-duplex
> >>             Autoneg status: On
> >>             MTU: 1500
> >>             Promiscuous mode: disabled
> >>             Allmulticast mode: disabled
> >>             Maximum number of MAC addresses: 64
> >>             Maximum number of MAC addresses of hash filtering: 0
> >>             VLAN offload:
> >>                strip off, filter off, extend off, qinq strip off
> >>             No RSS offload flow type is supported.
> >>             Minimum size of RX buffer: 64
> >>             Maximum configurable length of RX packet: 9728
> >>             Maximum configurable size of LRO aggregated packet: 0
> >>             Current number of RX queues: 1
> >>             Max possible RX queues: 2
> >>             Max possible number of RXDs per queue: 32768
> >>             Min possible number of RXDs per queue: 32
> >>             RXDs number alignment: 1
> >>             Current number of TX queues: 1
> >>             Max possible TX queues: 2
> >>             Max possible number of TXDs per queue: 32768
> >>             Min possible number of TXDs per queue: 32
> >>             TXDs number alignment: 1
> >>             Max segment number per packet: 65535
> >>             Max segment number per MTU/TSO: 65535
> >>             Device capabilities: 0x0( )
> >>             Device error handling mode: none
> >>
> >>
> >>
> >>
> >>
>
>


[-- Attachment #1.2: Type: text/html, Size: 23984 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 5430 bytes --]

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2024-12-17  8:02 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAAnnWRXB7gZ+vybkeECgjG=B-NNtmY2rewCTgPW3NNBOxd45Gw@mail.gmail.com>
     [not found] ` <CAAnnWRWpLLAT4TEyzf0pj+hF_guAqMBWez7rQbc=J3Q7Wsn_VA@mail.gmail.com>
     [not found]   ` <ed76028a-819e-4918-9529-5aedc4762148@redhat.com>
2024-12-05 22:54     ` GCP cloud : Virtio-PMD performance Issue Mukul Sinha
2024-12-05 22:58       ` Mukul Sinha
2024-12-06  8:23       ` Maxime Coquelin
2024-12-09 15:37         ` Mukul Sinha
     [not found] ` <CAAnnWRVymk=ttuub=0SXAbxbV+UcoXrskz+0Z6GrJyAOttBjkw@mail.gmail.com>
     [not found]   ` <CALuQH+UWTLf_tGH2JePT3TdQUNcn06xXwiED8vWsvyCJTCVdzg@mail.gmail.com>
     [not found]     ` <CAAnnWRVXrajEFYh_OBmkHq2fX_nNCm=f+C8t8Ff5cC5U7p6LqA@mail.gmail.com>
     [not found]       ` <c714b248-971c-4c88-ad8a-f451607ed7ae@redhat.com>
2024-12-13 10:47         ` Maxime Coquelin
2024-12-16 11:04           ` Mukul Sinha

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).