automatic DPDK test reports
From: sys_stv@intel.com
To: test-report@dpdk.org
Cc: hengqi.chen@gmail.com
Subject: |SUCCESS|dpdk|3e3c7f3fa5| intel-Functional
Date: 29 Jun 2025 11:09:16 -0700
Message-ID: <d449ee$4iqs3g@fmviesa010-auth.fm.intel.com>

Test-Label: intel-Functional
Test-Status: SUCCESS
_Functional PASS_

DPDK git repo: dpdk
commit 3e3c7f3fa5ac3f2748a4463d87e73eb28024b401
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Mon Jun 9 07:23:47 2025 +0000

    net/virtio: fix check of threshold for Tx freeing
    
    Like most drivers, make the fast path of virtio_xmit_cleanup() behave as
    described by the comments of rte_eth_txconf::tx_free_thresh ([0]):
        Start freeing Tx buffers if there are
        less free descriptors than this value.
    
    The rationale behind this change is that:
      * vq->vq_nentries is set during device probe
        with the queue size specified by the vhost backend,
        so this value does not reflect the real nb_tx_desc
      * the actual number of available Tx descriptors is tracked in
        vq->vq_free_cnt, set via the nb_tx_desc param of the
        rte_eth_tx_queue_setup() API
      * so `nb_used > vq->vq_nentries - vq->vq_free_thresh` can never be true
        when, say, nb_tx_desc=2048, vq->vq_nentries=4096 and vq->vq_free_thresh=32;
        see bug report 1716 ([1]) for details.
    
    [0]: https://github.com/DPDK/dpdk/commit/72514b5d5543
    [1]: https://bugs.dpdk.org/show_bug.cgi?id=1716
    
    Bugzilla ID: 1716
    Fixes: 72514b5d5543 ("ethdev: fix check of threshold for Tx freeing")
    Cc: stable@dpdk.org
    
    Signed-off-by: Baoyuan Li <updoing@sina.com>
    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
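
The change can be illustrated with a small, self-contained C sketch. This is
not the literal driver code: struct vq_state and the two helpers below are
simplified stand-ins carrying only the fields named in the commit message.

    #include <stdint.h>
    #include <stdio.h>

    struct vq_state {
        uint16_t vq_nentries;    /* ring size from the vhost backend, e.g. 4096 */
        uint16_t vq_free_cnt;    /* free descriptors, bounded by nb_tx_desc */
        uint16_t vq_free_thresh; /* rte_eth_txconf::tx_free_thresh, e.g. 32 */
    };

    /* Old condition: with nb_tx_desc=2048 and vq_nentries=4096, nb_used can
     * never exceed 4096 - 32, so completed Tx buffers are never freed on the
     * fast path. */
    static int old_need_cleanup(const struct vq_state *vq, uint16_t nb_used)
    {
        return nb_used > vq->vq_nentries - vq->vq_free_thresh;
    }

    /* Fixed condition, matching the documented tx_free_thresh semantics:
     * start freeing once fewer free descriptors than the threshold remain. */
    static int new_need_cleanup(const struct vq_state *vq)
    {
        return vq->vq_free_cnt < vq->vq_free_thresh;
    }

    int main(void)
    {
        struct vq_state vq = { .vq_nentries = 4096, .vq_free_cnt = 16,
                               .vq_free_thresh = 32 };
        uint16_t nb_used = 2048 - vq.vq_free_cnt; /* queue nearly exhausted */

        printf("old triggers: %d, new triggers: %d\n",
               old_need_cleanup(&vq, nb_used), new_need_cleanup(&vq));
        return 0;
    }

With the values from the bug report the old condition never triggers (prints 0)
while the new one does (prints 1), which is the behaviour difference the fix
addresses.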

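For context, both values discussed above come from the application: nb_tx_desc
and tx_free_thresh are passed through rte_eth_tx_queue_setup(). A minimal
sketch follows, assuming port 0, queue 0 and the descriptor count cited in the
commit message; real applications usually start from dev_info.default_txconf
and add full error handling.

    #include <rte_ethdev.h>

    static int setup_tx_queue(uint16_t port_id)
    {
        /* Start freeing Tx buffers once fewer than 32 descriptors are free;
         * leaving tx_free_thresh at 0 would select the driver default. */
        struct rte_eth_txconf txconf = {
            .tx_free_thresh = 32,
        };

        /* 2048 Tx descriptors, allocated on the port's NUMA socket. */
        return rte_eth_tx_queue_setup(port_id, 0 /* queue id */, 2048,
                                      rte_eth_dev_socket_id(port_id),
                                      &txconf);
    }
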
Smoke-Testing Summary : 31 Cases Done, 31 Successful, 0 Failures


OS     : Ubuntu 22.04.2 LTS
Kernel : 5.15.0-60-generic
GCC    : 11.3.0-1ubuntu1~22.04
NIC    : Ethernet Controller XL710 for 40GbE QSFP+
Target : x86_64-native-linuxapp-gcc

	Test result details:
	+-----------------+---------------------------------------------------+-------+
	| suite           | case                                              | status|
	+-----------------+---------------------------------------------------+-------+
	| checksum_offload| test_checksum_offload_with_vlan                   | passed|
	| checksum_offload| test_do_not_insert_checksum_on_the_transmit_packet| passed|
	| checksum_offload| test_hardware_checksum_check_ip_rx                | passed|
	| checksum_offload| test_hardware_checksum_check_ip_tx                | passed|
	| checksum_offload| test_hardware_checksum_check_l4_rx                | passed|
	| checksum_offload| test_hardware_checksum_check_l4_tx                | passed|
	| checksum_offload| test_insert_checksum_on_the_transmit_packet       | passed|
	| checksum_offload| test_rx_checksum_valid_flags                      | passed|
	| dual_vlan       | test_dual_vlan_priority_rxtx                      | passed|
	| dual_vlan       | test_vlan_filter_config                           | passed|
	| dual_vlan       | test_vlan_filter_table                            | passed|
	| dual_vlan       | test_vlan_insert_config                           | passed|
	| dual_vlan       | test_vlan_random_test                             | passed|
	| dual_vlan       | test_vlan_strip_config                            | passed|
	| dual_vlan       | test_vlan_synthetic_test                          | passed|
	| dual_vlan       | test_vlan_tpid_config                             | passed|
	| dual_vlan       | test_vlan_stripq_config                           | n/a   |
	| jumboframes     | test_jumboframes_bigger_jumbo                     | passed|
	| jumboframes     | test_jumboframes_jumbo_jumbo                      | passed|
	| jumboframes     | test_jumboframes_jumbo_nojumbo                    | passed|
	| jumboframes     | test_jumboframes_normal_jumbo                     | passed|
	| jumboframes     | test_jumboframes_normal_nojumbo                   | passed|
	| rxtx_offload    | test_rxoffload_port_all                           | passed|
	| rxtx_offload    | test_rxoffload_port_cmdline                       | passed|
	| rxtx_offload    | test_txoffload_port                               | passed|
	| rxtx_offload    | test_txoffload_port_all                           | passed|
	| rxtx_offload    | test_txoffload_port_checksum                      | passed|
	| rxtx_offload    | test_txoffload_port_cmdline                       | passed|
	| rxtx_offload    | test_txoffload_port_multi_segs                    | passed|
	| rxtx_offload    | test_txoffload_queue                              | passed|
	| rxtx_offload    | test_rxoffload_queue                              | n/a   |
	+-----------------+---------------------------------------------------+-------+


DPDK STV team
