DPDK patches and discussions
* DPDK 18.11 with an i40e NIC: forwarding 1 GB/s of traffic causes high CPU usage on core 0
@ 2021-12-07  9:50  子狼
  0 siblings, 0 replies; only message in thread
From: 子狼 @ 2021-12-07  9:50 UTC (permalink / raw)
  To: dev


Hello, I ran the ip_pipeline application from DPDK 18.11 and pushed 1 GB/s of traffic through it. I found that CPU core 0 usage is very high, even though a usleep() was added to the for loop in ip_pipeline's main() function, so that loop should not consume much CPU. Looking at it with perf, the time is spent in dw_readl.


perf output:
    -   61.60%    61.60%  [kernel]             [k] dw_readl                                     
  
    + 5.89% ret_from_fork                                                                    
      1.57% ordered_events__flush                                                              
    + 1.25% 0xe1                                                                               
    + 1.13% x86_64_start_kernel                                                                   
      1.02%     hist_entry_iter__add                                                               
    + 1.02% 0xd1                                                                            
    +   22.14%     0.00%  [kernel]             [k] __irqentry_text_start                       
    +   22.11%     0.00%  [kernel]             [k] handle_irq                                  
    +   22.08%     0.01%  [kernel]             [k] handle_irq_event_percpu                     
    +   19.31%     0.00%  [unknown]            [k] 0xffffffff8168dd6d                          
    +   19.30%     0.00%  [unknown]            [k] 0xffffffff81698bef                          
    +   19.28%     0.00%  [unknown]            [k] 0xffffffff8102d26
    +   19.28%     0.00%  [unknown]            [k] 0xffffffff811337d
    +   19.27%     0.00%  [unknown]            [k] 0xffffffff8113033
    +   19.25%     0.00%  [unknown]            [k] 0xffffffff8113015
    +   17.14%     0.02%  [kernel]             [k] i2c_dw_isr      
    +   14.50%     0.00%  [unknown]            [k] 0xffffffffa04f406


This is my hardware information:


    CPU model : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
    Network card model and version : i40e 1.5.10-k, two cards
    Network card firmware : 6.01 0x8000372b 0.0.0
    lspci output for the NICs:
        03:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
        03:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)


    Network card driver in use: vfio-pci


Modified ip_pipeline main() function:


        /* Script */
        if (app.script_name)
            cli_script_process(app.script_name,
                app.conn.msg_in_len_max,
                app.conn.msg_out_len_max);
        #include <unistd.h> /* added for usleep() */

        /* Dispatch loop */
        for ( ; ; ) {
            conn_poll_for_conn(conn);
            usleep(5000); /* sleep 5 ms per iteration; the loop should stay mostly idle */
            conn_poll_for_msg(conn);
            kni_handle_request();
        }


CLI startup parameters (-c f runs EAL on lcores 0-3; the master lcore 0 runs the dispatch loop above, while the CLI script below pins the pipeline to lcore 1):

    ./ip_pipeline -c f -- -s ./l2fwd.cli


CLI content (a simple bidirectional L2 forward between the two X710 ports):


    mempool MEMPOOL0 buffer 2304 pool 32K cache 256 cpu 0

    link LINK0 dev 0000:03:00.1 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
    link LINK1 dev 0000:03:00.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
    ;link LINK2 dev 0000:06:00.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
    ;link LINK3 dev 0000:06:00.1 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on

    pipeline PIPELINE0 period 10 offset_port_id 0 cpu 0

    pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
    pipeline PIPELINE0 port in bsz 32 link LINK1 rxq 0
    ;pipeline PIPELINE0 port in bsz 32 link LINK2 rxq 0
    ;pipeline PIPELINE0 port in bsz 32 link LINK3 rxq 0

    pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
    pipeline PIPELINE0 port out bsz 32 link LINK1 txq 0
    ;pipeline PIPELINE0 port out bsz 32 link LINK2 txq 0
    ;pipeline PIPELINE0 port out bsz 32 link LINK3 txq 0

    pipeline PIPELINE0 table match stub
    pipeline PIPELINE0 table match stub
    ;pipeline PIPELINE0 table match stub
    ;pipeline PIPELINE0 table match stub

    pipeline PIPELINE0 port in 0 table 0
    pipeline PIPELINE0 port in 1 table 1
    ;pipeline PIPELINE0 port in 2 table 2
    ;pipeline PIPELINE0 port in 3 table 3

    thread 1 pipeline PIPELINE0 enable

    pipeline PIPELINE0 table 0 rule add match default action fwd port 1
    pipeline PIPELINE0 table 1 rule add match default action fwd port 0
    ;pipeline PIPELINE0 table 2 rule add match default action fwd port 3
    ;pipeline PIPELINE0 table 3 rule add match default action fwd port 2


I have run the following tests:

    1. The same test with an i40e device on an E5-2620 machine: the problem does not occur.

    2. The same i7-6700 machine and i40e device, rolled back to DPDK 18.05: the problem does not occur.

    3. Bisecting between the two versions points to commit 4205c7ccec4fc2aeafe3e7ccf6b028d9476fccaf: builds before this commit are fine, builds from it onwards show the problem.

    4. This i7-6700 machine has two kernel modules loaded, idma64 and i2c_designware; when I unload these two modules, the problem no longer appears.

    5. When the problem occurs, one interrupt fires tens of thousands of times per second; when it does not, that interrupt's rate is 0. (A sketch of how I measured the rate is at the end of this mail.) Together with perf showing the time in i2c_dw_isr and dw_readl, this suggests the NIC shares an interrupt line with the i2c_designware controller.


    6. On top of 18.11, in the i40e_dev_start() function, I changed

            if (dev->data->dev_conf.intr_conf.rxq == 0) {
                rte_eal_alarm_set(I40E_ALARM_INTERVAL,
                        i40e_dev_alarm_handler, dev);
            } else {
                /* enable uio intr after callback register */
                rte_intr_enable(intr_handle);
            }

        to

            /* enable uio intr after callback register */
            rte_intr_enable(intr_handle);

            if (dev->data->dev_conf.intr_conf.rxq == 0) {
                rte_eal_alarm_set(I40E_ALARM_INTERVAL,
                        i40e_dev_alarm_handler, dev);
            }

        With this change, the problem no longer occurs.
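
For context, here is a minimal sketch of the alarm-driven pattern used in this branch, as I understand it from the 18.11 sources (the names and the interval value below are illustrative only; the real i40e_dev_alarm_handler also reads ICR0 and dispatches admin-queue events):

    #include <rte_alarm.h>
    #include <rte_ethdev.h>

    /* Illustrative interval; see the driver for the real value. */
    #define ALARM_INTERVAL_US 50000

    /* The handler does its periodic work and then re-arms itself with
     * rte_eal_alarm_set(), so the PMD no longer depends on the device
     * interrupt being enabled at all. */
    static void
    dev_alarm_handler(void *param)
    {
        struct rte_eth_dev *dev = param;

        /* ... read and clear ICR0, dispatch admin-queue messages ... */

        rte_eal_alarm_set(ALARM_INTERVAL_US, dev_alarm_handler, dev);
    }

Since the alarm runs in the EAL interrupt thread, it neither enables nor masks the device interrupt itself; my guess is that skipping rte_intr_enable() in this branch can leave an already-asserted interrupt storming, which would match the perf output above.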


Given all of the above, to use DPDK 18.11 normally I have to either unload those two kernel modules or add the rte_intr_enable() call, but I don't know what side effects either change may have. Can you advise? Is this a bug?
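
For reference, the interrupt rates in test 5 were measured by diffing /proc/interrupts over one second. A minimal helper along these lines (illustrative code, not part of DPDK; it assumes the lines of /proc/interrupts keep their order between the two snapshots):

    /* irqrate.c: print per-IRQ interrupt rates by diffing two
     * snapshots of /proc/interrupts taken one second apart. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define MAX_LINES 1024
    #define LINE_LEN  4096

    static int snapshot(char lines[][LINE_LEN], int max)
    {
        FILE *f = fopen("/proc/interrupts", "r");
        int n = 0;

        if (f == NULL)
            return -1;
        while (n < max && fgets(lines[n], LINE_LEN, f) != NULL)
            n++;
        fclose(f);
        return n;
    }

    /* Sum the per-CPU counters on one line; stop at the first
     * non-numeric token (the interrupt chip name). */
    static long long line_total(const char *line)
    {
        const char *p = strchr(line, ':');
        long long sum = 0;

        if (p == NULL)
            return -1;
        p++;
        for (;;) {
            char *end;
            long long v = strtoll(p, &end, 10);

            if (end == p)
                break;
            sum += v;
            p = end;
        }
        return sum;
    }

    int main(void)
    {
        static char a[MAX_LINES][LINE_LEN], b[MAX_LINES][LINE_LEN];
        int na = snapshot(a, MAX_LINES);

        sleep(1);
        int nb = snapshot(b, MAX_LINES);

        for (int i = 0; i < na && i < nb; i++) {
            long long before = line_total(a[i]);
            long long after = line_total(b[i]);

            if (before >= 0 && after > before)
                printf("%-16.16s %lld irqs/s\n", b[i], after - before);
        }
        return 0;
    }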

