DPDK patches and discussions
* [dpdk-dev] why all the other threads except lcore-slave pinned to master lcore?
@ 2016-10-24 11:10 ychen
  2016-10-24 18:25 ` Kevin Traynor
  0 siblings, 1 reply; 2+ messages in thread
From: ychen @ 2016-10-24 11:10 UTC (permalink / raw)
  To: dev

Hi, I am new to DPDK. While following the document INSTALL.DPDK.md to launch Open vSwitch with DPDK initialized, I noticed that all of the threads are pinned to the master lcore except the lcore-slave and vfio-sync threads, yet I cannot find any code that sets the affinity for these threads.
Here are my questions:
1. Why is vfio-sync pinned to a core that is included in neither the slave lcores nor the master lcore?
2. Why are all the other threads pinned to the master lcore? Have I configured something wrong? (A small sketch of the affinity-inheritance behaviour I suspect follows below.)
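
For what it is worth, here is a minimal standalone sketch (my own test program, not OVS or DPDK code) of the behaviour I suspect: once the main thread has restricted its own affinity to a single CPU, roughly what the EAL does for the master lcore, every pthread created afterwards inherits that one-CPU mask unless it is re-pinned explicitly.

/* build: gcc -pthread affinity_test.c -o affinity_test (hypothetical test, not OVS code) */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *child(void *arg)
{
    cpu_set_t set;
    int cpu;

    (void)arg;
    /* The child never sets its own affinity; it only reports the mask
     * it inherited from the thread that created it. */
    pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
    for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &set))
            printf("child may run on CPU %d\n", cpu);
    return NULL;
}

int main(void)
{
    cpu_set_t set;
    pthread_t tid;

    /* Pin the main thread to CPU 0, similar to what the EAL does for
     * the master lcore. */
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    /* Any thread created from now on starts with the same 1-CPU mask. */
    pthread_create(&tid, NULL, child, NULL);
    pthread_join(tid, NULL);
    return 0;
}

If that is what happens here, the handler/revalidator/urcu threads, which ovs-vswitchd creates after rte_eal_init(), would simply inherit the master lcore's mask, while vfio-sync seems to be created earlier and so keeps a wider mask - but I may be wrong about the ordering.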


Here are some logs:
2016-10-24T10:42:03Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-10-24T10:42:03Z|00002|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 0
2016-10-24T10:42:03Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on NUMA node 1
2016-10-24T10:42:03Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 48 CPU cores
2016-10-24T10:42:03Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-10-24T10:42:03Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-10-24T10:42:03Z|00007|dpdk|INFO|DPDK Enabled, initializing
2016-10-24T10:42:03Z|00008|dpdk|INFO|No vhost-sock-dir provided - defaulting to /var/run/openvswitch
2016-10-24T10:42:03Z|00009|dpdk|INFO|EAL ARGS: ovs-vswitchd -c 0xf --socket-mem 1024,1024
EAL: Detected 48 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
2016-10-24T10:42:06Z|00010|dpdk|INFO|DPDK pdump packet capture enable
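
For reference, my understanding is that -c 0xf selects lcores 0-3 with lcore 0 as the master. A small check program along these lines (my own sketch, assuming a standard DPDK 16.x build, not anything from OVS) should confirm the mapping after rte_eal_init():

/* Hypothetical check program: print the master and slave lcores the EAL
 * derived from the -c coremask. Run with the same EAL arguments,
 * e.g. ./lcores -c 0xf --socket-mem 1024,1024 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    unsigned int lcore_id;

    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }

    printf("master lcore: %u (socket %u)\n",
           rte_get_master_lcore(),
           rte_lcore_to_socket_id(rte_get_master_lcore()));

    RTE_LCORE_FOREACH_SLAVE(lcore_id)
        printf("slave lcore:  %u (socket %u)\n",
               lcore_id, rte_lcore_to_socket_id(lcore_id));

    return 0;
}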


And here is the output of the cpu_layout tool:
cores =  [0, 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13]
sockets =  [0, 1]


        Socket 0        Socket 1        
        --------        --------        
Core 0  [0, 24]         [1, 25]         
Core 1  [2, 26]         [3, 27]         
Core 2  [4, 28]         [5, 29]         
Core 3  [6, 30]         [7, 31]         
Core 4  [8, 32]         [9, 33]         
Core 5  [10, 34]        [11, 35]        
Core 8  [12, 36]        [13, 37]        
Core 9  [14, 38]        [15, 39]        
Core 10 [16, 40]        [17, 41]        
Core 11 [18, 42]        [19, 43]        
Core 12 [20, 44]        [21, 45]        
Core 13 [22, 46]        [23, 47] 

All of vswitchd's threads and the core each one is pinned to (a note on checking the actual affinity mask follows the listing):
28262  28262   0 ovs-vswitchd
 28263  28262  39 vfio-sync
 28297  28262   0 eal-intr-thread
 28298  28262   1 lcore-slave-1
 28299  28262   2 lcore-slave-2
 28300  28262   3 lcore-slave-3
 28301  28262   0 dpdk_watchdog2
 28302  28262   0 vhost_thread1
 28303  28262   0 pdump-thread
 28304  28262   0 ct_clean3
 28305  28262   0 urcu4
 28744  28262   0 handler101
 28745  28262   0 handler100
 28746  28262   0 handler99
 28747  28262   0 handler98
 28748  28262   0 handler95
 28749  28262   0 handler77
 28750  28262   0 handler79
 28751  28262   0 handler80
 28752  28262   0 handler81
 28753  28262   0 handler73
 28756  28262   0 handler92
 28757  28262   0 handler82
 28758  28262   0 handler96
 28759  28262   0 handler71
 28760  28262   0 handler61
 28761  28262   0 handler62
 28762  28262   0 handler83
 28763  28262   0 handler63
 28764  28262   0 handler84
 28765  28262   0 handler93
 28766  28262   0 handler64
 28767  28262   0 handler85
 28768  28262   0 handler74
 28769  28262   0 handler65
 28770  28262   0 handler66
 28771  28262   0 handler78
 28772  28262   0 handler86
 28773  28262   0 handler87
 28774  28262   0 handler97
 28775  28262   0 handler88
 28776  28262   0 handler56
 28777  28262   0 handler76
 28778  28262   0 handler67
 28779  28262   0 handler60
 28780  28262   0 handler68
 28781  28262   0 revalidator75
 28782  28262   0 revalidator57
 28783  28262   0 revalidator89
 28784  28262   0 revalidator69
 28785  28262   0 revalidator54
 28786  28262   0 revalidator90
 28787  28262   0 revalidator55
 28788  28262   0 revalidator58
 28789  28262   0 revalidator59
 28790  28262   0 revalidator70
 28791  28262   0 revalidator94
 28792  28262   0 revalidator91
 28793  28262   0 revalidator72
 28827  28262   4 pmd103
 28829  28262   6 pmd102
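
If the core column above comes from ps's PSR field, it only shows the CPU each thread was last seen running on, not its full affinity mask. To inspect the actual allowed mask for one thread I would use a small helper like this (my own sketch; the TID from the first column is passed on the command line):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    cpu_set_t set;
    pid_t tid;
    int cpu;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <tid>\n", argv[0]);
        return 1;
    }
    tid = (pid_t)atoi(argv[1]);

    CPU_ZERO(&set);
    /* sched_getaffinity() accepts a thread id, so this reports the
     * affinity mask of that specific thread. */
    if (sched_getaffinity(tid, sizeof(set), &set) != 0) {
        perror("sched_getaffinity");
        return 1;
    }

    printf("thread %d may run on CPUs:", (int)tid);
    for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &set))
            printf(" %d", cpu);
    printf("\n");
    return 0;
}

For example, ./affinity 28263 would show the mask of the vfio-sync thread above.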



* Re: [dpdk-dev] why all the other threads except lcore-slave pinned to master lcore?
  2016-10-24 11:10 [dpdk-dev] why all the other threads except lcore-slave pinned to master lcore? ychen
@ 2016-10-24 18:25 ` Kevin Traynor
  0 siblings, 0 replies; 2+ messages in thread
From: Kevin Traynor @ 2016-10-24 18:25 UTC (permalink / raw)
  To: ychen, dev

On 10/24/2016 12:10 PM, ychen wrote:
> Hi, I am new to DPDK. While following the document INSTALL.DPDK.md to launch Open vSwitch with DPDK initialized, I noticed that all of the threads are pinned to the master lcore except the lcore-slave and vfio-sync threads, yet I cannot find any code that sets the affinity for these threads.
> Here are my questions:
> 1. Why is vfio-sync pinned to a core that is included in neither the slave lcores nor the master lcore?
> 2. Why are all the other threads pinned to the master lcore? Have I configured something wrong?

Hi - these questions are probably more appropriate for the ovs-dev list. I've
answered the post you sent over there.
