From: Venky Venkatesh <vvenkatesh@paloaltonetworks.com>
To: "Mattias Rönnblom" <mattias.ronnblom@ericsson.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] DSW eventdev and multi-process DPDK
Date: Fri, 18 Jan 2019 06:36:02 +0000	[thread overview]
Message-ID: <C87F4808-2A33-47E0-B959-B95895B19D2F@paloaltonetworks.com> (raw)
In-Reply-To: <58E15395-2C48-49C0-B702-40F511E61E55@paloaltonetworks.com>



On 1/17/19, 11:10 AM, "Venky Venkatesh" <vvenkatesh@paloaltonetworks.com> wrote:

    
    
    On 1/7/19, 7:36 AM, "Mattias Rönnblom" <mattias.ronnblom@ericsson.com> wrote:
    
        On 2018-12-21 20:12, Venky Venkatesh wrote:
        > 
        > 
        > On 12/21/18, 10:59 AM, "Mattias Rönnblom" <mattias.ronnblom@ericsson.com> wrote:
        > 
        >      On 2018-12-21 19:34, Venky Venkatesh wrote:
        >      >
        >      >
        >      > On 12/21/18, 10:24 AM, "Mattias Rönnblom" <mattias.ronnblom@ericsson.com> wrote:
        >      >
        >      >      On 2018-12-21 06:13, Venky Venkatesh wrote:
        >      >      > Hi,
        >      >      > We are considering using the multi-process mode of DPDK, with the event producers and consumers spread across multiple processes (on different cores). We are also considering using the DSW eventdev. Is DSW designed for such a use case? If so, are there any restrictions, or anything specific that needs to be done, to make it work correctly?
        >      >      >
        >      >
        >      >      The purpose of an event device is to do dynamic load balancing across
        >      >      multiple cores. Using the DPDK multi-process support, with its
        >      >      requirement of unique, non-overlapping core masks, works against
        >      >      or even defeats this purpose.
        >      >
        >      > [VV]: I don't understand your last sentence. Suppose I have multiple packet-processing processes (each with a single thread and polling a disjoint set of queues), each linked to DSW. Each process would invoke the enqueue, which would be handled by the DSW linked to that process. Will the DSWs across these processes "collaborate" to achieve load balancing across the processes?
        >      >
        >      
        >      If the processes are to collaborate, and process packets in the same
        >      pipeline, they will need to share an event device (for example, a DSW
        >      instance).
        >      
        >      However, if you put each of your pipeline stages into a process with a
        >      single worker thread, you will not leave any room for an event device to
        >      load balance, since every eventdev queue will have only a single
        >      consumer linked to it.
        > 
        > [VV]: Sorry for my ambiguous terminology -- "queue" above referred to port queues, not eventdev queues. Additionally, consider a very simple pipeline -- just one stage followed by transmit. Each process pulls packets out of its port queue, enqueues them into its local DSW, dequeues from the local DSW, runs this one-stage pipeline, and transmits. The role of eventdev here is to load balance across the processes -- that is what I meant by the DSWs collaborating (they need to exchange load information and perform the migration handshake). I hope that clarifies things. Please let me know if this will work. (A rough sketch of this per-process loop appears below.)
        
        I'm not familiar with the details of DPDK multi-process support, but I
        think this should work. Again, the DSW instance needs to be shared, and
        can't be local to the process, if you want to use it to load balance
        across different DPDK processes.
        
        All of the huge page memory is shared, and that's the only memory a DSW
        event device uses (except for the execution stacks, which of course
        don't have to be shared).
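
        For illustration, here is a rough sketch of the per-process loop
        described above (the one-stage pipeline: Rx from the process's own
        port queue, enqueue to the shared event device, dequeue, run the
        stage, transmit). The ids, burst size, flow-id choice, and do_stage()
        are assumptions made only for this sketch, not anything prescribed
        by DSW.

        #include <rte_ethdev.h>
        #include <rte_eventdev.h>
        #include <rte_mbuf.h>

        #define BURST 32

        /* The single pipeline stage (hypothetical, provided elsewhere). */
        static void do_stage(struct rte_mbuf *pkt);

        static void
        worker_iteration(uint8_t dev_id, uint8_t ev_port,
                         uint16_t eth_port, uint16_t eth_queue)
        {
                struct rte_mbuf *pkts[BURST];
                struct rte_event evs[BURST];
                uint16_t n, i;

                /* 1. Pull packets from this process's own NIC queue. */
                n = rte_eth_rx_burst(eth_port, eth_queue, pkts, BURST);

                /* 2. Inject them as NEW events; the event device decides
                 *    which process's port eventually dequeues each flow. */
                for (i = 0; i < n; i++)
                        evs[i] = (struct rte_event) {
                                .queue_id = 0,
                                .op = RTE_EVENT_OP_NEW,
                                .sched_type = RTE_SCHED_TYPE_ATOMIC,
                                .flow_id = pkts[i]->hash.rss,
                                .mbuf = pkts[i],
                        };
                rte_event_enqueue_burst(dev_id, ev_port, evs, n);

                /* 3. Dequeue whatever was load balanced to this port, run
                 *    the single stage, and transmit. Drop/error handling is
                 *    omitted in this sketch. */
                n = rte_event_dequeue_burst(dev_id, ev_port, evs, BURST, 0);
                for (i = 0; i < n; i++) {
                        do_stage(evs[i].mbuf);
                        rte_eth_tx_burst(eth_port, eth_queue, &evs[i].mbuf, 1);
                }
                /* With the default (implicit release) port configuration,
                 * the dequeued events are released on the next dequeue. */
        }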
    
    [VV]: I had a question on the eventdev initialization APIs in the above multi-process setting. The following are the objects and the APIs used to initialize each of them. For each of these, can you confirm whether it needs to be called in the PRIMARY process only, or whether the SECONDARY process must call it as well? The reason for the question is that the shared memory must be safely initialized exactly once.
    
    Event device itself:  rte_event_dev_configure, rte_event_dev_start
    Ports:                rte_event_port_setup
    Queues:               rte_event_queue_setup
    Port-queue links:     rte_event_port_link
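
    For reference, a typical single-process ordering of these calls looks
    roughly like the sketch below. The device id, the queue/port counts, the
    configure values, and the use of default (NULL) port/queue configs are
    assumptions made for illustration only; nothing here is taken from the
    actual setup being discussed.

    #include <rte_eventdev.h>

    #define NB_QUEUES 1
    #define NB_PORTS  2   /* e.g. one port per worker process (assumption) */

    static int
    setup_eventdev(uint8_t dev_id)
    {
            struct rte_event_dev_config config = {
                    .nb_event_queues = NB_QUEUES,
                    .nb_event_ports = NB_PORTS,
                    .nb_events_limit = 4096,
                    .nb_event_queue_flows = 1024,
                    .nb_event_port_dequeue_depth = 32,
                    .nb_event_port_enqueue_depth = 32,
            };
            uint8_t q, p;

            if (rte_event_dev_configure(dev_id, &config) < 0)
                    return -1;

            for (q = 0; q < NB_QUEUES; q++)
                    if (rte_event_queue_setup(dev_id, q, NULL) < 0)
                            return -1;

            for (p = 0; p < NB_PORTS; p++) {
                    if (rte_event_port_setup(dev_id, p, NULL) < 0)
                            return -1;
                    /* Link the port to all queues, default priorities. */
                    if (rte_event_port_link(dev_id, p, NULL, NULL, 0) < 0)
                            return -1;
            }

            return rte_event_dev_start(dev_id);
    }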

[VV]: Some more information. I went ahead on the assumption that I should call these APIs in both the PRIMARY and SECONDARY processes, and that the APIs would do the right thing, viz. allocate and initialize in the PRIMARY while attaching to the existing memory and structures in the SECONDARY. However, that doesn't seem to be the case: rte_event_port_setup tries to allocate a ring of the same name in both processes and crashes in rte_memzone_reserve_thread_safe (specifically at "if ((memzone_lookup_thread_unsafe(name)) != NULL)"), which is called from rte_event_ring_create.
Can you please advise whether DSW is multi-process ready? If not, are there any plans to make it so?
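
One possible structure (a sketch only; whether a SECONDARY process can then simply use its own port is exactly the open question above) would be to guard the one-time setup on the process type. "event_dsw0" is an assumed vdev name, and setup_eventdev() stands for the configure/queue/port/link/start sequence sketched earlier.

#include <rte_eal.h>
#include <rte_eventdev.h>

static int
attach_or_setup_eventdev(void)
{
        /* The DSW vdev name is an assumption for this sketch. */
        int dev_id = rte_event_dev_get_dev_id("event_dsw0");

        if (dev_id < 0)
                return -1;

        /* Only the PRIMARY process allocates and configures; SECONDARY
         * processes would merely attach to the shared hugepage state. */
        if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
            setup_eventdev(dev_id) < 0)
                return -1;

        return dev_id;
}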

Thanks
-Venky


    

