DPDK patches and discussions
From: "Mattias Rönnblom" <mattias.ronnblom@ericsson.com>
To: Jerin Jacob <jerinjacobk@gmail.com>
Cc: Jerin Jacob <jerinj@marvell.com>, dpdk-dev <dev@dpdk.org>,
	"Richardson, Bruce" <bruce.richardson@intel.com>
Subject: Re: [dpdk-dev] [RFC] eventdev: introduce event dispatcher
Date: Mon, 15 Mar 2021 14:44:44 +0000	[thread overview]
Message-ID: <c0e5c842-5ec5-a699-469a-e549cd25daf6@ericsson.com> (raw)
In-Reply-To: <CALBAE1Mq0VkavJwMsz=RW=4qM7E5CYh8ypgjOfBrHxzHRHrA7g@mail.gmail.com>

On 2021-03-07 14:04, Jerin Jacob wrote:
> On Fri, Feb 26, 2021 at 1:31 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> On 2021-02-25 13:32, Jerin Jacob wrote:
>>> On Fri, Feb 19, 2021 at 12:00 AM Mattias Rönnblom
>>> <mattias.ronnblom@ericsson.com> wrote:
>>>> The purpose of the event dispatcher is primarily to decouple different
>>>> parts of an application (e.g., processing pipeline stages), which
>>>> share the same underlying event device.
>>>>
>>>> The event dispatcher replaces the conditional logic (often, a switch
>>>> statement) that typically follows an event device dequeue operation,
>>>> where events are dispatched to different parts of the application
>>>> based on the destination queue id.
>>> # If the device has all type queue[1] this RFC would restrict to
>>> use queue ONLY as stage. A stage can be a Queue Type also.
>>> How we can abstract this in this model?
>>
>> "All queue type" is about scheduling policy. I would think that would be
>> independent of the "logical endpoint" of the event (i.e., the queue id).
>> I feel like I'm missing something here.
> Each queue type also can be represented as a stage.
> For example, If the system has only one queue, the Typical IPsec
> outbound stages can be
> Q0-Ordered(For SA lookup) -> Q0(Atomic)(For Sequence number update) ->
> Q0(Orderd)(Crypto operation)->Q0(Atomic)(Send on wire)


OK, this makes sense.


Would such an application want to register a callback 
per-queue-per-sched-type, or just per-sched-type? In your example, if 
there were also a queue Q1, would you want the option of registering 
different callbacks for atomic-type events on Q0 and Q1?


Would you want to dispatch based on anything else in the event? You 
could basically do it on any field (flow id, priority, etc.), but is 
there some other field that's commonly used to denote a processing stage?


>>
>>> # Also, I think, it may make sense to add this type of infrastructure as
>>> helper functions as these are built on top of existing APIs i.e There
>>> is no support
>>> required from the driver to establish this model. IMO, If we need to
>>> add such support as
>>> one fixed set of functionality, we could have helper APIs to express a certain
>>> usage of eventdev, rather than defining it as the only way to do this.
>>> I think a helper function can be used as an abstraction to define
>>> this kind of model.
>>>
>>> # Also, There is function pointer overhead and aggregating the events
>>> in implementation,
>>> That may be not always "the" optimized model of making it work vs switch case in
>>> application.
>>
>> Sure, but what to do in a reasonable generic framework?
>>
>>
>> If you are very sensitive to that 20 cc or whatever function pointer
>> call, you won't use this library. Or you will, and use static linking
>> and LTO to get rid of that overhead.
>>
>>
>> Probably, you have a few queues, not many. Probably, your dequeue bursts
>> are large, if the system load is high (and otherwise, you don't care
>> about efficiency). Then, you will have at least a couple of events per
>> function call.
> I am fine with this library and exposing it as a function pointer if
> someone needs to
> have a "helper" function to model the system around this logic.
>
> This RFC looks good to me in general. I would suggest to make it as
>
> - Helper functions i.e., if someone chooses to write the stage in
> this way, it can be enabled through this helper function.
> Providing it as a helper function depicts that this is one way to do
> the stuff, but NOT the ONLY way.
> - Abstract stages as a queue(which already added in the patch) and
> each type in the queue for all type queue cases.
> - Enhance test-eventdev to showcase the functionality and performance
> of these helpers.
>
> Thanks for the RFC.
>
>>
>>> [1]
>>> See RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES in
>>> https://doc.dpdk.org/guides/prog_guide/eventdev.html
>>>
>>>
>>>> The concept is similar to a UNIX file descriptor event loop library.
>>>> Instead of tying callback functions to fds as for example libevent
>>>> does, the event dispatcher binds callbacks to queue ids.
>>>>
>>>> An event dispatcher is configured to dequeue events from a specific
>>>> event device, and ties into the service core framework, to do its (and
>>>> the application's) work.
>>>>
>>>> The event dispatcher provides a convenient way for an eventdev-based
>>>> application to use service cores for application-level processing, and
>>>> thus for sharing those cores with other DPDK services.
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> ---
>>>>    lib/librte_eventdev/Makefile                 |   2 +
>>>>    lib/librte_eventdev/meson.build              |   6 +-
>>>>    lib/librte_eventdev/rte_event_dispatcher.c   | 420 +++++++++++++++++++
>>>>    lib/librte_eventdev/rte_event_dispatcher.h   | 251 +++++++++++
>>>>    lib/librte_eventdev/rte_eventdev_version.map |  10 +
>>>>    5 files changed, 687 insertions(+), 2 deletions(-)
>>>>    create mode 100644 lib/librte_eventdev/rte_event_dispatcher.c
>>>>    create mode 100644 lib/librte_eventdev/rte_event_dispatcher.h
>>>>
>>>> diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
>>>> index 0715256bb4..614d53af1b 100644
>>>> --- a/lib/librte_eventdev/Makefile
>>>> +++ b/lib/librte_eventdev/Makefile
>>>> @@ -26,6 +26,7 @@ SRCS-y += rte_event_eth_rx_adapter.c
>>>>    SRCS-y += rte_event_timer_adapter.c
>>>>    SRCS-y += rte_event_crypto_adapter.c
>>>>    SRCS-y += rte_event_eth_tx_adapter.c
>>>> +SRCS-y += rte_event_dispatcher.c
>>>>
>>>>    # export include files
>>>>    SYMLINK-y-include += rte_eventdev.h
>>>> @@ -40,6 +41,7 @@ SYMLINK-y-include += rte_event_timer_adapter.h
>>>>    SYMLINK-y-include += rte_event_timer_adapter_pmd.h
>>>>    SYMLINK-y-include += rte_event_crypto_adapter.h
>>>>    SYMLINK-y-include += rte_event_eth_tx_adapter.h
>>>> +SYMLINK-y-include += rte_event_dispatcher.h
>>>>
>>>>    # versioning export map
>>>>    EXPORT_MAP := rte_eventdev_version.map
>>>> diff --git a/lib/librte_eventdev/meson.build b/lib/librte_eventdev/meson.build
>>>> index d1f25ee5ca..2ca81983b5 100644
>>>> --- a/lib/librte_eventdev/meson.build
>>>> +++ b/lib/librte_eventdev/meson.build
>>>> @@ -13,7 +13,8 @@ sources = files('rte_eventdev.c',
>>>>                   'rte_event_eth_rx_adapter.c',
>>>>                   'rte_event_timer_adapter.c',
>>>>                   'rte_event_crypto_adapter.c',
>>>> -               'rte_event_eth_tx_adapter.c')
>>>> +               'rte_event_eth_tx_adapter.c',
>>>> +               'rte_event_dispatcher.c')
>>>>    headers = files('rte_eventdev.h',
>>>>                   'rte_eventdev_pmd.h',
>>>>                   'rte_eventdev_pmd_pci.h',
>>>> @@ -25,5 +26,6 @@ headers = files('rte_eventdev.h',
>>>>                   'rte_event_timer_adapter.h',
>>>>                   'rte_event_timer_adapter_pmd.h',
>>>>                   'rte_event_crypto_adapter.h',
>>>> -               'rte_event_eth_tx_adapter.h')
>>>> +               'rte_event_eth_tx_adapter.h',
>>>> +               'rte_event_dispatcher.h')
>>>>    deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev']
>>>> diff --git a/lib/librte_eventdev/rte_event_dispatcher.c b/lib/librte_eventdev/rte_event_dispatcher.c
>>>> new file mode 100644
>>>> index 0000000000..1c7e55a752
>>>> --- /dev/null
>>>> +++ b/lib/librte_eventdev/rte_event_dispatcher.c
>>>> @@ -0,0 +1,420 @@
>>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>>> + * Copyright(c) 2021 Ericsson AB
>>>> + */
>>>> +
>>>> +#include <stdbool.h>
>>>> +#include <stdint.h>
>>>> +
>>>> +#include <rte_lcore.h>
>>>> +#include <rte_service_component.h>
>>>> +#include <rte_eventdev_pmd.h>
>>>> +
>>>> +#include <rte_event_dispatcher.h>
>>>> +
>>>> +#define RED_MAX_PORTS_PER_LCORE (4)
>>>> +
>>>> +struct rte_event_dispatcher_lcore_port {
>>>> +       uint8_t port_id;
>>>> +       uint16_t batch_size;
>>>> +       uint64_t timeout;
>>>> +};
>>>> +
>>>> +struct rte_event_dispatcher_lcore {
>>>> +       uint8_t num_ports;
>>>> +       struct rte_event_dispatcher_lcore_port ports[RED_MAX_PORTS_PER_LCORE];
>>>> +};
>>>> +
>>>> +struct rte_event_dispatcher_cb {
>>>> +       rte_event_dispatcher_cb_t cb_fun;
>>>> +       void *cb_data;
>>>> +};
>>>> +
>>>> +struct rte_event_dispatcher {
>>>> +       uint8_t id;
>>>> +       uint8_t event_dev_id;
>>>> +       int socket_id;
>>>> +       uint32_t service_id;
>>>> +       struct rte_event_dispatcher_lcore lcores[RTE_MAX_LCORE];
>>>> +       struct rte_event_dispatcher_cb queue_cbs[UINT8_MAX];
>>>> +       struct rte_event_dispatcher_cb fallback;
>>>> +};
>>>> +
>>>> +static struct rte_event_dispatcher *dispatchers[UINT8_MAX];
>>>> +
>>>> +static bool
>>>> +red_has_dispatcher(uint8_t id)
>>>> +{
>>>> +       return dispatchers[id] != NULL;
>>>> +}
>>>> +
>>>> +static struct rte_event_dispatcher *
>>>> +red_get_dispatcher(uint8_t id)
>>>> +{
>>>> +       return dispatchers[id];
>>>> +}
>>>> +
>>>> +static void
>>>> +red_set_dispatcher(uint8_t id, struct rte_event_dispatcher *dispatcher)
>>>> +{
>>>> +       dispatchers[id] = dispatcher;
>>>> +}
>>>> +
>>>> +#define RED_VALID_ID_OR_RET_EINVAL(id)                                 \
>>>> +       do {                                                            \
>>>> +               if (unlikely(!red_has_dispatcher(id))) {                \
>>>> +                       RTE_EDEV_LOG_ERR("Invalid dispatcher id %d\n", id); \
>>>> +                       return -EINVAL;                                 \
>>>> +               }                                                       \
>>>> +       } while (0)
>>>> +
>>>> +static struct rte_event_dispatcher_cb *
>>>> +red_lookup_cb(struct rte_event_dispatcher *dispatcher, uint8_t queue_id)
>>>> +{
>>>> +       struct rte_event_dispatcher_cb *cb = &dispatcher->queue_cbs[queue_id];
>>>> +
>>>> +       if (unlikely(cb->cb_fun == NULL))
>>>> +           cb = &dispatcher->fallback;
>>>> +
>>>> +       return cb;
>>>> +}
>>>> +
>>>> +static void
>>>> +red_dispatch_events(struct rte_event_dispatcher *dispatcher,
>>>> +                   struct rte_event *events, uint16_t num_events)
>>>> +{
>>>> +       uint16_t cb_start;
>>>> +       uint16_t cb_len;
>>>> +
>>>> +       for (cb_start = 0; cb_start < num_events; cb_start += cb_len) {
>>>> +               uint16_t cb_end = cb_start;
>>>> +               uint8_t queue_id = events[cb_start].queue_id;
>>>> +               struct rte_event_dispatcher_cb *cb;
>>>> +
>>>> +               while (++cb_end < num_events &&
>>>> +                      events[cb_end].queue_id == queue_id)
>>>> +                       ;
>>>> +
>>>> +               cb_len = cb_end - cb_start;
>>>> +
>>>> +               cb = red_lookup_cb(dispatcher, queue_id);
>>>> +
>>>> +               if (unlikely(cb->cb_fun == NULL)) {
>>>> +                       RTE_EDEV_LOG_ERR("Attempted to dispatch %d events "
>>>> +                                        "for queue id %d, but no queue or "
>>>> +                                        "fallback cb were configured\n",
>>>> +                                        cb_len, queue_id);
>>>> +                       continue;
>>>> +               }
>>>> +
>>>> +               cb->cb_fun(&events[cb_start], cb_len, cb->cb_data);
>>>> +       }
>>>> +}
>>>> +
>>>> +static void
>>>> +red_port_dequeue(struct rte_event_dispatcher *dispatcher,
>>>> +                struct rte_event_dispatcher_lcore_port *port)
>>>> +{
>>>> +       uint16_t batch_size = port->batch_size;
>>>> +       struct rte_event events[batch_size];
>>>> +       uint16_t n;
>>>> +
>>>> +       n = rte_event_dequeue_burst(dispatcher->event_dev_id, port->port_id,
>>>> +                                   events, batch_size, port->timeout);
>>>> +
>>>> +       red_dispatch_events(dispatcher, events, n);
>>>> +}
>>>> +
>>>> +static int32_t
>>>> +red_lcore_process(void *userdata)
>>>> +{
>>>> +       uint16_t i;
>>>> +       struct rte_event_dispatcher *dispatcher = userdata;
>>>> +       unsigned int lcore_id = rte_lcore_id();
>>>> +       struct rte_event_dispatcher_lcore *lcore =
>>>> +               &dispatcher->lcores[lcore_id];
>>>> +
>>>> +       for (i = 0; i < lcore->num_ports; i++) {
>>>> +               struct rte_event_dispatcher_lcore_port *port =
>>>> +                       &lcore->ports[i];
>>>> +
>>>> +               red_port_dequeue(dispatcher, port);
>>>> +       }
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +static int
>>>> +red_service_runstate_set(uint32_t service_id, int state)
>>>> +{
>>>> +       int rc;
>>>> +
>>>> +       rc = rte_service_component_runstate_set(service_id, state);
>>>> +
>>>> +       if (rc)
>>>> +               RTE_EDEV_LOG_ERR("Error %d occurred while setting service "
>>>> +                                "component run state to %d\n", rc, state);
>>>> +
>>>> +       return rc;
>>>> +}
>>>> +
>>>> +static int
>>>> +red_service_register(struct rte_event_dispatcher *dispatcher)
>>>> +{
>>>> +       struct rte_service_spec service = {
>>>> +               .callback = red_lcore_process,
>>>> +               .callback_userdata = dispatcher,
>>>> +               .capabilities = RTE_SERVICE_CAP_MT_SAFE,
>>>> +               .socket_id = dispatcher->socket_id
>>>> +       };
>>>> +       int rc;
>>>> +
>>>> +       snprintf(service.name, RTE_SERVICE_NAME_MAX - 1, "red_%d",
>>>> +                dispatcher->id);
>>>> +
>>>> +       rc = rte_service_component_register(&service, &dispatcher->service_id);
>>>> +
>>>> +       if (rc)
>>>> +               RTE_EDEV_LOG_ERR("Registration of event dispatcher service "
>>>> +                                "%s failed with error code %d\n",
>>>> +                                service.name, rc);
>>>> +
>>>> +       rc = red_service_runstate_set(dispatcher->service_id, 1);
>>>> +
>>>> +       if (rc)
>>>> +               rte_service_component_unregister(dispatcher->service_id);
>>>> +
>>>> +       return rc;
>>>> +}
>>>> +
>>>> +static int
>>>> +red_service_unregister(struct rte_event_dispatcher *dispatcher)
>>>> +{
>>>> +       int rc;
>>>> +
>>>> +       rc = red_service_runstate_set(dispatcher->service_id, 0);
>>>> +
>>>> +       if (rc)
>>>> +               return rc;
>>>> +
>>>> +       rc = rte_service_component_unregister(dispatcher->service_id);
>>>> +
>>>> +       if (rc)
>>>> +               RTE_EDEV_LOG_ERR("Unregistration of event dispatcher service "
>>>> +                                "failed with error code %d\n", rc);
>>>> +
>>>> +       return rc;
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_create(uint8_t id, uint8_t event_dev_id)
>>>> +{
>>>> +       int socket_id;
>>>> +       struct rte_event_dispatcher *dispatcher;
>>>> +       int rc;
>>>> +
>>>> +       if (red_has_dispatcher(id)) {
>>>> +               RTE_EDEV_LOG_ERR("Dispatcher with id %d already exists\n",
>>>> +                                id);
>>>> +               return -EEXIST;
>>>> +       }
>>>> +
>>>> +       socket_id = rte_event_dev_socket_id(event_dev_id);
>>>> +
>>>> +       dispatcher =
>>>> +               rte_malloc_socket("event dispatcher",
>>>> +                                 sizeof(struct rte_event_dispatcher),
>>>> +                                 RTE_CACHE_LINE_SIZE, socket_id);
>>>> +
>>>> +       if (dispatcher == NULL) {
>>>> +               RTE_EDEV_LOG_ERR("Unable to allocate memory for event "
>>>> +                                "dispatcher\n");
>>>> +               return -ENOMEM;
>>>> +       }
>>>> +
>>>> +       *dispatcher = (struct rte_event_dispatcher) {
>>>> +               .id = id,
>>>> +               .event_dev_id = event_dev_id,
>>>> +               .socket_id = socket_id
>>>> +       };
>>>> +
>>>> +       rc = red_service_register(dispatcher);
>>>> +
>>>> +       if (rc < 0) {
>>>> +               rte_free(dispatcher);
>>>> +               return rc;
>>>> +       }
>>>> +
>>>> +       red_set_dispatcher(id, dispatcher);
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_free(uint8_t id)
>>>> +{
>>>> +       struct rte_event_dispatcher *dispatcher;
>>>> +       int rc;
>>>> +
>>>> +       RED_VALID_ID_OR_RET_EINVAL(id);
>>>> +       dispatcher = red_get_dispatcher(id);
>>>> +
>>>> +       rc = red_service_unregister(dispatcher);
>>>> +
>>>> +       if (rc)
>>>> +               return rc;
>>>> +
>>>> +       red_set_dispatcher(id, NULL);
>>>> +
>>>> +       rte_free(dispatcher);
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_service_id_get(uint8_t id, uint32_t *service_id)
>>>> +{
>>>> +       struct rte_event_dispatcher *dispatcher;
>>>> +
>>>> +       RED_VALID_ID_OR_RET_EINVAL(id);
>>>> +       dispatcher = red_get_dispatcher(id);
>>>> +
>>>> +       *service_id = dispatcher->service_id;
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +static int16_t
>>>> +lcore_port_index(struct rte_event_dispatcher_lcore *lcore,
>>>> +                uint8_t event_port_id)
>>>> +{
>>>> +       uint16_t i;
>>>> +
>>>> +       for (i = 0; i < lcore->num_ports; i++) {
>>>> +               struct rte_event_dispatcher_lcore_port *port =
>>>> +                       &lcore->ports[i];
>>>> +               if (port->port_id == event_port_id)
>>>> +                       return i;
>>>> +       }
>>>> +
>>>> +       return -1;
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_bind_port_to_lcore(uint8_t id, uint8_t event_port_id,
>>>> +                                       uint16_t batch_size, uint64_t timeout,
>>>> +                                       unsigned int lcore_id)
>>>> +{
>>>> +       struct rte_event_dispatcher *dispatcher;
>>>> +       struct rte_event_dispatcher_lcore *lcore;
>>>> +       struct rte_event_dispatcher_lcore_port *port;
>>>> +
>>>> +       RED_VALID_ID_OR_RET_EINVAL(id);
>>>> +       dispatcher = red_get_dispatcher(id);
>>>> +
>>>> +       lcore = &dispatcher->lcores[lcore_id];
>>>> +
>>>> +       if (lcore->num_ports == RED_MAX_PORTS_PER_LCORE)
>>>> +               return -ENOMEM;
>>>> +
>>>> +       if (lcore_port_index(lcore, event_port_id) >= 0)
>>>> +               return -EEXIST;
>>>> +
>>>> +       port = &lcore->ports[lcore->num_ports];
>>>> +
>>>> +       *port = (struct rte_event_dispatcher_lcore_port) {
>>>> +               .port_id = event_port_id,
>>>> +               .batch_size = batch_size,
>>>> +               .timeout = timeout
>>>> +       };
>>>> +
>>>> +       lcore->num_ports++;
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_unbind_port_from_lcore(uint8_t id, uint8_t event_port_id,
>>>> +                                           unsigned int lcore_id)
>>>> +{
>>>> +       struct rte_event_dispatcher *dispatcher;
>>>> +       struct rte_event_dispatcher_lcore *lcore;
>>>> +       int16_t port_idx;
>>>> +       struct rte_event_dispatcher_lcore_port *port;
>>>> +       struct rte_event_dispatcher_lcore_port *last;
>>>> +
>>>> +       RED_VALID_ID_OR_RET_EINVAL(id);
>>>> +       dispatcher = red_get_dispatcher(id);
>>>> +
>>>> +       lcore = &dispatcher->lcores[lcore_id];
>>>> +
>>>> +       port_idx = lcore_port_index(lcore, event_port_id);
>>>> +
>>>> +       if (port_idx < 0)
>>>> +               return -ENOENT;
>>>> +
>>>> +       port = &lcore->ports[port_idx];
>>>> +       last = &lcore->ports[lcore->num_ports - 1];
>>>> +
>>>> +       if (port != last)
>>>> +               *port = *last;
>>>> +
>>>> +       lcore->num_ports--;
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +static int
>>>> +red_cb_manage(uint8_t id, uint8_t queue_id, bool reg, bool fallback,
>>>> +          rte_event_dispatcher_cb_t cb_fun, void *cb_data)
>>>> +{
>>>> +       struct rte_event_dispatcher *dispatcher;
>>>> +       struct rte_event_dispatcher_cb *cb;
>>>> +
>>>> +       RED_VALID_ID_OR_RET_EINVAL(id);
>>>> +       dispatcher = red_get_dispatcher(id);
>>>> +
>>>> +       if (fallback)
>>>> +               cb = &dispatcher->fallback;
>>>> +       else
>>>> +               cb = &dispatcher->queue_cbs[queue_id];
>>>> +
>>>> +       if (reg && cb->cb_fun != NULL)
>>>> +               return -EEXIST;
>>>> +
>>>> +       if (!reg && cb->cb_fun == NULL)
>>>> +               return -ENOENT;
>>>> +
>>>> +       *cb = (struct rte_event_dispatcher_cb) {
>>>> +               .cb_fun = cb_fun,
>>>> +               .cb_data = cb_data
>>>> +       };
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_register(uint8_t id, uint8_t queue_id,
>>>> +                             rte_event_dispatcher_cb_t cb_fun, void *cb_data)
>>>> +{
>>>> +       return red_cb_manage(id, queue_id, true, false, cb_fun, cb_data);
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_unregister(uint8_t id, uint8_t queue_id)
>>>> +{
>>>> +       return red_cb_manage(id, queue_id, false, false, NULL, NULL);
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_register_fallback(uint8_t id,
>>>> +                                      rte_event_dispatcher_cb_t cb_fun,
>>>> +                                      void *cb_data)
>>>> +{
>>>> +       return red_cb_manage(id, 0, true, true, cb_fun, cb_data);
>>>> +}
>>>> +
>>>> +int
>>>> +rte_event_dispatcher_unregister_fallback(uint8_t id)
>>>> +{
>>>> +       return red_cb_manage(id, 0, false, true, NULL, NULL);
>>>> +}
>>>> diff --git a/lib/librte_eventdev/rte_event_dispatcher.h b/lib/librte_eventdev/rte_event_dispatcher.h
>>>> new file mode 100644
>>>> index 0000000000..11f57571ab
>>>> --- /dev/null
>>>> +++ b/lib/librte_eventdev/rte_event_dispatcher.h
>>>> @@ -0,0 +1,251 @@
>>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>>> + * Copyright(c) 2021 Ericsson AB
>>>> + */
>>>> +
>>>> +#ifndef __RTE_EVENT_DISPATCHER_H__
>>>> +#define __RTE_EVENT_DISPATCHER_H__
>>>> +
>>>> +/**
>>>> + * @file
>>>> + *
>>>> + * RTE Event Dispatcher
>>>> + *
>>>> + */
>>>> +
>>>> +#ifdef __cplusplus
>>>> +extern "C" {
>>>> +#endif
>>>> +
>>>> +#include <rte_eventdev.h>
>>>> +
>>>> +/**
>>>> + * Function prototype for dispatcher callbacks.
>>>> + *
>>>> + * @param events
>>>> + *  Pointer to an array of events.
>>>> + *
>>>> + * @param num
>>>> + *  The number of events in the @p events array.
>>>> + *
>>>> + * @param cb_data
>>>> + *  The pointer supplied by the application in
>>>> + *  rte_event_dispatcher_register() or
>>>> + *  rte_event_dispatcher_register_fallback().
>>>> + */
>>>> +
>>>> +typedef void (*rte_event_dispatcher_cb_t)(struct rte_event *events,
>>>> +                                         uint16_t num, void *cb_data);
>>>> +
>>>> +/**
>>>> + * Create an event dispatcher with the specified id.
>>>> + *
>>>> + * @param id
>>>> + *  An application-specified, unique (across all event dispatcher
>>>> + *  instances) identifier.
>>>> + *
>>>> + * @param event_dev_id
>>>> + *  The identifier of the event device from which this event dispatcher
>>>> + *  will dequeue events.
>>>> + *
>>>> + * @return
>>>> + *   - 0: Success
>>>> + *   - <0: Error code on failure
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_create(uint8_t id, uint8_t event_dev_id);
>>>> +
>>>> +/**
>>>> + * Frees an event dispatcher with the specified id.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @return
>>>> + *   - 0: Success
>>>> + *   - <0: Error code on failure
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_free(uint8_t id);
>>>> +
>>>> +/**
>>>> + * Retrieve the service identifier of the event dispatcher.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @param [out] service_id
>>>> + *  A pointer to a caller-supplied buffer where the event dispatcher's
>>>> + *  service id will be stored.
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_service_id_get(uint8_t id, uint32_t *service_id);
>>>> +
>>>> +/**
>>>> + * Binds an event device port to a specific lcore on the specified
>>>> + * event dispatcher.
>>>> + *
>>>> + * This function configures an event dispatcher to dequeue events from
>>>> + * an event device port (as specified by @p event_port_id), in case
>>>> + * its service function is run on particular lcore (as specified by @p
>>>> + * lcore_id).
>>>> + *
>>>> + * Multiple event device ports may be bound to the same lcore. A
>>>> + * particular port may only be bound to one lcore.
>>>> + *
>>>> + * If the event dispatcher service is mapped (with
>>>> + * rte_service_map_lcore_set()) to a lcore for which no ports are
>>>> + * bound, the service function will be a no-operation.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @param event_port_id
>>>> + *  The event device port identifier.
>>>> + *
>>>> + * @param batch_size
>>>> + *  The batch size to use in rte_event_dequeue_burst(), for the
>>>> + *  configured event device port and lcore.
>>>> + *
>>>> + * @param timeout
>>>> + *  The timeout parameter to use in rte_event_dequeue_burst(), for the
>>>> + *  configured event device port and lcore.
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_bind_port_to_lcore(uint8_t id, uint8_t event_port_id,
>>>> +                                       uint16_t batch_size, uint64_t timeout,
>>>> +                                       unsigned int lcore_id);
>>>> +
>>>> +/**
>>>> + * Unbind an event device port from a specific lcore.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @param event_port_id
>>>> + *  The event device port identifier.
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_unbind_port_from_lcore(uint8_t id, uint8_t event_port_id,
>>>> +                                           unsigned int lcore_id);
>>>> +
>>>> +/**
>>>> + * Register a callback function for the specified queue identifier.
>>>> + *
>>>> + * At most one callback may be registered per queue id.
>>>> + *
>>>> + * The same callback function may be registered for multiple queue ids.
>>>> + *
>>>> + * For each callback invocation, events belonging to a single queue id
>>>> + * will be dispatched.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @param queue_id
>>>> + *  The event device queue id for which @p cb_fun should be called.
>>>> + *
>>>> + * @param cb_fun
>>>> + *  The callback function.
>>>> + *
>>>> + * @param cb_data
>>>> + *  A pointer to some application-specific opaque data (or NULL),
>>>> + *  which is supplied back to the application in the callback.
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_register(uint8_t id, uint8_t queue_id,
>>>> +                             rte_event_dispatcher_cb_t cb_fun, void *cb_data);
>>>> +
>>>> +/**
>>>> + * Unregister a callback function for the specified queue identifier.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @param queue_id
>>>> + *  The event device queue id for which the callback should be removed.
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_unregister(uint8_t id, uint8_t queue_id);
>>>> +
>>>> +/**
>>>> + * Register a fallback callback function for the specified queue
>>>> + * identifier.
>>>> + *
>>>> + * Only events for which no queue-specific callback function has been
>>>> + * registered will be dispatched to the @p cb_fun callback.
>>>> + *
>>>> + * At most one callback fallback function may be registered.
>>>> + *
>>>> + * For each callback invocation, only events belonging to a single
>>>> + * queue id will be included.
>>>> + *
>>>> + * If the event dispatcher encounters an event with a queue id for
>>>> + * which the application has not registered any specific callback, and
>>>> + * there is also no fallback configured, the event will be dropped.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @param cb_fun
>>>> + *  The fallback callback function.
>>>> + *
>>>> + * @param cb_data
>>>> + *  A pointer to some application-specific opaque data (or NULL),
>>>> + *  which is supplied back to the application in the callback.
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_register_fallback(uint8_t id,
>>>> +                                      rte_event_dispatcher_cb_t cb_fun,
>>>> +                                      void *cb_data);
>>>> +
>>>> +/**
>>>> + * Unregister the fallback callback function.
>>>> + *
>>>> + * @param id
>>>> + *  The event dispatcher identifier.
>>>> + *
>>>> + * @return
>>>> + *  - 0: Success
>>>> + *  - <0: Error code on failure.
>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_event_dispatcher_unregister_fallback(uint8_t id);
>>>> +
>>>> +#ifdef __cplusplus
>>>> +}
>>>> +#endif
>>>> +
>>>> +#endif /* __RTE_EVENT_DISPATCHER_H__ */
>>>> diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
>>>> index 91a62cd077..dcb887601b 100644
>>>> --- a/lib/librte_eventdev/rte_eventdev_version.map
>>>> +++ b/lib/librte_eventdev/rte_eventdev_version.map
>>>> @@ -134,4 +134,14 @@ EXPERIMENTAL {
>>>>           __rte_eventdev_trace_crypto_adapter_queue_pair_del;
>>>>           __rte_eventdev_trace_crypto_adapter_start;
>>>>           __rte_eventdev_trace_crypto_adapter_stop;
>>>> +
>>>> +       rte_event_dispatcher_create;
>>>> +       rte_event_dispatcher_free;
>>>> +       rte_event_dispatcher_service_id_get;
>>>> +       rte_event_dispatcher_bind_port_to_lcore;
>>>> +       rte_event_dispatcher_unbind_port_from_lcore;
>>>> +       rte_event_dispatcher_register;
>>>> +       rte_event_dispatcher_unregister;
>>>> +       rte_event_dispatcher_register_fallback;
>>>> +       rte_event_dispatcher_unregister_fallback;
>>>>    };
>>>> --
>>>> 2.25.1
>>>>



Thread overview: 10+ messages
2021-02-18 18:30 Mattias Rönnblom
2021-02-22 15:28 ` Luca Boccassi
2021-02-26  7:48   ` Mattias Rönnblom
2021-02-25 12:32 ` Jerin Jacob
2021-02-26  8:01   ` Mattias Rönnblom
2021-03-07 13:04     ` Jerin Jacob
2021-03-15 14:44       ` Mattias Rönnblom [this message]
2021-03-15 15:00         ` Van Haaren, Harry
2021-03-22  9:50           ` Mattias Rönnblom
2021-04-09 11:32             ` [dpdk-dev] [RFC v2] " Mattias Rönnblom
