From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Date: Tue, 20 May 2014 11:00:55 +0100
Message-Id: <1400580057-30155-3-git-send-email-bruce.richardson@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <1400580057-30155-1-git-send-email-bruce.richardson@intel.com>
References: <1400580057-30155-1-git-send-email-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH 2/4] distributor: new packet distributor library

This adds the code for a new Intel DPDK library for packet distribution.
The distributor is a component designed to pass packets one at a time to
workers, with dynamic load balancing. Using the RSS field in the mbuf as a
tag, the distributor tracks which tag is being processed by which worker
and ensures that no two packets with the same tag are in flight
simultaneously. Once a tag is no longer in flight, the next packet with
that tag is sent to the next available core.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
A brief usage sketch of the new API follows the patch, after the diff.

 lib/librte_distributor/Makefile          |  50 ++++
 lib/librte_distributor/rte_distributor.c | 417 +++++++++++++++++++++++++++++++
 lib/librte_distributor/rte_distributor.h | 173 +++++++++++++
 3 files changed, 640 insertions(+)
 create mode 100644 lib/librte_distributor/Makefile
 create mode 100644 lib/librte_distributor/rte_distributor.c
 create mode 100644 lib/librte_distributor/rte_distributor.h

diff --git a/lib/librte_distributor/Makefile b/lib/librte_distributor/Makefile
new file mode 100644
index 0000000..36699f8
--- /dev/null
+++ b/lib/librte_distributor/Makefile
@@ -0,0 +1,50 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_distributor.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) := rte_distributor.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)-include := rte_distributor.h + +# this lib needs eal +DEPDIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += lib/librte_eal +DEPDIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += lib/librte_mbuf + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c new file mode 100644 index 0000000..cc8384e --- /dev/null +++ b/lib/librte_distributor/rte_distributor.c @@ -0,0 +1,417 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "rte_distributor.h" + +#define NO_FLAGS 0 +#define RTE_DISTRIB_PREFIX "DT_" + +/* we will use the bottom four bits of pointer for flags, shifting out + * the top four bits to make room (since a 64-bit pointer actually only uses + * 48 bits). An arithmetic-right-shift will then appropriately restore the + * original pointer value with proper sign extension into the top bits. */ +#define RTE_DISTRIB_FLAG_BITS 4 +#define RTE_DISTRIB_FLAGS_MASK (0x0F) +#define RTE_DISTRIB_NO_BUF 0 +#define RTE_DISTRIB_GET_BUF (1) +#define RTE_DISTRIB_RETURN_BUF (2) + +#define RTE_DISTRIB_BACKLOG_SIZE 8 +#define RTE_DISTRIB_BACKLOG_MASK (RTE_DISTRIB_BACKLOG_SIZE - 1) + +#define RTE_DISTRIB_MAX_RETURNS 128 +#define RTE_DISTRIB_RETURNS_MASK (RTE_DISTRIB_MAX_RETURNS - 1) + +union rte_distributor_buffer { + volatile int64_t bufptr64; + char pad[CACHE_LINE_SIZE*3]; +} __rte_cache_aligned; + +struct rte_distributor_backlog { + unsigned start; + unsigned count; + int64_t pkts[RTE_DISTRIB_BACKLOG_SIZE]; +}; + +struct rte_distributor_returned_pkts { + unsigned start; + unsigned count; + struct rte_mbuf *mbufs[RTE_DISTRIB_MAX_RETURNS]; +}; + +struct rte_distributor { + TAILQ_ENTRY(rte_distributor) next; /**< Next in list. */ + + char name[RTE_DISTRIBUTOR_NAMESIZE]; /**< Name of the ring. */ + unsigned num_workers; /**< Number of workers polling */ + + uint32_t in_flight_tags[RTE_MAX_LCORE]; + struct rte_distributor_backlog backlog[RTE_MAX_LCORE]; + + union rte_distributor_buffer bufs[RTE_MAX_LCORE]; + + struct rte_distributor_returned_pkts returns; +}; + +TAILQ_HEAD(rte_distributor_list, rte_distributor); + +/**** APIs called by workers ****/ + +struct rte_mbuf * +rte_distributor_get_pkt(struct rte_distributor *d, + unsigned worker_id, struct rte_mbuf *oldpkt, + unsigned reserved __rte_unused) +{ + union rte_distributor_buffer *buf = &d->bufs[worker_id]; + int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS) | \ + RTE_DISTRIB_GET_BUF; + while (unlikely(buf->bufptr64 & RTE_DISTRIB_FLAGS_MASK)) + rte_pause(); + buf->bufptr64 = req; + while (buf->bufptr64 & RTE_DISTRIB_GET_BUF) + rte_pause(); + /* since bufptr64 is a signed value, this should be an arithmetic shift */ + int64_t ret = buf->bufptr64 >> RTE_DISTRIB_FLAG_BITS; + return (struct rte_mbuf *)((uintptr_t)ret); +} + +int +rte_distributor_return_pkt(struct rte_distributor *d, + unsigned worker_id, struct rte_mbuf *oldpkt) +{ + union rte_distributor_buffer *buf = &d->bufs[worker_id]; + uint64_t req = ((uintptr_t)oldpkt << RTE_DISTRIB_FLAG_BITS) | \ + RTE_DISTRIB_RETURN_BUF; + buf->bufptr64 = req; + return 0; +} + +/**** APIs called on distributor core ***/ + +/* as name suggests, adds a packet to the backlog for a particular worker */ +static int +add_to_backlog(struct rte_distributor_backlog *bl, int64_t item) +{ + if (bl->count == RTE_DISTRIB_BACKLOG_SIZE) + return -1; + + bl->pkts[(bl->start + bl->count++) & (RTE_DISTRIB_BACKLOG_MASK)] = item; + return 0; +} + +/* takes the next packet for a worker off the backlog */ +static int64_t +backlog_pop(struct rte_distributor_backlog *bl) +{ + bl->count--; + return bl->pkts[bl->start++ & RTE_DISTRIB_BACKLOG_MASK]; +} + +/* stores a packet returned from a worker inside the returns array */ +static inline void +store_return(uintptr_t oldbuf, struct rte_distributor *d, + unsigned *ret_start, unsigned *ret_count) +{ + /* store returns in a circular buffer - code is branch-free */ + 
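+	/*
+	 * Branch-free update: the !!(oldbuf) factor in the two index updates
+	 * below zeroes both increments when oldbuf is 0 (no packet was handed
+	 * back), so the store lands in an unused slot and start/count are left
+	 * unchanged. Once count reaches RTE_DISTRIB_RETURNS_MASK, start
+	 * advances instead of count, so the oldest stored return is dropped.
+	 */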
d->returns.mbufs[(*ret_start + *ret_count) + & RTE_DISTRIB_RETURNS_MASK] = (void *)oldbuf; + *ret_start += (*ret_count == RTE_DISTRIB_RETURNS_MASK) & !!(oldbuf); + *ret_count += (*ret_count != RTE_DISTRIB_RETURNS_MASK) & !!(oldbuf); +} + +/* process a set of packets to distribute them to workers */ +int +rte_distributor_process(struct rte_distributor *d, + struct rte_mbuf **mbufs, unsigned num_mbufs) +{ + unsigned next_idx = 0; + unsigned worker = 0; + struct rte_mbuf *next_mb = NULL; + int64_t next_value = 0; + uint32_t new_tag = 0; + unsigned ret_start = d->returns.start, + ret_count = d->returns.count; + + while (next_idx < num_mbufs || next_mb != NULL) { + + int64_t data = d->bufs[worker].bufptr64; + uintptr_t oldbuf = 0; + + if (!next_mb) { + next_mb = mbufs[next_idx++]; + next_value = (((int64_t)(uintptr_t)next_mb) << RTE_DISTRIB_FLAG_BITS); + new_tag = (next_mb->pkt.hash.rss | 1); + + uint32_t match = 0; + unsigned i; + for (i = 0; i < d->num_workers; i++) + match |= (!(d->in_flight_tags[i] ^ new_tag) << i); + + if (match) { + next_mb = NULL; + unsigned worker = __builtin_ctz(match); + if (add_to_backlog(&d->backlog[worker], next_value) < 0) + next_idx--; + } + } + + if ((data & RTE_DISTRIB_GET_BUF) && + (d->backlog[worker].count || next_mb)) { + + if (d->backlog[worker].count) + d->bufs[worker].bufptr64 = + backlog_pop(&d->backlog[worker]); + + else { + d->bufs[worker].bufptr64 = next_value; + d->in_flight_tags[worker] = new_tag; + next_mb = NULL; + } + oldbuf = data >> RTE_DISTRIB_FLAG_BITS; + } + else if (data & RTE_DISTRIB_RETURN_BUF) { + d->in_flight_tags[worker] = 0; + d->bufs[worker].bufptr64 = 0; + if (unlikely(d->backlog[worker].count != 0)) { + /* On return of a packet, we need to move the queued packets + * for this core elsewhere. + * Easiest solution is to set things up for + * a recursive call. That will cause those packets to be queued + * up for the next free core, i.e. it will return as soon as a + * core becomes free to accept the first packet, as subsequent + * ones will be added to the backlog for that core. + */ + struct rte_mbuf *pkts[RTE_DISTRIB_BACKLOG_SIZE]; + unsigned i; + struct rte_distributor_backlog *bl = &d->backlog[worker]; + + for (i = 0; i < bl->count; i++) { + unsigned idx = (bl->start + i) & RTE_DISTRIB_BACKLOG_MASK; + pkts[i] = (void *)((uintptr_t) + (bl->pkts[idx] >> RTE_DISTRIB_FLAG_BITS)); + } + /* recursive call */ + rte_distributor_process(d, pkts, i); + bl->count = bl->start = 0; + } + oldbuf = data >> RTE_DISTRIB_FLAG_BITS; + } + + /* store returns in a circular buffer */ + store_return(oldbuf, d, &ret_start, &ret_count); + + if (++worker == d->num_workers) + worker = 0; + } + /* to finish, check all workers for backlog and schedule work for them + * if they are ready */ + for (worker = 0; worker < d->num_workers; worker++) + if (d->backlog[worker].count && + (d->bufs[worker].bufptr64 & RTE_DISTRIB_GET_BUF)) { + + int64_t oldbuf = d->bufs[worker].bufptr64 >> RTE_DISTRIB_FLAG_BITS; + store_return(oldbuf, d, &ret_start, &ret_count); + + d->bufs[worker].bufptr64 = + backlog_pop(&d->backlog[worker]); + } + + d->returns.start = ret_start; + d->returns.count = ret_count; + return num_mbufs; +} + +/* return to the caller, packets returned from workers */ +int +rte_distributor_returned_pkts(struct rte_distributor *d, + struct rte_mbuf **mbufs, unsigned max_mbufs) +{ + struct rte_distributor_returned_pkts *returns = &d->returns; + unsigned retval = max_mbufs < returns->count ? 
max_mbufs : returns->count; + unsigned i; + + for (i = 0; i < retval; i++) + mbufs[i] = returns->mbufs[(returns->start + i) & + RTE_DISTRIB_RETURNS_MASK]; + returns->start += i; + returns->count -= i; + + return retval; +} + +/* local function used by the flush function only, to reassign a backlog for + * a shutdown core. The process function uses a recursive call for this, but + * that is not done in flush, as we need to track the outstanding packets count. + */ +static inline int +move_worker_backlog(struct rte_distributor *d, unsigned worker) +{ + struct rte_distributor_backlog *bl = &d->backlog[worker]; + unsigned i; + + for (i = 0; i < d->num_workers; i++) { + if (i == worker) + continue; + /* check worker is active and then if backlog will fit */ + if ((d->in_flight_tags[i] != 0 || + (d->bufs[i].bufptr64 & RTE_DISTRIB_GET_BUF)) && + (bl->count + d->backlog[i].count) <= RTE_DISTRIB_BACKLOG_SIZE) { + while (bl->count) + add_to_backlog(&d->backlog[i], backlog_pop(bl)); + return 0; + } + } + return -1; +} + +/* flush the distributor, so that there are no outstanding packets in flight or + * queued up. */ +int +rte_distributor_flush(struct rte_distributor *d) +{ + unsigned worker, total_outstanding = 0; + unsigned flushed = 0; + unsigned ret_start = d->returns.start, + ret_count = d->returns.count; + + for (worker = 0; worker < d->num_workers; worker++) + total_outstanding += d->backlog[worker].count + + !!(d->in_flight_tags[worker]); + + worker = 0; + while (flushed < total_outstanding) { + + if (d->in_flight_tags[worker] != 0 || d->backlog[worker].count) { + const int64_t data = d->bufs[worker].bufptr64; + uintptr_t oldbuf = 0; + + if (data & RTE_DISTRIB_GET_BUF) { + flushed += (d->in_flight_tags[worker] != 0); + if (d->backlog[worker].count) { + d->bufs[worker].bufptr64 = + backlog_pop(&d->backlog[worker]); + /* we need to mark something as being in-flight, but it + * doesn't matter what as we never check it except + * to check for non-zero. 
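+					 * Any non-zero value will do: real tags are stored as
+					 * (hash.rss | 1) by rte_distributor_process(), so they
+					 * are never zero either.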
+ */ + d->in_flight_tags[worker] = 1; + } else { + d->bufs[worker].bufptr64 = RTE_DISTRIB_GET_BUF; + d->in_flight_tags[worker] = 0; + } + oldbuf = data >> RTE_DISTRIB_FLAG_BITS; + } + else if (data & RTE_DISTRIB_RETURN_BUF) { + if (d->backlog[worker].count == 0 || + move_worker_backlog(d, worker) == 0) { + /* only if we move backlog, process this packet */ + d->bufs[worker].bufptr64 = 0; + oldbuf = data >> RTE_DISTRIB_FLAG_BITS; + flushed ++; + d->in_flight_tags[worker] = 0; + } + } + + store_return(oldbuf, d, &ret_start, &ret_count); + } + + if (++worker == d->num_workers) + worker = 0; + } + d->returns.start = ret_start; + d->returns.count = ret_count; + + return flushed; +} + +/* clears the internal returns array in the distributor */ +void +rte_distributor_clear_returns(struct rte_distributor *d) +{ + d->returns.start = d->returns.count = 0; +#ifndef __OPTIMIZE__ + memset(d->returns.mbufs, 0, sizeof(d->returns.mbufs)); +#endif +} + +/* creates a distributor instance */ +struct rte_distributor * +rte_distributor_create(const char *name, + unsigned socket_id, + unsigned num_workers, + struct rte_distributor_extra_args *args __rte_unused) +{ + struct rte_distributor *d; + struct rte_distributor_list *distributor_list; + char mz_name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + /* compilation-time checks */ + RTE_BUILD_BUG_ON((sizeof(*d) & CACHE_LINE_MASK) != 0); + RTE_BUILD_BUG_ON((RTE_MAX_LCORE & 7) != 0); + + if (name == NULL || num_workers >= RTE_MAX_LCORE) { + rte_errno = EINVAL; + return NULL; + } + rte_snprintf(mz_name, sizeof(mz_name), RTE_DISTRIB_PREFIX"%s", name); + mz = rte_memzone_reserve(mz_name, sizeof(*d), socket_id, NO_FLAGS); + if (mz == NULL) { + rte_errno = ENOMEM; + return NULL; + } + + /* check that we have an initialised tail queue */ + if ((distributor_list = RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_DISTRIBUTOR, + rte_distributor_list)) == NULL) { + rte_errno = E_RTE_NO_TAILQ; + return NULL; + } + + d = mz->addr; + rte_snprintf(d->name, sizeof(d->name), "%s", name); + d->num_workers = num_workers; + TAILQ_INSERT_TAIL(distributor_list, d, next); + + return d; +} + diff --git a/lib/librte_distributor/rte_distributor.h b/lib/librte_distributor/rte_distributor.h new file mode 100644 index 0000000..d684ff9 --- /dev/null +++ b/lib/librte_distributor/rte_distributor.h @@ -0,0 +1,173 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_DISTRIBUTOR_H_
+#define _RTE_DISTRIBUTOR_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include
+
+#define RTE_DISTRIBUTOR_NAMESIZE 32 /**< Length of name for instance */
+
+struct rte_distributor;
+
+struct rte_distributor_extra_args { }; /**< reserved for future use */
+
+/**
+ * Function to create a new distributor instance
+ *
+ * Reserves the memory needed for the distributor operation and
+ * initializes the distributor to work with the configured number of workers.
+ *
+ * @param name
+ *   The name to be given to the distributor instance.
+ * @param socket_id
+ *   The NUMA node on which the memory is to be allocated
+ * @param num_workers
+ *   The maximum number of workers that will request packets from this
+ *   distributor
+ * @param extra_args
+ *   Reserved for future use, should be passed in as NULL
+ * @return
+ *   The newly created distributor instance
+ */
+struct rte_distributor *
+rte_distributor_create(const char *name, unsigned socket_id,
+		unsigned num_workers, struct rte_distributor_extra_args *extra_args);
+
+/**
+ * Process a set of packets by distributing them among workers that request
+ * packets. The distributor will ensure that no two packets that have the
+ * same flow id, or tag, in the mbuf will be processed at the same time.
+ *
+ * NOTE: This function is not thread-safe; it should only be called from one
+ * thread at a time.
+ *
+ * @param d
+ *   The distributor instance to be used
+ * @param mbufs
+ *   The mbufs to be distributed
+ * @param num_mbufs
+ *   The number of mbufs in the mbufs array
+ * @return
+ *   The number of mbufs processed.
+ */
+int
+rte_distributor_process(struct rte_distributor *d,
+		struct rte_mbuf **mbufs, unsigned num_mbufs);
+
+/**
+ * Get a set of mbufs that have been returned to the distributor by workers
+ *
+ * @param d
+ *   The distributor instance to be used
+ * @param mbufs
+ *   The mbufs pointer array to be filled in
+ * @param max_mbufs
+ *   The size of the mbufs array
+ * @return
+ *   The number of mbufs returned in the mbufs array.
+ */
+int
+rte_distributor_returned_pkts(struct rte_distributor *d,
+		struct rte_mbuf **mbufs, unsigned max_mbufs);
+
+/**
+ * Flush the distributor component, so that there are no in-flight or
+ * backlogged packets awaiting processing
+ *
+ * @param d
+ *   The distributor instance to be used
+ * @return
+ *   The number of queued/in-flight packets that were completed by this call.
+ */
+int
+rte_distributor_flush(struct rte_distributor *d);
+
+/**
+ * Clears the array of returned packets used as the source for the
+ * rte_distributor_returned_pkts() API call.
+ *
+ * @param d
+ *   The distributor instance to be used
+ */
+void
+rte_distributor_clear_returns(struct rte_distributor *d);
+
+/**
+ * API called by a worker to get a new packet to process. Any previous packet
+ * given to the worker is assumed to have completed processing, and may be
+ * optionally returned to the distributor via the oldpkt parameter.
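+ * A typical worker calls this function in a loop, passing the packet it has
+ * just finished back in as oldpkt on each subsequent call (NULL on the first
+ * call); see the usage sketch after the patch.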
+ *
+ * @param d
+ *   The distributor instance to be used
+ * @param worker_id
+ *   The worker instance number to use - must be less than num_workers passed
+ *   at distributor creation time.
+ * @param oldpkt
+ *   The previous packet, if any, being processed by the worker
+ * @param reserved
+ *   Reserved for future use, should be set to zero.
+ *
+ * @return
+ *   A new packet to be processed by the worker thread.
+ */
+struct rte_mbuf *
+rte_distributor_get_pkt(struct rte_distributor *d,
+		unsigned worker_id, struct rte_mbuf *oldpkt, unsigned reserved);
+
+/**
+ * API called by a worker to return a completed packet without requesting a
+ * new packet, for example, because a worker thread is shutting down
+ *
+ * @param d
+ *   The distributor instance to be used
+ * @param worker_id
+ *   The worker instance number to use - must be less than num_workers passed
+ *   at distributor creation time.
+ * @param mbuf
+ *   The previous packet being processed by the worker
+ * @return
+ *   0 on success.
+ */
+int
+rte_distributor_return_pkt(struct rte_distributor *d, unsigned worker_id,
+		struct rte_mbuf *mbuf);
+
+/******************************************/
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
-- 
1.9.0
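
Usage sketch (illustration only, not part of the patch): a minimal outline of
how the API above is intended to be driven, assuming the usual DPDK lcore
launch mechanism. The helpers get_packets(), free_packets(), handle_packet()
and work_remains(), the BURST_SIZE constant, the "pkt_dist" name and
nb_workers are hypothetical placeholders for the application's own RX,
free, processing and shutdown logic; everything else uses only the functions
declared in this patch.

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include "rte_distributor.h"

#define BURST_SIZE 32                      /* hypothetical burst size */

/* hypothetical application hooks */
unsigned get_packets(struct rte_mbuf **bufs, unsigned n); /* e.g. an RX burst */
void free_packets(struct rte_mbuf **bufs, unsigned n);
void handle_packet(struct rte_mbuf *pkt);
int work_remains(void);

struct worker_arg {
	struct rte_distributor *d;
	unsigned worker_id;                /* 0 .. num_workers-1 */
};

/* worker lcore: ask for one packet at a time, handing back the previous one */
static int
worker_lcore(void *arg)
{
	struct worker_arg *w = arg;
	struct rte_mbuf *pkt = NULL;

	while (work_remains()) {
		pkt = rte_distributor_get_pkt(w->d, w->worker_id, pkt, 0);
		handle_packet(pkt);
	}
	/* on shutdown, hand back the last packet without requesting another */
	if (pkt != NULL)
		rte_distributor_return_pkt(w->d, w->worker_id, pkt);
	return 0;
}

/* distributor lcore: feed bursts in, drain returned packets, flush at exit */
static void
distributor_lcore(struct rte_distributor *d)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	struct rte_mbuf *done[BURST_SIZE];

	while (work_remains()) {
		unsigned rx = get_packets(bufs, BURST_SIZE);

		/* tag-aware hand-off: no two packets with the same RSS-derived
		 * tag are given to different workers at the same time */
		rte_distributor_process(d, bufs, rx);

		int n = rte_distributor_returned_pkts(d, done, BURST_SIZE);
		free_packets(done, n);
	}
	/* wait for all in-flight and backlogged packets to complete */
	rte_distributor_flush(d);
}

/* setup, e.g. on the master lcore after EAL initialisation:
 *   struct rte_distributor *d =
 *           rte_distributor_create("pkt_dist", rte_socket_id(), nb_workers, NULL);
 *   launch worker_lcore() on nb_workers lcores via rte_eal_remote_launch(),
 *   then run distributor_lcore(d) on the current lcore. */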