From: Saravana Kumar
To: Bruce Richardson
Cc: dev@dpdk.org
Date: Wed, 10 Feb 2016 02:14:16 -0800
Subject: Re: [dpdk-dev] Regarding mbuf allocation/free in secondary process

Thanks for your response.
Sara

On Wed, Feb 10, 2016 at 2:01 AM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Tue, Feb 09, 2016 at 11:43:19PM -0800, Saravana Kumar wrote:
> > Hi DPDK community,
> >
> > I'd like to have the DPDK NIC I/O operations in a (primary) process and
> > the execution logic in (secondary) processes.
> > The primary process pushes NIC Rx mbufs to the secondary process through a S/W ring.
> > The secondary process allocates mbufs for the Tx path and pushes them down to the
> > primary process for NIC Tx.
> >
> > I have a few doubts here:
> >
> > 1. If the secondary process dies because of SIGKILL, how can the mbufs
> >    allocated in the secondary process be freed?
> >    For normal signals like SIGINT/SIGTERM we can catch those and free
> >    the mbufs in the respective signal handlers.
>
> If a process terminates abnormally then the buffers being used by that process
> may well be leaked. The solution you propose of catching signals will certainly
> help, as you want to try to ensure that a process always frees all its buffers
> properly on termination.
>
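A minimal, untested sketch of that cleanup pattern in the secondary process (the ring
name "rx_ring", the force_quit flag and the processing stub are placeholders, not part
of this thread): the handler only sets a flag, and the freeing happens in the main loop
rather than inside the signal handler itself.

/* Sketch only: secondary process (run with --proc-type=secondary) that
 * frees every mbuf it takes from the shared ring and exits cleanly on
 * SIGINT/SIGTERM.  "rx_ring" is a placeholder for whatever name the
 * primary passed to rte_ring_create(). */
#include <signal.h>
#include <stdbool.h>
#include <rte_eal.h>
#include <rte_ring.h>
#include <rte_mbuf.h>

static volatile bool force_quit;

static void
signal_handler(int signum)
{
	if (signum == SIGINT || signum == SIGTERM)
		force_quit = true;	/* only set a flag; clean up in the main loop */
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	signal(SIGINT, signal_handler);
	signal(SIGTERM, signal_handler);

	struct rte_ring *rx_ring = rte_ring_lookup("rx_ring");
	if (rx_ring == NULL)
		return -1;

	void *obj;
	while (!force_quit) {
		if (rte_ring_dequeue(rx_ring, &obj) == 0) {
			struct rte_mbuf *m = obj;
			/* ... application processing ... */
			rte_pktmbuf_free(m);	/* return the buffer to the primary's pool */
		}
	}

	/* Any mbufs still held locally (burst buffers, pending Tx, etc.)
	 * must be freed here as well, otherwise they leak from the pool. */
	return 0;
}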
> > 2. The secondary process needs to poll on the S/W ring. This can consume
> >    100% CPU.
> >    Is there a way to avoid polling in the secondary process for the Rx path?
>
> Not using DPDK software rings, no. You'd have to use kernel constructs such as
> FIFOs/named pipes to do blocking reads on those. However, the overhead of using
> such structures can be severe, making them unusable for many packet-processing
> applications. An alternative might be to use small sleep calls, e.g. nanosleep,
> between polls of the SW ring in cases where traffic rates are low. That will
> reduce your CPU usage.
>
> /Bruce
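And a rough sketch of the nanosleep idea for the Rx poll loop. BURST_SIZE and the
100 us idle sleep are arbitrary values to tune against your traffic rate and latency
budget, and the rte_ring_dequeue_burst() signature differs between DPDK releases (see
the comment in the code).

/* Sketch only: poll the SW ring, but back off with nanosleep() when it is
 * empty so an idle secondary process does not burn 100% CPU. */
#include <time.h>
#include <stdbool.h>
#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST_SIZE	32
#define IDLE_SLEEP_NS	(100 * 1000)	/* 100 us; larger saves more CPU, adds latency */

static void
rx_loop(struct rte_ring *rx_ring, volatile bool *quit)
{
	struct rte_mbuf *burst[BURST_SIZE];
	struct timespec ts = { .tv_sec = 0, .tv_nsec = IDLE_SLEEP_NS };

	while (!*quit) {
		/* 4-argument form used by recent DPDK; older releases
		 * (including the 2016-era ones in this thread) take only
		 * the first three arguments. */
		unsigned int n = rte_ring_dequeue_burst(rx_ring, (void **)burst,
							BURST_SIZE, NULL);
		if (n == 0) {
			nanosleep(&ts, NULL);	/* ring empty: yield the core briefly */
			continue;
		}
		for (unsigned int i = 0; i < n; i++) {
			/* ... application processing ... */
			rte_pktmbuf_free(burst[i]);
		}
	}
}

The sleep only triggers when the ring is empty, so throughput under load is unaffected;
the cost is a bounded amount of extra latency on the first packet after an idle period.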