In-Reply-To: <1447392031-24970-3-git-send-email-mukawa@igel.co.jp>
References: <1447046221-20811-3-git-send-email-mukawa@igel.co.jp>
 <1447392031-24970-1-git-send-email-mukawa@igel.co.jp>
 <1447392031-24970-3-git-send-email-mukawa@igel.co.jp>
Date: Fri, 20 Nov 2015 16:15:41 -0800
Message-ID: <CAGSMBPNtAffhccmtXs=OM30ToZoHUY3jRx3TdX+X6eS=uigTuw@mail.gmail.com>
From: Rich Lane <rich.lane@bigswitch.com>
To: Tetsuya Mukawa <mukawa@igel.co.jp>
Content-Type: text/plain; charset=UTF-8
Cc: yuanhan.liu@intel.com, dev@dpdk.org, ann.zhuangyanying@huawei.com
Subject: Re: [dpdk-dev] [PATCH v4 2/2] vhost: Add VHOST PMD

On Thu, Nov 12, 2015 at 9:20 PM, Tetsuya Mukawa <mukawa@igel.co.jp> wrote:

> +static uint16_t
> +eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> +{
>
...

> +
> +       /* Enqueue packets to guest RX queue */
> +       nb_tx = rte_vhost_enqueue_burst(r->device,
> +                       r->virtqueue_id, bufs, nb_bufs);
> +
> +       r->tx_pkts += nb_tx;
> +       r->err_pkts += nb_bufs - nb_tx;
>

I don't think a full TX queue is counted as an error by physical NIC PMDs
like ixgbe and i40e, although the af_packet, pcap, and ring PMDs do count it
as one. I'd suggest not counting it as an error here either, because a full
queue is a common and expected condition, and the application might just
retry the TX later.
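
For example (a sketch only; "missed_pkts" is a made-up counter name, not
something in the patch), the stats update could look like:

        nb_tx = rte_vhost_enqueue_burst(r->device,
                        r->virtqueue_id, bufs, nb_bufs);

        r->tx_pkts += nb_tx;
        /* A full guest ring isn't an error; the caller can just retry. */
        r->missed_pkts += nb_bufs - nb_tx;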

Are the byte counts left out because they would be a performance hit? It
seems like counting them would add minimal cost given how much we're already
touching each packet.
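
Counting them might look roughly like this (again just a sketch, with a
hypothetical tx_bytes field in the queue structure):

        uint16_t i;

        for (i = 0; i < nb_tx; i++)
                r->tx_bytes += rte_pktmbuf_pkt_len(bufs[i]);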


> +static int
> +new_device(struct virtio_net *dev)
> +{
>
...

> +
> +       if ((dev->virt_qp_nb < internal->nb_rx_queues) ||
> +                       (dev->virt_qp_nb < internal->nb_tx_queues)) {
> +               RTE_LOG(INFO, PMD, "Not enough queues\n");
> +               return -1;
> +       }
>

Would it make sense to take the minimum of the guest and host queue pairs
and use that below in place of nb_rx_queues/nb_tx_queues? That way the host
can support a large maximum number of queues and each guest can choose how
many it wants to use. The host application will receive vring_state_changed
callbacks for each queue the guest activates.
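
Roughly something like this (a sketch only, reusing the names from the
patch; I haven't checked it against the rest of new_device()):

        /* Use however many queue pairs both the guest and the host side
         * can actually handle. */
        uint16_t nb_qp = RTE_MIN((uint16_t)dev->virt_qp_nb,
                        RTE_MIN(internal->nb_rx_queues,
                                internal->nb_tx_queues));

and then use nb_qp in place of nb_rx_queues/nb_tx_queues in the queue setup
that follows.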