From: David Marchand
Date: Wed, 12 Apr 2023 09:27:34 +0200
Subject: Re: MAX_MBUF_SIZE causes queue configuration failure.
To: Dhamodaran Pandiyan, Beilei Xing, Jeff Guo
Cc: users@dpdk.org
List-Id: DPDK usage discussions

Hello,

On Wed, Apr 12, 2023 at 8:59 AM Dhamodaran Pandiyan wrote:
>
> When I set the value of MAX_MBUF_SIZE to less than 16256, all the initialization went through fine and the application was online.
>
> But when I increase MAX_MBUF_SIZE to more than 16256, like 16257, 17000, or 18000, I get the attached log error, which shows an issue in configuring the virtual queues for the NIC.
>
> Log Snip:
> i40evf_configure_vsi_queues(): Failed to execute command of VIRTCHNL_OP_CONFIG_VSI_QUEUES
> i40evf_dev_start(): configure queues failed

Added net/i40e driver maintainers.

This error indicates a failure either when sending a message to the PF driver, or that the PF driver refused to configure this VF with the passed parameters.

Assuming you are using the PF i40e kernel driver, I tracked this message handling and found that the buffer size is limited (the exact limit is not clear to me), probably due to some hardware limitations. See:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c#n4088
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c#n2371
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c#n714

	/* max pkt. length validation */
	if (info->max_pkt_size >= (16 * 1024) ||
	    info->max_pkt_size < 64) {
		ret = -EINVAL;
		goto error_param;
	}

I'll let the net/i40e maintainers reply with better details/explanations.

> MTU size for port id: is: 9000
> Failed to start the fast pkt for port_id : 1 Ret: -1
>
> Some Observations:
> 1. Thought memory availability was an issue, so provided 10x more memory and allocated 10x more huge pages than required, but still noticed the same issue.
> 2. Got to know that MTU size also plays a role in deciding the mbuf_size, so tried running the app with a smaller MTU value, but the issue still persists.
>
> Please, someone enlighten me on what is happening here.

-- 
David Marchand