From mboxrd@z Thu Jan 1 00:00:00 1970
From: Panu Matilainen
To: Neil Horman
Cc: dev@dpdk.org, Bruce Richardson, Thomas Monjalon, Stephen Hemminger
Date: Wed, 18 May 2016 15:48:12 +0300
In-Reply-To: <20160518120300.GA29900@hmsreliant.think-freely.org>
References: <1463431287-4551-1-git-send-email-nhorman@tuxdriver.com>
 <1463431287-4551-5-git-send-email-nhorman@tuxdriver.com>
 <3ee4159d-fd29-1a20-1417-4c0a40c18779@redhat.com>
 <20160518120300.GA29900@hmsreliant.think-freely.org>
Subject: Re: [dpdk-dev] [PATCH 4/4] pmd_hw_support.py: Add tool to query
 binaries for hw support information
List-Id: patches and discussions about DPDK

On 05/18/2016 03:03 PM, Neil Horman wrote:
> On Wed, May 18, 2016 at 02:48:30PM +0300,
Panu Matilainen wrote:
>> On 05/16/2016 11:41 PM, Neil Horman wrote:
>>> This tool searches for the primer string PMD_DRIVER_INFO= in any ELF
>>> binary, and, if found, parses the remainder of the string as a
>>> json-encoded string, outputting the results in either a human-readable
>>> or raw, script-parseable format
>>>
>>> Signed-off-by: Neil Horman
>>> CC: Bruce Richardson
>>> CC: Thomas Monjalon
>>> CC: Stephen Hemminger
>>> CC: Panu Matilainen
>>> ---
>>>  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 174 insertions(+)
>>>  create mode 100755 tools/pmd_hw_support.py
>>>
>>> diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
>>> new file mode 100755
>>> index 0000000..0669aca
>>> --- /dev/null
>>> +++ b/tools/pmd_hw_support.py
>>> @@ -0,0 +1,174 @@
>>> +#!/usr/bin/python3
>>
>> I think this should use /usr/bin/python to be consistent with the other
>> python scripts, and like the others work with both python 2 and 3. I only
>> tested it with python2 after changing this and it seemed to work fine, so
>> the compatibility side should be fine as-is.
>>
> Sure, I can change the python executable, that makes sense.
>
>> On the whole, AFAICT the patch series does what it promises, and works
>> for both static and shared linkage. Using JSON-formatted strings in an
>> ELF section is a sound working technical solution for storing the data.
>> But the difference between the two cases makes me wonder about this
>> all...
>
> You mean the difference between checking static binaries and dynamic
> binaries? Yes, there is some functional difference there.
>
>> For static library builds, you'd query the application executable, eg
>
> Correct.
>
>> testpmd, to get the data out.
>> For a shared library build, that method gives absolutely nothing
>> because the data is scattered around in individual libraries which
>> might be just about wherever, and you need to somehow
>
> Correct, I figured that users would be smart enough to realize that with
> dynamically linked executables, they would need to look at DSOs, but I
> agree, it's a glaring difference.

Being able to look at DSOs is good, but expecting the user to figure out
which DSOs might be loaded and not, and where to look, is going to be well
above many users. At the very least it's not what I would call
user-friendly.

>> discover the location + correct library files to be able to query that.
>> For the shared case, perhaps the script could be taught to walk files
>> in CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical
>> results
>
> My initial thought would be to run ldd on the executable, and use a
> heuristic to determine relevant pmd DSOs, and then feed each of those
> through the python script. I didn't want to go to that trouble unless
> there was consensus on it though.

Problem is, ldd doesn't know about them either, because the pmds are not
linked to the executables at all anymore. They could be force-linked of
course, but that means giving up the flexibility of plugins, which IMO is
a no-go. Except maybe as an option, but then that would be a third case to
support.

>> when querying the executable as with static builds. If identical
>> operation between static and shared versions is a requirement (without
>> running the app in question), then querying through the executable
>> itself is practically the only option. Unless some kind of
>> (auto-generated) external config file system ala kernel depmod /
>> modules.dep etc is brought into the picture.
>
> Yeah, I'm really trying to avoid that, as I think it's really not a
> typical part of how user space libraries are interacted with.
>> For shared library configurations, having the data in the individual
>> pmds is valuable, as one could for example have rpm autogenerate
>> provides from the data to ease/automate installation (in case of split
>> packaging and/or 3rd party drivers). And no doubt other interesting
>> possibilities. With static builds that kind of thing is not possible.
>
> Right.
>
> Note, this also leaves out PMDs that are loaded dynamically (i.e. via
> dlopen). For those situations I don't think we have any way of 'knowing'
> that the application intends to use them.

Hence my comment about CONFIG_RTE_EAL_PMD_PATH above; it at least provides
a reasonable heuristic of what would be loaded by the app when run. But
ultimately the only way to know what hardware is supported at a given time
is to run an app which calls rte_eal_init() to load all the drivers that
are present and work from there, because besides CONFIG_RTE_EAL_PMD_PATH
this can be affected by runtime commandline switches, and that applies to
both shared and static builds.

>> Calling up on the list of requirements from
>> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
>> technical requirements, but perhaps we should stop for a moment to
>> think about the use-cases first?
>
> To enumerate the list:
>
> - query all drivers in static binary or shared library (works)
> - stripping resiliency (works)
> - human friendly (works)
> - script friendly (works)
> - show driver name (works)
> - list supported device id / name (works)
> - list driver options (not yet, but possible)
> - show driver version if available (nope, but possible)
> - show dpdk version (nope, but possible)
> - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
> - room for extra information? (works)
>
> Of the items that are missing, I've already got a V2 started that can do
> driver options, and is easier to expand.
> Adding in the DPDK and PMD versions should be easy (though I think they
> can be left out, as there's currently no globally defined DPDK release
> version, it's all just implicit, and driver versions aren't really there
> either). I'm also hesitant to include kernel dependencies without
> defining exactly what they mean (just module dependencies, or feature
> enablement, or something else?). Once we define it though, adding it can
> be easy.

Yup. I just think the shared/static difference needs to be sorted out
somehow; eg requiring the user to know about DSOs is not human-friendly at
all. That's why I called for the higher-level use-cases in my previous
email.

> I'll have a v2 posted soon, with the consensus corrections you have
> above, as well as some other cleanups.
>
> Best
> Neil
>
>> To name some off the top of my head:
>> - user wants to know whether the hardware on the system is supported
>> - user wants to know which package(s) need to be installed to support
>>   the system hardware
>> - user wants to list all supported hardware before going shopping
>> - [what else?]
>>
>> ...and then think about how these things would look from the user
>> perspective, in the light of the two quite dramatically differing cases
>> of static vs shared linkage.

	- Panu -
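For readers following the thread: the primer-string scan being discussed
can be approximated with a minimal sketch like the one below. This is not
the actual tools/pmd_hw_support.py from the patch; the JSON field names in
the test data are invented for illustration.

```python
# Minimal sketch of the primer-string scan discussed in this thread: find
# each PMD_DRIVER_INFO= occurrence in a binary's raw bytes and parse the
# JSON value that follows it.  Scanning raw bytes rather than symbols is
# what makes the approach resilient to stripped binaries.
import json

MARKER = b"PMD_DRIVER_INFO="

def extract_pmd_info(data):
    """Return a list of objects decoded after each marker in data."""
    decoder = json.JSONDecoder()
    records = []
    pos = data.find(MARKER)
    while pos != -1:
        start = pos + len(MARKER)
        # raw_decode stops at the end of the first JSON value, so any
        # binary garbage trailing the string is ignored.
        text = data[start:start + 4096].decode("utf-8", errors="replace")
        try:
            obj, _consumed = decoder.raw_decode(text)
            records.append(obj)
        except ValueError:
            pass  # marker not followed by valid JSON; skip it
        pos = data.find(MARKER, start)
    return records

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        with open(path, "rb") as f:
            for rec in extract_pmd_info(f.read()):
                print(json.dumps(rec, indent=2))
```

The same scan works unchanged on a statically linked executable or an
individual PMD DSO, which is exactly why the static/shared difference
above is about *finding* the right files, not about reading them.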