6. Various Aspects of Daily Work
6.1. Using an External Kernel Source Tree
This application note describes how to use an external kernel source tree within a PTXdist project. In this case the external kernel source tree is managed by GIT.
Cloning the Linux Kernel Source Tree
In this example we are using the official Linux kernel development tree.
jbe@octopus:~$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[...]
jbe@octopus:~$ ls -l
[...]
drwxr-xr-x 38 jbe ptx 4096 2015-06-01 10:21 myprj
drwxr-xr-x 25 jbe ptx 4096 2015-06-01 10:42 linux
[...]
Configuring the PTXdist Project
Note
The assumption here is that the directory ~/myprj
contains a valid PTXdist project.
To make PTXdist use this kernel source tree instead of an archive, we can simply create a link now:
jbe@octopus:~$ cd myprj
jbe@octopus:~/myprj$ mkdir local_src
jbe@octopus:~/myprj$ ptxdist local-src kernel ~/linux
jbe@octopus:~/myprj$ ls -l local_src
lrwxrwxrwx 1 jbe ptx 36 Nov 14 16:14 kernel.<platformname> -> /home/jbe/linux
Note
The <platformname>
in the example above will be replaced by the name of your own platform.
PTXdist will handle it in the same way as a kernel that is part of the project. Due to this, we must set up:
Some kind of kernel version
Kernel configuration
Image type used on our target architecture
Whether we want to build modules
Patches to be used (or not)
Let’s set up these topics now. We just add the kernel component to the project:
jbe@octopus:~/myprj$ ptxdist platformconfig
We must enable the Linux kernel entry first, to enable kernel building as part of the project. After enabling this entry, we must enter it and:
set up the kernel version
set up the MD5 sum of the corresponding archive
select the correct image type in the entry Image Type
configure the kernel within the menu entry patching & configuration
If no patches should be used on top of the selected kernel source tree, we keep the patch series file entry empty. As GIT should help us to create these patches for deployment later on, we keep this entry empty by default in this first step.
Select a name for the kernel configuration file and enter it into the kernel config file entry.
Important
Even if we do not intend to use a kernel archive, we must set up these entries with valid content, otherwise PTXdist will fail. The archive must also be present on the host, otherwise PTXdist will start a download.
Now we can leave the menu and store the new setup. The only component still missing now is a valid kernel config file. We can use one of the default config files the Linux kernel provides as a starting point. To do so, we copy one to the location where PTXdist expects it in the current project. In a multi platform project this location is usually the platform directory in configs/<platform-directory>. We must store the file under the name selected in the platform setup menu (kernel config file).
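For example, assuming an ARM platform, copying one of the kernel’s defconfig files from the cloned tree could look like this (the defconfig, the platform directory and the target file name are placeholders that must match your own setup):
jbe@octopus:~/myprj$ cp ~/linux/arch/arm/configs/imx_v6_v7_defconfig configs/<platform-directory>/kernelconfig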
Work Flow
Now it is up to us to work on the GIT based kernel source tree and to use PTXdist to include the kernel into the root filesystem.
To configure the kernel source tree, we simply run:
jbe@octopus:~/myprj$ ptxdist kernelconfig
To build the kernel:
jbe@octopus:~/myprj$ ptxdist targetinstall kernel
To rebuild the kernel:
jbe@octopus:~/myprj$ ptxdist drop kernel compile
jbe@octopus:~/myprj$ ptxdist targetinstall kernel
Note
To clean the kernel, change into the local_src directory and call
make clean
or the clean command of the build system used by the package. A
ptxdist clean kernel
call will only delete the symlinks in the build directory, but will not remove the compiled kernel files.
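A minimal example of such a manual clean, with <platformname> being the placeholder from above:
jbe@octopus:~/myprj$ cd local_src/kernel.<platformname>
jbe@octopus:~/myprj/local_src/kernel.<platformname>$ make clean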
6.2. Using the Code Signing Infrastructure with the Kernel Recipe
The kernel recipe can make use of the code signing infrastructure to supply cryptographic key material for several kernel features.
They can be enabled in the Linux kernel section of ptxdist platformconfig
.
Important
When supplying the kernel with key material, you should also make sure that
all necessary crypto algorithms are enabled in the kernel.
For example, if your module signing key is signed with an SHA256 hash,
you must enable CONFIG_CRYPTO_SHA256
so that the signature can be verified.
Otherwise, some older kernels throw a stack trace on boot, and will not load
the supplied key material.
Trusted Root CAs
In some setups additional trusted CAs can be necessary; for example, when using EVM, the EVM key must be issued by a certificate that is trusted by the kernel.
When PTXCONF_KERNEL_CODE_SIGNING
(“depend on code signing infrastructure”)
is enabled in the platformconfig, and if the code signing provider supplies CA
certificates in the kernel-trusted
role,
PTXdist adds the option CONFIG_SYSTEM_TRUSTED_KEYS
to the kernel config to
add those certificates to the kernel trust root.
(The code signing provider should use cs_append_ca_from_der,
cs_append_ca_from_pem, or cs_append_ca_from_uri with the
kernel-trusted
role to supply those certificates.)
Note that the kernel also always adds the module signing key to the trust root (see Kernel Module Signing below). If the EVM key is signed by the module signing key (or if the two keys are the same and it is self-signed), no additional trust CA is necessary.
Kernel Module Signing
The kernel’s build system can generate cryptographic signatures for all kernel modules during the build process. This can ensure that all modules loaded on the target at runtime have been built by a trustworthy source.
If PTXCONF_KERNEL_MODULES_SIGN
(“sign modules”) is enabled in the
platformconfig, PTXdist augments the kernel config with the following config
options during the kernel.compile and kernel.install stages:
CONFIG_MODULE_SIG_KEY
(“File name or PKCS#11 URI of module signing key”): PTXdist supplies the URI from the kernel-modules
role of the configured code signing provider. (The code signing provider should use cs_set_uri to set the URI.)
However, additional settings must also be enabled in the kernel config:
CONFIG_MODULE_SIG=y
(“Module signature verification”): Enable this option for module signing and to get access to its sub-options.
CONFIG_MODULE_SIG_ALL=y
(“Automatically sign all modules”): Enable this option so that the kernel’s build system signs the modules during PTXdist’s kernel.install stage.
Additionally, CONFIG_MODULE_SIG_FORCE
(“Require modules to be validly signed”) can be useful so that the kernel refuses to load modules with an invalid, untrusted, or missing signature.
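As a quick sanity check on a target where modinfo is available, modinfo prints the signature related fields (sig_id, signer, sig_key, sig_hashalgo) for a signed module; the module name below is just an example:
$ modinfo <some-module> | grep -i sig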
For the full overview, refer to the kernel’s module signing documentation.
6.3. Discovering Runtime Dependencies
Often it happens that an application on the target fails to run, because one of its dependencies is not fulfilled. This section should give some hints on how to discover these dependencies.
Dependencies on other Resources
Sometimes a binary fails to run due to missing files, directories or
device nodes. Often the error message (if any) which the binary creates
in this case is ambiguous. Here the strace
tool can help us, namely
to observe the binary at run-time. strace
shows all the system calls
the binary or its shared libraries are performing.
strace
is one of the target debugging tools which PTXdist provides
in its Debug Tools
menu.
After adding strace to the root filesystem, we can use it and observe
our foo
binary:
$ strace usr/bin/foo
execve("/usr/bin/foo", ["/usr/bin/foo"], [/* 41 vars */]) = 0
brk(0) = 0x8e4b000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=77488, ...}) = 0
mmap2(NULL, 77488, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f87000
close(3) = 0
open("/lib//lib/libm-2.5.1.so", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0p%\0\000"..., 512) = 512
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7f86000
fstat64(3, {st_mode=S_IFREG|0555, st_size=48272, ...}) = 0
mmap2(NULL, 124824, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7f67000
mmap2(0xb7f72000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb) = 0xb7f72000
mmap2(0xb7f73000, 75672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f73000
close(3) = 0
open("/lib/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\332X\1"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1405859, ...}) = 0
[...]
Occasionally the output of strace
can be very long and the
interesting parts are lost. So, if we assume the binary tries to open a
nonexisting file, we can limit the output to all open
system calls:
$ strace -e open usr/bin/foo
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/lib/libm-2.5.1.so", O_RDONLY) = 3
open("/lib/libz.so.1.2.3", O_RDONLY) = 3
open("/lib/libc.so.6", O_RDONLY) = 3
[...]
open("/etc/foo.conf", O_RDONLY) = -1 ENOENT (No such file or directory)
The binary may fail due to a missing /etc/foo.conf
. This could be a
hint on what is going wrong (it might not be the final solution).
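On newer C libraries most files are opened via the openat system call rather than open, so it can help to trace both (same example binary as above):
$ strace -e trace=open,openat usr/bin/foo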
6.4. Debugging with CPU emulation
If we do not need some target related feature to run our application, we can also debug it through a simple CPU emulation. Thanks to QEMU we can run ELF binaries built for architectures other than that of our build host.
Running an Application made for a different Architecture
PTXdist creates a fully working root filesystem with all run-time
components in root/
. Let’s assume we made a PTXdist based project for
a CPU. Part of this project is our application myapp
we are
currently working on. PTXdist builds the root filesystem and also
compiles our application. It also installs it to usr/bin/myapp
in
the root filesystem.
With this preparation we can run it on our build host:
$ cd platform-example/root
platform-example/root$ qemu-<architecture> -cpu <cpu-core> -L . usr/bin/myapp
This command will run the application usr/bin/myapp
built for an
<cpu-core> CPU on the build host and is using all library components
from the current directory.
For stdin and stdout, QEMU uses the regular mechanisms of the build host’s operating system. Using QEMU in this way lets us simply check our programs. There are also QEMU environments for other architectures available.
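As a concrete sketch, assuming an ARM platform built for a Cortex-A9 core, the call could look like this:
platform-example/root$ qemu-arm -cpu cortex-a9 -L . usr/bin/myapp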
Debugging an Application made for a different Architecture
Debugging our application is also possible with QEMU. All we need is a root filesystem with debug symbols available, QEMU and an architecture aware debugger.
The root filesystem with debug symbols will be provided by PTXdist, and the architecture aware debugger comes with the OSELAS.Toolchain. Two consoles are required for the debug session in this example. We start QEMU in the first console:
$ cd ptxdistPlatformDir/root
ptxdistPlatformDir/root$ qemu-<architecture> -g 1234 -cpu <cpu-core> -L . usr/bin/myapp
Note
PTXdist always builds a root filesystem root/
.
It contains all components without debug information (all binaries have the same size as used later on the real target). In addition, each directory that contains binaries also contains a .debug/
directory. It contains a file with only the debug symbols for each binary. These files are ignored when running applications, but GDB knows about them and will automatically load the debug files.
The added -g 1234 parameter makes QEMU wait for a GDB connection before it runs the application.
In the second console we start GDB with the correct architecture support. This GDB comes with the same OSELAS.Toolchain that was also used to build the project:
$ ./selected_toolchain/<target>-gdb --tui platform-<platformname>/root/usr/bin/myapp
This will run a curses based GDB. Not so easy to handle (we must enter all the commands and cannot click with a mouse!), but very fast to take a quick look at our application.
At first we tell GDB where to look for debug symbols. The correct
directory here is root/
.
(gdb) set solib-absolute-prefix platform-<platformname>/root
Next we connect this GDB to the waiting QEMU:
(gdb) target remote localhost:1234
Remote debugging using localhost:1234
[New Thread 1]
0x40096a7c in _start () from root/lib/ld.so.1
As our application is already started, we can’t use the GDB command
start
to run it until it reaches main()
. We set a breakpoint
instead at main()
and continue the application:
(gdb) break main
Breakpoint 1 at 0x100024e8: file myapp.c, line 644.
(gdb) continue
Continuing.
Breakpoint 1, main (argc=1, argv=0x4007f03c) at myapp.c:644
The top part of the running gdbtui console will always show us the
current source line. Due to the root/
directory usage all
debug information for GDB is available.
Now we can step through our application by using the commands step, next, stepi, nexti, until and so on.
Note
It might be impossible for GDB to find debug symbols for components like the main C run-time library. In this case they were stripped while building the toolchain. There is a switch in the OSELAS.Toolchain menu to keep the debug symbols also for the C run-time library. But be warned: this will enlarge the OSELAS.Toolchain installation on your hard disk! When the toolchain was built with the debug symbols kept, it will also be possible for GDB to debug C library functions our application calls (so it might be worth the disk space).
6.5. Migration between Releases
To migrate an existing project from one minor release to the next, we do the following step:
~/my_bsp# ptxdist migrate
PTXdist will ask us what to do for every new configuration entry. We must read and answer these questions carefully. At the very least we shouldn’t answer blindly with ’Y’ all the time, because this could lead to a broken configuration. On the other hand, answering ’N’ all the time is safer. We can still enable interesting new features later on.
6.6. Increasing Build Speed
Modern host systems provide more than one CPU core. To make use of this additional computing power, recent applications should do their work in parallel.
Using available CPU Cores
PTXdist uses all available CPU cores when building a project by default. There are some exceptions:
the prepare stage of all autotools based packages can use only one CPU core. This is due to the fact that the running “configure” is a shell script.
some packages have a broken buildsystem regarding parallel building. These kinds of packages build successfully only on one single CPU core.
creating the root filesystem images is also done on a single core only
Manually adjusting CPU Core usage
Manual adjustment of the parallel build behaviour is possible via command line parameters.
-ji<number>
this defines the number of CPU cores used to build a single package. The default is two times the number of available CPU cores.
-je<number>
this defines the number of packages to be built in parallel. The default is one package at a time.
-j<number>
this defines the number of CPU cores to be used at the same time. These cores will be used on a package base and file base.
-l<number>
limit the system load to the given value.
Important
using -ji
and -je
can overload the system immediately. These are very aggressive settings.
A much softer setup is to just use the -j<number>
parameter. This will run up to <number>
tasks at the same time, spread over everything there is to do. This creates a system load which is much more user friendly. Even the filesystem load is smoother with this parameter.
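For example, to let PTXdist spread up to 16 tasks over everything there is to do (the number is an arbitrary example and should be adapted to your host):
$ ptxdist go -j16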
Building in Background
To build a project in the background, PTXdist can be ’niced’.
-n[<number>]
run PTXdist and all of its child processes with the given nicelevel <number>. Without a nicelevel the default is 10.
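For example, a build with the nicelevel of 10 given explicitly could be started like this:
$ ptxdist -n10 go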
Building Platforms in Parallel
Due to the fact that more than one platform can exist in a PTXdist project, all these platforms can be built in parallel within the same project directory. As they store their results into different platform subdirectories, they will not conflict. Only PTXdist must be called differently, because each call must be parametrized individually.
The used Platform Configuration
$ ptxdist platform <some-platform-config>
This call will create the soft link selected_platformconfig
to the
<some-platform-config>
in the project’s directory. After this call,
PTXdist uses this soft link as the default platform to build for.
It can be overwritten temporarily by the command line parameter
--platformconfig=<different-platform-config>
.
The used Project Configuration
$ ptxdist select <some-project-config>
This call will create the soft link selected_ptxconfig
to the
<some-project-config>
in the project’s directory. After this call,
PTXdist uses this soft link as the default configuration to build the
project.
It can be overwritten temporarily by the command line parameter
--ptxconfig=<different-project-config>
.
The used Toolchain to Build
$ ptxdist toolchain <some-toolchain-path>
This call will create the soft link selected_toolchain
to the
<some-toolchain-path>
in the project’s directory. After this call,
PTXdist uses this soft link as the default toolchain to build the
project with.
It can be overwritten temporarily by the command line parameter
--toolchain=<different-toolchain-path>
.
By creating the soft links all further PTXdist commands will use these as the default settings.
By using the three --platformconfig
, --ptxconfig
and
--toolchain
parameters, we can switch (temporarily) to a completely
different setting. We can use this feature to build everything in one
project.
A few Examples
The project contains two individual platforms, sharing the same architecture and same project configuration.
$ ptxdist select <project-config>
$ ptxdist toolchain <architecture-toolchain-path>
$ ptxdist --platformconfig=<architecture-A> --quiet go &
$ ptxdist --platformconfig=<architecture-B> go
The project contains two individual platforms, sharing the same project configuration.
$ ptxdist select <project-config>
$ ptxdist --platformconfig=<architecture-A> --toolchain=<architecture-A-toolchain-path> --quiet go &
$ ptxdist --platformconfig=<architecture-B> --toolchain=<architecture-B-toolchain-path> go
The project contains two individual platforms, but they do not share anything else.
$ ptxdist --select=<project-A-config> --platformconfig=<architecture-A> --toolchain=<architecture-A-toolchain-path> --quiet go &
$ ptxdist --select=<project-B-config> --platformconfig=<architecture-B> --toolchain=<architecture-B-toolchain-path> go
Running one PTXdist in the background and one in the foreground would render the
console output unreadable. That is why the background PTXdist uses the
--quiet
parameter in the examples above. Its output is still
available in the logfile under the platform build directory tree.
By using more than one virtual console, both PTXdists can run with their full output on the console.
6.7. Using a Distributed Compiler
Increasing the build speed of a PTXdist project can be done by running more tasks in parallel. PTXdist itself uses all available CPU cores by default, but it is limited to the local host. For a further speedup, a distributed compiler can be used. This is the task of ICECC, aka icecream. With this feature a PTXdist project can make use of all available hosts and their CPUs in a local network.
Setting-Up the Distributed Compiler
How to set up the distributed compiler can be found on the project’s homepage at GITHUB:
https://github.com/icecc/icecream.
Read their README.md
for further details.
Important
as of July 2014 you need at least ICECC version 1.x. Older revisions are known not to work.
Enabling PTXdist for the Distributed Compiler
Since the 2014.07 release, PTXdist supports the usage of ICECC by simply enabling a setup switch.
Run the PTXdist setup and navigate to the new ICECC menu entry:
$ ptxdist setup
Developer Options --->
[*] use icecc
(/usr/lib/icecc/icecc-create-env) icecc-create-env path
You may need to adapt the icecc-create-env path
to the setting on your host. Most of the time the default path should work.
How to use the Distributed Compiler with PTXdist
PTXdist still uses two times the number of cores of the local CPU for parallel tasks. But if faster CPUs exist in the network, ICECC will now run the compile tasks on these faster CPUs instead of the local CPU.
To really boost the build speed you must manually increase the number of tasks to be run in parallel. Use the -ji<x>
command line option to start more tasks at the same time. This command line option only affects one package at a time. To increase the build speed further, use the -je<x>
command line option as well. This will also build packages in parallel.
A complete command line could look like this:
$ ptxdist go -ji64 -je8
This command line will run up to 64 tasks in parallel and build 8 packages at the same time. Never worry again about your local host and how slow it is. With the help of ICECC every host can be a high speed development machine.
6.8. Using Pre-Built Archives
PTXdist is a tool which creates all required parts of a target’s filesystem to breathe life into it, and it creates these parts from any kind of source files. If a PTXdist project consists of many packages, the build may take a huge amount of time.
For internal checks we have a so called “ALL-YES” PTXdist project. It has - like the name suggests - all packages enabled which PTXdist supports. To build this “ALL-YES” PTXdist project our build server needs about 6 hours.
Introduction
While developing a PTXdist project it is necessary to clean and re-build everything from time to time to get a re-synced project result which honors all changes made in the project. Although cleaning and re-building everything from time to time is a very good test case for whether some adaptations are still missing or whether everything is complete, it can be a real time sink.
To not lose the developer’s temper when doing such tests, PTXdist can keep archives from the last run which include all the files the package’s build system has installed while PTXdist’s install stage ran for it.
The next time PTXdist shall build a package, it can use the results from the last run instead. This feature can drastically reduce the time to re-build the whole project. However, this PTXdist feature must be handled with care, and so it is not enabled and used by default.
This section describes how to make use of this PTXdist feature and what pitfalls exist when doing so.
Creating Pre-Built Archives
To make PTXdist create pre-built archives, enable this feature prior to a build in the menu:
$ ptxdist menuconfig
Project Name & Version --->
[*] create pre-built archives
Now run a regular build of the whole project:
$ ptxdist go
When the build is finished, the directory packages
contains
additional archive files with the name scheme *-dev.tar.gz
. These
files are the pre-built archives which PTXdist can use later on to
re-build the project.
Using Pre-Built Archives
To make PTXdist use pre-built archives, enable this feature prior to a build in the menu:
$ ptxdist menuconfig
Project Name & Version --->
[*] use pre-built archives
(</some/path/to/the/archives>)
During the next build (e.g. ptxdist go
) PTXdist will check for each package whether its corresponding pre-built archive exists. If it exists and the hash value used in the pre-built archive’s filename matches, PTXdist will skip all source archive handling (extract, patch, compile and install) and just extract and use the pre-built archive’s content.
Sufficient conditions for safe application of pre-built archives are:
using one pre-built archive pool for one specific PTXdist project.
using a constant PTXdist version all the time.
using a constant OSELAS.Toolchain() version all the time.
no package with a pre-built archive in the project is under development.
The hash that is part of the pre-built archive’s filename only reflects the package’s configuration made in the menu (ptxdist menuconfig).
If this package specific configuration changes, a new hash value will be the result, and PTXdist can only select a pre-built archive with a matching hash.
This hash value change is an important fact, as many things outside and inside the package can have a big impact on the binary result without causing a hash value change!
Please be careful when using the pre-built archives if you:
intend to switch to a different toolchain with the next build.
change the patch set applied to the corresponding package, e.g. the package is under development.
change the hard coded configure settings in the package’s rule file, e.g. the package is under development.
intend to use one pre-built archive pool from different PTXdist projects.
change a global PTXdist configuration parameter (e.g. PTXCONF_GLOBAL_IPV6).
Because of all these precautions, the generated pre-built archives are not transferred automatically to where the next build expects them. This must be done manually by the user of the PTXdist project. Doing so, we can decide on a package-by-package basis whether a pre-built archive should be used or not.
If you are unsure if your modifications rendered some or all of your pre-built archives invalid you can always delete and build them again to be on the safe side.
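A sketch of such a clean re-creation (the archive directory is just an example; see the workflow below for the project specific default path):
$ rm /path/to/prebuilt-archives/*-dev.tar.gz
$ ptxdist clean
$ ptxdist go
$ cp platform-<platformname>/packages/*-dev.tar.gz /path/to/prebuilt-archives/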
Packages without Pre-Built Archives Support
Not all packages support pre-built archives. This is usually caused by relocation problems or because files outside the install directory are needed:
Some host packages are not relocatable and install directly into sysroot-host.
Linux kernel: it has an incomplete install stage, which results in an incomplete pre-built archive. Due to this, it cannot be used as a pre-built archive.
Barebox bootloader: it has an incomplete install stage, which results in an incomplete pre-built archive. Due to this, it cannot be used as a pre-built archive.
a few somehow broken packages that are all explicitly marked with a
<packagename>_DEVPKG := NO
in their corresponding rule file.
Workflow with Pre-Built Archives
We are starting with an empty PTXdist project and enabling the pre-built archive feature as mentioned in the previous section. After that a regular build of the project can be made.
When the build is finished, it’s time to copy all the pre-built archives of interest to where the next build will expect them. The previous section mentions the step to enable their use; it also allows us to define a directory. The default path of this directory is derived from various other menu settings to ensure the pre-built archives of the current PTXdist project do not conflict with pre-built archives of different PTXdist projects. To get an idea of what the final path is, we can ask PTXdist:
$ ptxdist print PTXCONF_PROJECT_DEVPKGDIR
/home/jbe/OSELAS.BSP/Pengutronix/OSELAS.BSP-Pengutronix-Generic
If this directory does not exist, we can simply create it:
$ mkdir -p /home/jbe/OSELAS.BSP/Pengutronix/OSELAS.BSP-Pengutronix-Generic
Now it’s time to copy the pre-built archives to this new directory. We could simply copy all pre-built archives from the packages
directory. But we should keep in mind that if any of the related packages are under development, we must omit their corresponding pre-built archives in this step.
$ cp platform-<platformname>/packages/*-dev.tar.gz /home/jbe/OSELAS.BSP/Pengutronix/OSELAS.BSP-Pengutronix-Generic
Use Cases
Some major possible use cases are covered in this section:
speed up a re-build of one single project.
share pre-built archives between two platforms based on the same architecture.
increase reproducibility of binaries.
To simply speed up a re-build of the whole project (without development
on any of the used packages) we can just copy all *-dev.tar.gz
archives after the first build to the location where PTXdist expects
them at the next build time.
If two platforms share the same architecture, it is possible to share pre-built archives as well. This works best if both platforms are part of the same PTXdist project. They must also share the same toolchain settings, patch series and rule files. If these precautions are met, the whole project can be built for the first platform, and the resulting pre-built archives can be used to build the project for the second platform. This can reduce the required time to build the second platform from hours to minutes.
6.9. Downloading Packages from the Web
Sometimes it makes sense to get all required source archives at once. For example, prior to a shipment we may want to include all source archives as well, to free the user from downloading them by him/herself.
PTXdist supports this requirement by the export-src
parameter. It
collects all required source archives into one given single directory.
To do so, the current project must be set up correctly, e.g. the select
and platform
commands must be run prior to the export-src
step.
If everything is set up correctly we can run the following commands to get the full list of required archives to build the project again without an internet connection.
$ mkdir my_archives
$ ptxdist export-src my_archives
PTXdist will now collect all source archives to the my_archives/
directory.
Note
If PTXdist is configured to share one source archive directory for all projects, this step will simply copy the source archives from the shared source archive directory. Otherwise PTXdist will start to download them from the world wide web.
6.10. Creating Autotools based Packages
Developing your own programs and libraries can be one of the required tasks to support an embedded system. PTXdist comes with three autotoolized templates to provide a comfortable buildsystem:
a library package template
an executable package template
a program combined with a library package template
Some template components are shared between all three package types and are described here; other template components are individual to each package type and are described later on.
Creating a Library Template
This template creates a library-only package; it can be created with PTXdist’s newpackage option:
$ ptxdist newpackage src-autoconf-lib
ptxdist: creating a new 'src-autoconf-lib' package:
ptxdist: enter package name...........: foo
ptxdist: enter version number.........: 1
ptxdist: enter package author.........: Juergen Borleis <jbe@pengutronix.de>
ptxdist: enter package section........: project_specific
generating rules/foo.make
generating rules/foo.in
local_src/foo does not exist, create? [Y/n] Y
./
./configure.ac
./Makefile.am
./COPYING
./lib@name@.pc.in
./wizard.sh
./lib@name@.h
./@name@.c
./autogen.sh
mkdir: created directory 'm4'
./
./ax_armv6_detection.m4
./ptx.m4
./internal.h
./pkg.m4
./ax_armv4_detection.m4
./ax_floating_point.m4
./INSTALL
./ax_armv5_detection.m4
./attributes.m4
./ax_armv7_detection.m4
./ax_code_coverage.m4
After this step the new directory local_src/foo
exists and contains various template files. All of these files are intended to be modified by yourself.
The content of this directory is:
$ tree local_src/foo/
local_src/foo/
|-- COPYING
|-- INSTALL
|-- Makefile.am
|-- autogen.sh
|-- configure.ac
|-- foo.c
|-- internal.h
|-- libfoo.h
|-- libfoo.pc.in
`-- m4/
|-- ptx.m4
|-- attributes.m4
|-- ax_code_coverage.m4
|-- pkg.m4
|-- ax_armv4_detection.m4
|-- ax_armv5_detection.m4
|-- ax_armv6_detection.m4
|-- ax_armv7_detection.m4
`-- ax_floating_point.m4
Most files and their content are already described above. Some files and their content are library specific:
Build system related files
configure.ac
The shared part is already described above. For a library there are some extensions:
- LT_CURRENT / LT_REVISION / LT_AGE
define the binary compatibility of your library. The rules for how these numbers are defined are:
library code was modified:
LT_REVISION++
interfaces changed/added/removed:
LT_CURRENT++ and LT_REVISION = 0
interfaces added:
LT_AGE++
interfaces removed:
LT_AGE = 0
You must manually change these numbers whenever you change the code of your library, prior to a release.
- REQUIRES
To enrich the generated *.pc file for easier dependency handling you should also fill the REQUIRES variable. Here you can define, from the package management point of view, the dependencies of your library. For example, if your library depends on the udev library and requires a specific version of it, just add the string
udev >= 1.0.0
to the REQUIRES variable. Note: the listed packages must be space-separated.
- CONFLICTS
if your library conflicts with a different library, add this different library to the CONFLICTS variable (from the package management point of view).
libfoo.pc.in
This file gets installed to support the pkg-config tool for package management. It contains important information for users of your package on how to use your library and also handles its dependencies. Some TODOs in this file need your attention:
- Name
A human-readable name for the library.
- Description
add a brief description of your library here
- Version
the main revision of the library. Will be automatically replaced from your settings in configure.ac.
- URL
where to find your library. Will be automatically replaced from your settings in configure.ac.
- Requires
space-separated list of modules your library itself depends on and which are managed by pkg-config. The listed modules get honored for the static linking case and should not be given again in the Libs.private line. This line will be filled by the REQUIRES variable from configure.ac.
- Requires.private
space-separated list of modules your library itself depends on and which are managed by pkg-config. The listed modules get honored for the static linking case and should not be given again in the Libs.private line. This line will be filled by the REQUIRES variable from configure.ac.
- Conflicts
list of packages your library conflicts with. Will be automatically replaced from your CONFLICTS variable settings in configure.ac.
- Libs
defines the linker command line content needed to use your library and link it against other applications or libraries
- Libs.private
defines the linker command line content needed to use your library and link it against other applications or libraries statically. List only libraries here which are not managed by pkg-config (i.e. that do not conflict with modules given in Requires). This line will be filled by the LIBS variable from configure.ac.
- Cflags
required compile flags to make use of your library. Unfortunately you must mix CPPFLAGS and CFLAGS here, which is a really bad idea.
It is not easy to fully automate the adaptation of the pc file. At least the lines Requires, Requires.private and Libs.private are hard to fill automatically for packages which are highly configurable.
A nice and helpful description of this kind of configuration file can be found here:
Creating an Executable Template
Creating an executable template works nearly the same as the example above in Creating a Library Template. It just skips the library related parts.
The command:
$ ptxdist newpackage src-autoconf-prog
Results in the following generated files:
$ tree local_src/foo
|-- COPYING
|-- INSTALL
|-- Makefile.am
|-- autogen.sh
|-- configure.ac
|-- foo.c
|-- internal.h
`-- m4/
|-- ptx.m4
|-- attributes.m4
|-- ax_code_coverage.m4
|-- pkg.m4
|-- ax_armv4_detection.m4
|-- ax_armv5_detection.m4
|-- ax_armv6_detection.m4
|-- ax_armv7_detection.m4
`-- ax_floating_point.m4
Creating an Executable with a Library Template
Creating a library and an executable which makes use of this library is a combination of Creating a Library Template and Creating an Executable Template.
The command:
$ ptxdist newpackage src-autoconf-proglib
Results in the following generated files:
$ tree local_src/foo
|-- COPYING
|-- INSTALL
|-- Makefile.am
|-- autogen.sh
|-- configure.ac
|-- internal.h
|-- libfoo.c
|-- libfoo.h
|-- libfoo.pc.in
|-- foo.c
`-- m4/
|-- ptx.m4
|-- attributes.m4
|-- ax_code_coverage.m4
|-- pkg.m4
|-- ax_armv4_detection.m4
|-- ax_armv5_detection.m4
|-- ax_armv6_detection.m4
|-- ax_armv7_detection.m4
`-- ax_floating_point.m4
The intended purpose of this template is a new tool which has all of its features implemented in the library, while the executable is a shell command frontend that provides the library’s features to an interactive user.
The advantage of this approach is that the library’s features can also be used by a non-interactive user, e.g. a different application.
Note
If you intend to use the GPL license, think about using the LGPL license variant for the library part of your project.
Important
If you want to be able to move code from the executable (and GPL licensed) part into the library (and LGPL licensed) part later on, you should use the LGPL license for both parts from the beginning. Otherwise you may not be able to move source code in such a way, because it would require a license change for this specific piece of source code (to be pedantic!).
6.11. Controlling Package Dependencies in more Detail
In section Managing External Compile Time Dependencies a simple method is shown for defining an external package dependency which a particular package needs in order to be built.
Implicit Dependencies
For the simple dependency definition, PTXdist internally adds a dependency on the install stage of the defined external dependency (or, to use PTXdist glossary, of a different package).
We must keep this in mind, because there are packages out there which don’t install anything in their install stage; they install something in their targetinstall stage instead. In this case, even if the dependency is defined as shown in Managing External Compile Time Dependencies, building the particular package may fail depending on the build order.
To avoid this, an explicit make
style dependency must be added to the rule
file. If the compile stage of package foo
has a dependency on package
bar
’s targetinstall stage just add the following lines to your rule file:
$(STATEDIR)/foo.compile: $(STATEDIR)/bar.targetinstall
Build-Time only Dependency
Sometimes packages have a compile-time dependency on a different package, but can live without its content at run-time. An example is a static library which is linked at compile-time and not required as a separate package at run-time. Another reason to make use of this detailed dependency handling is that it can make the developer’s life easier when using individual package lists for dedicated image files. Think about a development image and a production image which should be built at the same time but should each contain a different package list (refer to Creating Individual Root-Filesystems for each Variant for details).
Marking a menu file based dependency with if BUILDTIME
limits the dependency to compile-time only. In this case it is possible to have the package in one image’s list, but not its dependency.
Run-Time only Dependency
The other way round is if RUNTIME
. This forces the dependency package to be part of the final image as well, but PTXdist can improve its build-time job by reordering the package builds.
A use case for this run-time dependency can be a package which just installs a
shell script. This shell script makes use of some shell commands which must be
present at run-time and thus depends on a package which provides these shell
commands. But these shell commands are not required to build the shell script
itself. In this case PTXdist can build both packages independently.
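As a sketch (not taken from a real BSP; the package and dependency names are made up), such dependencies could look like this in a package’s menu file rules/foo.in:
## SECTION=project_specific
config FOO
	tristate
	prompt "foo"
	select BAR if BUILDTIME
	select BAZ if RUNTIME
	help
	  Example package with a build-time only and a run-time only dependency.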
6.12. umask Pitfall
When using PTXdist, keep in mind that it requires some ‘always expected’ permissions to do its job (this does not include root permissions!). It does, however, include some requirements related to file permission masks.
PTXdist requires a umask
of 0022
to be able to create files accessible
by regular users. This is important at build-time, since it propagates to the
generated target filesystem images as well. For example the install_tree
macro (refer to install_tree) uses the file permissions it finds
in the build machine’s filesystem also for the target filesystem image. With
a different umask
than 0022
at build-time this may fail badly at
run-time with strange erroneous behaviour (for example some daemons with
regular user permissions cannot access their own configuration files).
If the current umask
is more permissive than the required umask
,
then ptxdist will change it as required. For example, a umask
of
0002
is quite common when the primary group of a user has the same name
as the user.
For security reasons, PTXdist will not set a more permissive umask
than the current one.
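To check the current shell’s umask before starting a build (it should print 0022 or something more restrictive), and to set it explicitly if needed:
$ umask
0022
$ umask 0022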
6.13. Read Only Filesystem
A system can run a read-only root filesystem in order to have a unit which can be powered off at any time, without any previous shut down sequence.
But many applications and tools still expect a writable filesystem, for example to temporarily store some kind of data or logging information. All these write attempts will fail and thus the applications and tools will fail, too.
According to the Filesystem Hierarchy Standard 2.3 the directory tree in
/var/
is traditionally writable and its content is persistent across system
restarts. Thus, this directory tree is used by most applications and tools to
store their data.
The Filesystem Hierarchy Standard 2.3 defines the following directories
below /var/
:
cache/ : Application specific cache data
crash/ : System crash dumps
lib/ : Application specific variable state information
lock/ : Lock files
log/ : Log files and directories
run/ : Data relevant to running processes
spool/ : Application spool data
tmp/ : Temporary files preserved between system reboots
Although this writable directory tree is useful and valid for full blown host machines, an embedded system can behave differently here: For example a requirement can drop the persistency of changed data across reboots and always start with empty directories.
Partially RAM Disks
This is the default behaviour of PTXdist: it mounts a couple of RAM disks over
directories in /var
expected to be writable by various applications and
tools. These RAM disks always start in an empty state and are defined as follows:
mount point | mount options
---|---
/var/log | nosuid,nodev,noexec,mode=0755,size=10%
/var/lock | nosuid,nodev,noexec,mode=0755,size=1M
/var/tmp | nosuid,nodev,mode=1777,size=20%
This is a very simple and optimistic approach and works for surprisingly many use
cases. But some applications expect a writable /var/lib
and will fail due
to this setup. Using an additional RAM disk for /var/lib
might not help in
this use case, because it will bury all build-time generated data already present
in this directory tree (package pre-defined configuration files for example).
Overlay RAM Disk
A different approach to have a writable /var
without persistency is to use
a so called overlay filesystem. This overlay filesystem is a transparent
writable layer on top of a read-only filesystem. After the system’s start the
overlay filesystem layer is empty and all reads will be satisfied by the
underlying read-only filesystem. Writes (new files, directories, changes of
existing files) are stored in the overlay filesystem layer and on the
next read satisfied by this layer, instead of the underlying read-only
filesystem.
PTXdist supports this use case, by enabling the overlay feature for the
/var
directory in its configuration menu:
Root Filesystem --->
directories in rootfs --->
/var --->
[*] overlay '/var' with RAM disk
Keep in mind: this approach just enables write support to the /var
directory
tree, but nothing stored/changed in there at run-time will be persistent and is
always lost if the system restarts. And each additional RAM disk consumes
additional main memory; if applications and tools fill up the directory tree in /var
, the machine might run short on memory and slow down dramatically.
Thus, it is a good idea to check the amount of data written by applications and
tools to the /var
directory tree and limit it by default.
You can limit the size of the overlay filesystem RAM disk as well. For this
you can provide your own
projectroot/usr/lib/systemd/system/run-varoverlayfs.mount
with restrictive
settings. But then the used applications and tools must deal with the
“no space left on device” error correctly…
This overlay filesystem approach requires the overlay filesystem feature
from the Linux kernel. In order to use it, the feature CONFIG_OVERLAY_FS must
be enabled. One of the mount options used for the overlayfs in the default
projectroot/usr/lib/systemd/system/var.mount
unit requires Linux 4.19 or newer.
If your kernel does not meet this requirement you can provide your own local
and adapted variant of the mentioned mount unit.
6.14. Using a userland NFS Server for the Target
When developing software for a target system, it is very tedious to change files or settings on the target itself, or to try the application under development on the target again and again just to see whether a feature works, whether a GUI looks nicer now, or whether it is handier to control on a small touchscreen display.
Using the Network File System (NFS) can improve the development speed considerably in this case. Everything filesystem related still happens on the development host, and each modification can be used on the target immediately.
Using PTXdist’s built-in NFS Userland Server
PTXdist can export the BSP’s root filesystem by itself. Since a userspace tool running as a regular user cannot open network ports below 1024, it uses a different network port. The default is port 2049. To make use of this PTXdist feature, run inside the BSP at your development host:
$ ptxdist nfsroot
[...]
Mount rootfs with nfsroot=/root,v3,tcp,port=2049,mountport=2049
On the target side a slightly different configuration must be used to work with the userspace NFS server PTXdist provides, instead of the regular kernel space NFS server the Linux kernel provides. When starting PTXdist’s nfsroot
feature, it outputs the special command line we need to instruct the Linux kernel to use this userland NFS server for the root filesystem to boot its userland from.
What still has to be considered here is the network configuration. Refer to the kernel documentation about the capabilities of the ip=
kernel command line option and check whether we need to set up a special IP address on the target side to reach the host running PTXdist and its nfsroot feature.
The kernel command line parameter to use PTXdist’s nfsroot feature looks like this:
nfsroot=<host-ip>:/root,v3,tcp,port=2049,mountport=2049
Here we must replace the <host-ip>
part of the line above with the IP address of our host running PTXdist’s nfsroot feature.
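Putting it together, a complete kernel command line for booting from the userland NFS server could look like this (the host IP is only an example; root=/dev/nfs and ip= are the standard kernel parameters for an NFS root):
root=/dev/nfs ip=dhcp nfsroot=192.168.1.10:/root,v3,tcp,port=2049,mountport=2049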
If we run a recent Barebox bootloader with bootspec support, booting a target via the network only is very easy. At the Barebox prompt just enter:
barebox@target:/ boot nfs://<host-ip>:2049//root
In this case Barebox will mount the defined root filesystem via NFS, load the included bootspec file, read its information and continue by loading the matching kernel and maybe a matching device tree.
File Permissions and Ownership
PTXdist runs as a regular user. As a result, the files in the root directory are owned by the user. Any SUID bits are removed and all special files, such as device nodes, are represented by empty regular files.
The userland NFS server has two mechanisms to provide the correct ownership, permissions, etc. to the client.
Fakeroot is started and the correct ownership, permissions, etc. are applied. Then the userland NFS server is started.
- Known issues with this approach:
Ownership changes made by the NFS client are lost when the NFS server is stopped.
Fakeroot writes SUID bits to the underlying filesystem. As a result, the file is now SUID for the regular user. This causes problems when the same rootfs is used with a regular NFS server as well.
If the underlying filesystem is changed behind its back then fakeroot can get confused and may provide incorrect data.
While ownership and permissions are presented correctly, they are not fully enforced that way. So this is useful for testing but not secure in any way.
In the developer options of ptxdist setup there is an option “provide ownership/permission metadata in the nfsroot”. If this is enabled, then PTXdist will store the permission data inside the rootfs as additional files. The format is mostly what qemu uses for its security_model=mapped-file option for virtual filesystems. The only difference is that symlinks remain real symlinks. The userland NFS server reads these extra files and provides the correct permissions.
- Known issues with this approach:
The additional files can be a problem when the rootfs is shared with a regular NFS server where these files are visible. For example, programs that search for plugins will find the extra non-binary files.
While ownership and permissions are presented correctly, they are not fully enforced that way. So this is useful for testing but not secure in any way.
6.15. Supporting Multiple Device Variants in one Platform
Many projects have to deal with more than just one hardware and software configuration. There may be multiple hardware generations or variants with different hardware and software features.
To simplify maintenance, it can be desirable to handle all variants in one PTXdist platform. This can be achieved by creating new image rules for each supported variant.
Providing a Bootloader for each Variant
What needs to be done here depends on the hardware and on which bootloader is used. For example, for barebox on i.MX6, images for all variants can be generated from one build tree. In this case the default barebox package is sufficient.
If different builds are needed, then a new bootloader package for each variant can be created. For barebox, PTXdist provides a template to simplify this. For other bootloaders more work is needed to create the package rules manually. See Adding New Packages for more details on how to create a new package.
Note
PTXdist looks in patches/$(<PKG>)
for the patches. Symlinks
can be used to share the patch stack across multiple bootloader packages.
Creating Individual Root-Filesystems for each Variant
For each variant, a rootfs image can be created. The image-genimage
template for new packages can be used to create these images. See
Image Packages for more details.
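Such an image package can be created with the usual template mechanism; the prompts are then answered as for any other new package:
$ ptxdist newpackage image-genimage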
In this case, the important part is the <PKG>_PKGS
variable.
There are two ways to manage the package list for the image:
Creating the package list manually
Manually create the package list by listing package names or by using make macros to manipulate the default package list $(PTX_PACKAGES_INSTALL).
To add a single custom package, extra packages can be used. An extra package is not added to $(PTX_PACKAGES_INSTALL). It is created by modifying the package rule like this:
EXTRA_PACKAGES-$(PTXCONF_BOARD_A_EXTRA) += board-a-extra
The resulting package is then added explicitly to one image:
IMAGE_ROOT_BOARD_A_PKGS := $(PTX_PACKAGES_INSTALL) board-a-extra
This is not recommended for larger changes to the packages list, as it is easy to break dependencies this way.
Use a collection config to create the package list
To prepare for this, all packages that are not part of all variants must be set to M in menuconfig.
Then a new collection for the variant is created:
$ touch configs/collectionconfig-board-a
$ ptxdist --collectionconfig=configs/collectionconfig-board-a menuconfig collection
All extra packages for this variant are selected here. Then the collection config is configured for the image:
IMAGE_ROOT_BOARD_A_PKGS := $(call ptx/collection, $(PTXDIST_WORKSPACE)/configs/collectionconfig-board-a)
With a collection PTXdist will take care of all dependencies. This makes it easy to manage multiple root filesystems with significantly different package lists.
Putting it all Together
The final steps are highly hardware dependent. In some cases a bootloader image and a rootfs are all that is needed.
To boot from SD-Card a disk image including bootloader, partition table and
rootfs is needed. The image-genimage
template can be used again to
create such an image for each variant.
Note
The genimage config files in config/images/
are good examples when writing genimage config files for the new images.
6.16. The PTXdist User Manual
The HTML based PTXdist user manual can be found on the web at
Requirements to build the Documentation
PTXdist can build its own user manual and supports HTML or PDF as the output formats. PTXdist uses the Sphinx documentation maker to build both output formats. The host system itself must provide some tools and data:
- Fonts:
Liberation Sans/Liberation Sans Bold or DejaVu Sans/DejaVu Sans Bold (for the “Portable Document Format”, e.g. PDF)
Inconsolata, DejaVu Sans Mono or Liberation Sans Mono (for the “Portable Document Format”, e.g. PDF)
- Tools:
Sphinx version 1.3.4, better 1.4.2…1.4.9 or >= 1.6.5 (for all kinds of document formats)
Sphinx theme from https://readthedocs.org/
TeX Live 2016 (for the “Portable Document Format”, e.g. PDF)
Using a Python virtual environment
Sphinx is Python based and thus can be installed via a virtual environment when not globally present in the host system.
$ pip3 install --upgrade --user pip virtualenv
$ virtualenv env
$ source env/bin/activate
$ pip3 install sphinx
$ pip3 install sphinx_rtd_theme
Note
Whenever you want to create the PTXdist user manual, you must first source the env/bin/activate
file if not already done, or do each PTXdist call with the --virtualenv=<dir> parameter.
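For example, to build the HTML documentation using the virtual environment created above without sourcing it first (assuming it lives in ./env):
$ ptxdist --virtualenv=$(pwd)/env docs-html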
Building the Documentation
PTXdist comes with support to generate HTML or Portable Document Format based documentation from the sources.
The command:
$ ptxdist docs-html
will build the HTML based documentation into Documentation/html
and the entry
file for this kind of documentation is Documentation/html/index.html
.
The command:
$ ptxdist docs-latex
will build the LaTeX based documentation which results in the final Portable Document Format document. The result can be found in
Documentation/latex/OSELAS.BSP-Pengutronix-Example-Quickstart.pdf
.
Both commands can be executed in the BSP or the toplevel PTXdist directory to create the BSP specific or generic documentation respectively.
6.17. Integrate project specific Documentation into the Manual
PTXdist supports integrating project specific documentation into the final PTXdist manual. To do so, PTXdist handles file replacements and additions while generating the documentation.
File replacement works in the same manner as for all other files in a PTXdist based project: a local file with the same name supersedes a global file from PTXdist.
With this mechanism we can replace existing PTXdist documentation or add new content.
If we want to add a new global section to the manual we can copy the global
PTXdist doc/index.rst
file into our local doc/
directory and adapt it
accordingly.
To change or add things less intrusively, we can work on the various *.inc
files in PTXdist’s doc/
directory which define the content of the sections.
For example, to change the content of the image creation section, we can copy the global PTXdist doc/user_images.inc
into our local doc/
directory and adapt it to the behaviour of our project.
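A sketch of this supersede step from within the BSP (the PTXdist installation path is a placeholder):
$ mkdir -p doc
$ cp <ptxdist-install-dir>/doc/user_images.inc doc/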
In the generic documentation source, much of the text uses variables instead of fixed content. These variables are filled with values extracted from the current PTXdist project prior to building the final documentation. Since PTXdist projects are bound to a defined PTXdist version and toolchain version, this kind of information is extracted from the current settings and substituted in the documentation. This behaviour ensures the documentation includes the project’s exact definition of its external dependencies.
Refer to the PTXdist file doc/conf.py
for more information on variable substitution. This PTXdist global file can be superseded by a local copy as well.
Documentation structure for layered BSPs
When you call ptxdist docs-html
in a layer, PTXdist will assemble the
doc/
directory from all lower layers in the usual layering fashion,
and flatten it into a single directory.
In the highest-level table of contents, PTXdist uses a wildcard match for
index-layer*
files, which is the entry point to integrate documentation for
your own layers by creating files with that pattern.
It is advisable to number the index files accordingly so their ordering in the
documentation reflects the layer order.
PTXdist itself uses the file index-layer-0-ptxdist.rst
to include the title
page of the PTXdist manual first, and includes the rest of the PTXdist
documentation after the layer-specific files.
For example, see the following directory structure:
my-bsp/
├── common/
│ └── doc/
│ └── index-layer-1-common.rst
└── product-layer
. ├── base/ -> ../common
. └── doc
. └── index-layer-2-product.rst
In this example, the contents of index-layer-1-common.rst
and
index-layer-2-product.rst
would describe some layer-specific content, or
even have their own table of contents in the usual reStructuredText fashion to
include more sub-sections in separate files.
The documentation built for the product-layer will therefore include a
section each for the common layer documentation, then for the product-layer
documentation, and finally the rest of the PTXdist documentation.