Getting started
This section describes how to download and build the Microchip BSP from sources.
A reference board is required in order to test the resulting artifacts.
1. Terms
The following terms are used to describe the generated artifacts throughout this document:
- ITB - Image Tree Blob. An image using the FIT (Flattened Image Tree) format that can contain the Linux kernel, device tree blob, root file system image, etc.
- ITB-Rootfs - An ITB containing the kernel, a number of device trees, and a full SquashFS as rootfs.
- ITB-Initramfs - An ITB containing the kernel, a number of device trees, and an initramfs containing a small stage2-loader.
- ITB-Bare - An ITB containing the kernel, a number of device trees, and no root file system.
- ext4-ITB-Bare - An ext4 filesystem containing an ITB-Bare in the boot folder.
- ext4-ITB-Initramfs - An ext4 filesystem containing an ITB-Initramfs in the boot folder.
- ext4-Bare - An ext4 filesystem without an ITB in the boot folder.
The images created by default are ITB-Rootfs and ext4-Bare.
2. Development environment
The development environment must support at least the packages required by Buildroot: https://buildroot.org/downloads/manual/manual.html#requirement
On Ubuntu 20.04 LTS, the required packages can be installed like this:
$ sudo apt-get install -y \
    asciidoc astyle autoconf bc bison build-essential ccache cmake \
    cmake-curses-gui cpio dblatex default-jre doxygen file flex gdisk \
    genext2fs gettext-base git graphviz gzip help2man iproute2 \
    iputils-ping libacl1-dev libelf-dev libglade2-0 libgtk2.0-0 \
    libmpc-dev libncurses5 libncurses5-dev libncursesw5-dev libssl-dev \
    libtool locales m4 mtd-utils parted patchelf python3 python3-pip \
    rsync ruby-full ruby-parslet squashfs-tools sudo texinfo tree \
    u-boot-tools udev util-linux vim w3m wget xz-utils

# Additional Ruby packages
$ sudo gem install nokogiri asciidoctor

# Enable use of `python` command instead of `python3`
$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 100

# Additional Python packages
$ sudo python -m pip install matplotlib
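After installing, you can script a quick sanity check that key tools ended up on the PATH. This is a minimal sketch; the tool list is illustrative, not exhaustive:

```shell
# Print a "missing:" line for every tool that is not on the PATH.
check_tools() {
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
    done
}

# Check a few of the key binaries provided by the packages above:
check_tools gcc make git wget cpio rsync
```

No output means everything in the list was found.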
An alternative is to use the Docker image provided by Microchip which contains Ubuntu 20.04 LTS and all of the necessary packages.
See Using Docker for details on how to configure and use a Docker image.
3. Download
3.1. BSP
Microchip BSPs are stored in Amazon Web Services (AWS) at this location:
http://mscc-ent-open-source.s3-website-eu-west-1.amazonaws.com/?prefix=public_root/bsp/
Download e.g. mscc-brsdk-source-2024.09.tar.gz via browser or wget:
$ cd <workspace-to-install-sources>
$ wget http://mscc-ent-open-source.s3-eu-west-1.amazonaws.com/public_root/bsp/mscc-brsdk-source-2024.09.tar.gz
$ ls
mscc-brsdk-source-2024.09.tar.gz
When the BSP has been downloaded, it must be extracted:
$ tar xf mscc-brsdk-source-2024.09.tar.gz
$ ls
mscc-brsdk-source-2024.09  mscc-brsdk-source-2024.09.tar.gz
3.2. Toolchain
Microchip BSPs require an appropriate toolchain. The toolchains are stored in Amazon Web Services (AWS) at this location:
http://mscc-ent-open-source.s3-website-eu-west-1.amazonaws.com/?prefix=public_root/toolchain/
The toolchains can be downloaded as either binary or source. We recommend using the binary toolchain.
See Customizing the Toolchain if you want to customize the toolchain.
The BSP expects the toolchain to be located in the /opt/mscc/ folder.
To identify the toolchain version that matches the BSP:
$ cd mscc-brsdk-source-2024.09
$ cat ./external/support/misc/mscc-version | grep toolchain
toolchain: 2024.02-105
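The lookup can also be scripted. Here is a small sketch that derives the toolchain tarball name from a `toolchain:` line; the line is fed in literally here, but in practice you would pipe in the contents of ./external/support/misc/mscc-version:

```shell
# Extract the version number from a "toolchain: <version>" line and
# build the corresponding binary tarball name from it.
version=$(printf 'toolchain: 2024.02-105\n' | sed -n 's/^toolchain: *//p')
echo "mscc-toolchain-bin-${version}.tar.gz"
```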
Download mscc-toolchain-bin-2024.02-105.tar.gz via browser or wget:
$ wget http://mscc-ent-open-source.s3-eu-west-1.amazonaws.com/public_root/toolchain/mscc-toolchain-bin-2024.02-105.tar.gz
$ ls
mscc-toolchain-bin-2024.02-105.tar.gz
When the toolchain has been downloaded, it must be extracted into /opt/mscc:
$ sudo mkdir -p /opt/mscc
$ sudo tar xf mscc-toolchain-bin-2024.02-105.tar.gz -C /opt/mscc
Test the toolchain:
$ /opt/mscc/mscc-toolchain-bin-2024.02-105/arm-cortex_a8-linux-gnueabihf/usr/bin/arm-cortex_a8-linux-gnueabihf-gcc --version
arm-cortex_a8-linux-gnueabihf-gcc.br_real (Buildroot 2024.02) 13.2.0
Copyright (C) 2023 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
4. BSP structure
This is the structure used by the BSP:
| Folder | Description |
|---|---|
| ./external | Contains source that is not provided by the Buildroot project. |
| ./external/package | All packages added by Microchip are located in this folder. |
| ./external/configs | Contains all the defconfigs added by Microchip. |
| ./dl | All source code referenced by Buildroot or added by Microchip is available as compressed files in a per-package folder here. |
| ./output/build_<target> | The predefined build folders for each target. |
| ./output/artifact | Created during build; contains the BSP package result. |
| ./output/mscc-brsdk-<arch>-<version> | Created during build; contains the BSP binaries (before the final package is created). |
Most of the other folders are described here: https://buildroot.org/downloads/manual/manual.html#_developer_guide
Building the BSP does not require any downloads other than the BSP and toolchain packages.
5. Building
The BSP can be built by a script provided by Microchip or by the normal Buildroot out-of-tree procedure.
Buildroot is designed to be built as non-root; you do not need root privileges to configure and build.
5.1. Building via script
The source package contains a Ruby script called build.rb which makes it easy to build the BSP for one or more targets.
Only targets defined by Microchip can be built with the build.rb script.
The syntax is:

./build.rb <STEP> [--configs <TARGETS>]

where <STEP> is one of:

[ build | pack | relocate | all ]

and <TARGETS> is a regular expression that is matched against the defconfigs located in the ./external/configs folder; only the matching targets are built. All targets are built if the --configs option is omitted.
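The selection behaviour can be illustrated with a small sketch. This is a simplified stand-in for what build.rb does, using an illustrative subset of defconfig names:

```shell
# Treat the first argument as a regular expression and print only the
# defconfig names that match it (mimicking the --configs option).
select_configs() {
    pattern="$1"; shift
    for cfg in "$@"; do
        echo "$cfg" | grep -qE "$pattern" && echo "$cfg"
    done
    return 0
}

# Illustrative subset of defconfig names from ./external/configs:
select_configs 'arm64' arm_standalone_defconfig arm64_xstax_defconfig arm64_bootloaders_defconfig
```

Here the pattern `arm64` selects arm64_xstax_defconfig and arm64_bootloaders_defconfig but not arm_standalone_defconfig.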
The build step all executes all the above build steps in the following order: build, pack, relocate.
For more options in the ./build.rb script, use ./build.rb --help.
If your development system is not Ubuntu 20.04 you can also use the docker-run tool to run the build in a Docker container. This is described in the Using Docker section.
5.1.1. Build step build
This is an example of how to run the build step with the arm_standalone target:
$ ./build.rb build --configs arm_standalone_defconfig
This step builds from the sources and stores its output in the corresponding target build folder.
The output from the build command above is therefore located in the ./output/build_arm_standalone folder.
This folder contains several subfolders that are documented here:
https://buildroot.org/downloads/manual/manual.html#_buildroot_quick_start
The build process is logged to the file ./output/build_arm_standalone/arm_standalone_defconfig.log. You can monitor this file from another terminal during the build:
$ tail -f ./output/build_arm_standalone/arm_standalone_defconfig.log
5.1.2. Build step pack
This is an example of how to run the pack step with the arm_standalone target:
$ ./build.rb pack --configs arm_standalone_defconfig
This step collects all the results and adds them to the folder ./output/mscc-brsdk-<arch>-<version>.
The output from the pack command above is therefore located in the folder ./output/mscc-brsdk-arm-2024.09.
The pack step also saves the log file from the build step to ./output/mscc-brsdk-logs-2024.09/arm_standalone_defconfig.log.
5.1.3. Build step relocate
This is an example of how to run the relocate step with the arm_standalone target:
$ ./build.rb relocate --configs arm_standalone_defconfig [--local]
This step creates an artifact folder, ./output/artifact, that contains all the artifacts needed for distribution of the binary BSP.
If the --local option is present, the binary BSP is also installed in /opt/mscc on the local PC.
5.2. Building for XStaX
In the previous sections the example showed how to build arm_standalone_defconfig.
If you are building a BSP package for XStaX the procedure is the same, but the XStaX build also depends on artifacts provided by the bootloader. To include these artifacts in the BSP package, you need to combine the builds like this when building e.g. for the arm64 version:
$ ./build.rb all --configs arm64_xstax --configs arm64_bootloaders
This build will collect artifacts from the two configurations, so the final BSP package will contain all the necessary artifacts to build the XStaX image.
5.3. Building the Buildroot way
The BSP package has been designed to work with the building out-of-tree concept described here:

https://buildroot.org/downloads/manual/manual.html#_building_out_of_tree

This means that you can switch into the build directory for the actual target and run e.g. make menuconfig or make without the need to pass O=<…> and -C <…>:
Here is how to build for the same target as when using the build script above:
$ cd ./output/build_arm_standalone
$ make menuconfig
$ make
The output from the commands above is located in the ./output/build_arm_standalone/images folder.
If you want to use another output folder you must prepare it first:
$ make BR2_EXTERNAL=./external O=./output/mybuild arm_standalone_defconfig
$ cd ./output/mybuild
$ make menuconfig
$ make
When building in this way you cannot use the ./build.rb script to collect the artifacts; this must be done manually.

If you create your own output folder structure and configuration files, be aware that the folder structure must be two levels deep, corresponding to the output/target folder structure maintained by the build.rb tool. The reason for this limitation is that some references in the defconfig files use relative folder paths to point to build artifacts, so this layout must be respected unless you also update the defconfig file that you use.
5.4. Using Docker
Both the BSP and the toolchain are based on Buildroot, which requires a number of mandatory and optional packages to be installed on the build host.
You can avoid installing all of these packages by using a Docker image together with the dr helper script provided by Microchip: https://github.com/microchip-ung/docker-run

Clone the repo, copy the dr script into a folder that is within your PATH, e.g. /usr/local/bin, and make it executable with sudo chmod a+x /usr/local/bin/dr.
The dr script requires a configuration file, .docker.env, which is included in the project root of both the BSP and the toolchain sources.
This configuration file specifies which Docker image to run and a set of options to pass to it.
You need to install Docker in order to use the dr script. See https://docs.docker.com/engine/install/ for instructions.
The Docker image has been verified to work with Ubuntu 20.04 and Windows 10 with WSL2 and Ubuntu 20.04.
Change current folder to the project root and try the following command:
$ dr bash
<username>@d3aeab611408:/home/<username>/project/brsdk/mscc-brsdk-source-2024.09$
The Docker image is downloaded automatically the first time you use the dr script; this can take some time.
The new prompt indicates that you are running bash inside the Docker container where all the packages required by Buildroot are available.
Note that you are the same user inside the container as outside the container.
The current folder is the same inside the container as outside the container.
Everything within the project root is accessible inside the container.
Type exit to leave the Docker container.
It is now possible to run all kinds of commands inside the Docker container simply by prepending dr to the command:
$ dr ./build.rb all
$ cd ./output/build_arm_standalone
$ dr ls -al
$ dr make menuconfig
$ dr make
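Conceptually, dr wraps docker run. The sketch below is a simplified, hypothetical approximation: the real script reads the image name and options from .docker.env, and the image name used here is an assumption. The sketch only prints the command it would run, so it is side-effect free:

```shell
# Hypothetical, simplified model of the dr wrapper: run the given command
# in the build image, as the calling user, with the current directory
# bind-mounted at the same path inside the container.
dr_sketch() {
    image="ghcr.io/microchip-ung/bsp-buildenv"   # assumed image name
    # Print the command instead of executing it, to keep the sketch inert:
    echo docker run --rm -it \
        -u "$(id -u):$(id -g)" \
        -v "$PWD:$PWD" -w "$PWD" \
        "$image" "$@"
}

dr_sketch make menuconfig
```

This mirrors the behaviour described above: the same user, the same current folder, and the project tree visible inside the container.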
If you need to mount other folders in the container, you can add them to the MCHP_DOCKER_OPTIONS variable in .docker.env, changing it from this:
MCHP_DOCKER_OPTIONS="-v $opt:$opt --tmpfs /tmp:exec -e BLD_USER=$(id -un) -e BLD_UID=$(id -u)"
to this:
MCHP_DOCKER_OPTIONS="-v $opt:$opt --tmpfs /tmp:exec -e BLD_USER=$(id -un) -e BLD_UID=$(id -u) -v /usr/local/src/xyz:/usr/local/src/xyz"
Now you can access the /usr/local/src/xyz folder both inside and outside the container.
The following Docker-related repos and the corresponding Docker image are publicly available on GitHub:

- dr script: https://github.com/microchip-ung/docker-run
- Dockerfile etc.: https://github.com/microchip-ung/docker-bsp-buildenv
- Docker image: https://github.com/orgs/microchip-ung/packages/container/package/bsp-buildenv
6. Selecting artifacts to build
The artifact ITB-Rootfs is built by default.
This is done via a shell script that Buildroot calls after creating filesystem images.
The configuration name is BR2_ROOTFS_POST_IMAGE_SCRIPT and is accessible via menuconfig:
$ cd ./output/build_arm_standalone
$ make menuconfig
Select System configuration and Custom scripts to run after creating filesystem images, and add the script to run.
Most of the targets are initially configured to use a script called post-image.sh that is located somewhere under the ./board folder.
In the next field, Extra arguments passed to custom scripts, you can configure which artifacts to build.
Valid values are:
| Value | Artifact |
|---|---|
| itb-rootfs | Build fit.itb |
| itb-initramfs | Build itb-initramfs.itb |
| itb-bare | Build itb-bare.itb |
| ext4-itb-bare | Build ext4-itb-bare.ext4 |
| ext4-itb-initramfs | Build ext4-itb-initramfs.ext4 |
| ext4-bare | Build ext4-bare.ext4 |
| ubifs-itb-bare | Build a UBIFS image with an itb-bare.itb |
Add the artifacts to build separated by a space. If this field is empty then ITB-Rootfs is built by default.
Some targets support only a subset of the artifacts.

The predefined post-image.sh scripts make use of a helper script called imggen.rb, which is located in the ./external/support/scripts folder.
To see how to use this script directly:
$ ./external/support/scripts/imggen.rb --help
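The artifact selection described above can be sketched as a simplified, hypothetical post-image script. Buildroot calls the real script with the images directory as its first argument, followed by the configured extra arguments; only a few artifact types are modelled here:

```shell
# Hypothetical, simplified post-image dispatcher: print the image file
# that would be produced for each requested artifact. An empty artifact
# list falls back to the default, itb-rootfs.
post_image() {
    binaries_dir="$1"; shift
    [ $# -eq 0 ] && set -- itb-rootfs
    for artifact in "$@"; do
        case "$artifact" in
            itb-rootfs)     echo "$binaries_dir/fit.itb" ;;
            itb-initramfs)  echo "$binaries_dir/itb-initramfs.itb" ;;
            itb-bare)       echo "$binaries_dir/itb-bare.itb" ;;
            ext4-bare)      echo "$binaries_dir/ext4-bare.ext4" ;;
            *)              echo "unsupported: $artifact" >&2 ;;
        esac
    done
}

post_image ./output/build_arm_standalone/images itb-rootfs ext4-bare
```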
7. Changing the build
The next sections explain how you can change the build or its content.
7.1. Adding new packages
If a new package is needed, it can be added by following the description here: https://buildroot.org/downloads/manual/manual.html#adding-packages
7.2. Modifying a package
If a package needs to be modified, it is recommended to use the procedure described here: https://buildroot.org/downloads/manual/manual.html#_using_buildroot_during_development
As an example, we will change the ethtool package in the arm_standalone target.
First create a home for the ethtool sources:
$ mkdir -p ./source/ethtool
Locate the ethtool source archive in the ./dl folder:
$ ls ./dl/ethtool
ethtool-5.10.tar.xz
Unpack ethtool into its new home:
$ tar xf ./dl/ethtool/ethtool-5.10.tar.xz -C ./source/ethtool
Add an override file, local.mk, in the output folder for the arm_standalone target with the following content:
$ echo "ETHTOOL_OVERRIDE_SRCDIR = ./source/ethtool/ethtool-5.10" > ./output/build_arm_standalone/local.mk
It is possible to override more than one package in local.mk by adding a line for each package.
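For example, a local.mk that overrides two packages could look like this (libfoo is a hypothetical second package, shown only for illustration):

```make
# local.mk: one <PKG>_OVERRIDE_SRCDIR line per overridden package
ETHTOOL_OVERRIDE_SRCDIR = ./source/ethtool/ethtool-5.10
# hypothetical second package:
LIBFOO_OVERRIDE_SRCDIR = ./source/libfoo/libfoo-1.2
```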
Now do the necessary modifications in ./source/ethtool/ethtool-5.10/…
The ethtool package must be rebuilt after being modified:
$ cd ./output/build_arm_standalone
$ make ethtool-rebuild all
If the source folders are located outside of the BSP root and you use a Docker image, you will have to mount the source folder(s) inside the Docker image. See Using Docker for details on how to mount other folders inside the Docker image.
7.3. Hello World example
This is a step-by-step guide for adding a new CMake-based package called Hello World.
More details can be found here:
https://buildroot.org/downloads/manual/manual.html#_infrastructure_for_cmake_based_packages
The package is as simple as possible and contains only a CMake file and a .c file, but the steps are the same for more complicated packages.
First create a package directory:
$ mkdir ./external/package/mscc-hello-world
Create ./external/package/mscc-hello-world/Config.in with the following content:
config BR2_PACKAGE_MSCC_HELLO_WORLD
	bool "mscc-hello-world"
	help
	  Tool for configuring hello-world
Create ./external/package/mscc-hello-world/mscc-hello-world.mk with the following content:
MSCC_HELLO_WORLD_VERSION = 1.0
MSCC_HELLO_WORLD_SITE = http://www.hello-world.org/download
MSCC_HELLO_WORLD_SOURCE = mscc-hello-world-$(MSCC_HELLO_WORLD_VERSION).tar.gz
MSCC_HELLO_WORLD_INSTALL_STAGING = YES

$(eval $(cmake-package))
The macro assignments need not point to an existing repository at this point, but they must be present.
Include the new package directory at the bottom of the existing ./external/Config.in file:
source "$BR2_EXTERNAL_MSCC_PATH/package/mscc-hello-world/Config.in"
Create a new home for the package sources:
$ mkdir /home/alice/mscc-hello-world
Create /home/alice/mscc-hello-world/CMakeLists.txt with the following content:
cmake_minimum_required(VERSION 3.4)
PROJECT(HelloWorld)

add_executable(hello-world hello-world.c)
install(TARGETS hello-world DESTINATION bin)
Create /home/alice/mscc-hello-world/hello-world.c with the following content:
#include <stdio.h>

int main()
{
    printf("Hello, World!");
    return 0;
}
Create an override file for each target that must include the Hello World application.
This example uses the arm_standalone target.
Create ./output/build_arm_standalone/local.mk with the following content:
MSCC_HELLO_WORLD_OVERRIDE_SRCDIR = /home/alice/mscc-hello-world
Change to the target directory in order to use the following make commands:
$ cd ./output/build_arm_standalone
Enable mscc-hello-world (select External options and check mscc-hello-world):
$ make menuconfig
To build:
$ make
Modify source code and rebuild mscc-hello-world only:
$ make mscc-hello-world-rebuild all
If using Docker, the source directory must be mounted in the container, so change the MCHP_DOCKER_OPTIONS setting in the .docker.env file from this:
MCHP_DOCKER_OPTIONS="-v $opt:$opt --tmpfs /tmp:exec -e BLD_USER=$(id -un) -e BLD_UID=$(id -u)"
to this:
MCHP_DOCKER_OPTIONS="-v $opt:$opt --tmpfs /tmp:exec -e BLD_USER=$(id -un) -e BLD_UID=$(id -u) -v /home/alice/mscc-hello-world:/home/alice/mscc-hello-world"
The Hello World application is now part of the artifacts and can be tested on target.
When the application has been tested and added to a repository, the following steps must be done:

- Modify ./external/package/mscc-hello-world/mscc-hello-world.mk to point to the new repository.
- Remove the ./output/build_arm_standalone/local.mk file.
The new application has now been fully added to the Buildroot system.
7.4. Customizing the Toolchain
The toolchain contains cross-compilers, which run on one architecture and produce binaries for another. The toolchain is distributed in both binary and source format. To customize the toolchain, the sources are needed and must be downloaded.
See the Toolchain section for details on how to find the correct version of the toolchain.
This tells us that the toolchain version is 2024.02-105. Get and install the toolchain sources:
$ cd <workspace-to-install-sources>
$ wget http://mscc-ent-open-source.s3-eu-west-1.amazonaws.com/public_root/toolchain/mscc-toolchain-source-2024.02-105.tar.gz
$ tar -xf mscc-toolchain-source-2024.02-105.tar.gz
Before starting to customize the toolchain, make sure it compiles without any modifications. The build process is automated by the ./build.rb script.
Here is how to build all toolchains:
$ cd mscc-toolchain-source-2024.02-105
$ ./build.rb all
Many warnings are printed on the screen when compiling the toolchain. These come from third-party code and can be ignored.
For more options in the ./build.rb script, use ./build.rb --help.
If the build completes successfully, the resulting binary toolchain is stored in the ./output/artifact folder.
$ ls ./output/artifact
files.md5  mscc-toolchain-bin-2024.02-105.tar.gz  mscc-toolchain-logs-2024.02-105.tar.gz
The toolchain is now ready to be installed in /opt/mscc.
See the Toolchain section for details on how to extract and install it.
7.4.1. Changing the toolchain configuration
To alter e.g. the ARM toolchain:
$ cd ./output/build_arm_toolchain
$ make menuconfig
$ cd ../..
The toolchain package contains only the toolchain; if you need to add or remove non-toolchain related packages, refer to the Build section.
After this, rebuild the toolchain and pack everything so it can be used:

$ ./build.rb build --configs arm_toolchain_defconfig
$ ./build.rb pack
$ ./build.rb relocate
The new toolchain is now ready to be installed into /opt/mscc as described in the Toolchain section.